Background In fiscal year 2009, over 9 million beneficiaries were eligible to receive health care and mental health care through DOD’s TRICARE program. Under TRICARE, beneficiaries have choices among various benefit options and may obtain care from either military treatment facilities or civilian providers. Composition of TRICARE’s Beneficiary Population TRICARE beneficiaries fall into different categories: (1) active duty personnel and their dependents, (2) National Guard and Reserve servicemembers and their dependents, and (3) retirees and their dependents or survivors. Retirees and certain dependents and survivors who are entitled to Medicare Part A and enrolled in Part B, and who are generally age 65 and older, are eligible to obtain care under a separate program called TRICARE for Life. As shown in figure 1, active duty personnel and their dependents represented 32 percent of the beneficiary population, while National Guard and Reserve servicemembers and their dependents represented 14 percent. (Figure 1 shows the beneficiary population by category: National Guard and Reserve servicemembers and dependents; active duty personnel and dependents; and retirees and their dependents or survivors, generally under age 65.) TRICARE beneficiaries under 65 years of age who are eligible for Medicare Part A on the basis of disability or end-stage renal disease are eligible for TRICARE for Life if they enroll in Medicare Part B. TRICARE’s Benefit Options TRICARE provides its benefits through several options for its non-Medicare-eligible beneficiary population. These options vary according to TRICARE beneficiary enrollment requirements, the choices TRICARE beneficiaries have in selecting civilian and military treatment facility providers, and the amount TRICARE beneficiaries must contribute toward the cost of their care. Table 1 provides additional information about these options. 
Beneficiaries’ Use of TRICARE Claims data from fiscal years 2006 to 2009 show that the percentage of claims paid for using TRICARE Prime and TRICARE Extra has gradually increased, while the percentage of claims paid for using TRICARE Standard has declined 2 to 3 percentage points each year. (See fig. 2.) Moreover, in 2006 we reported that in fiscal year 2005 about 1.8 million beneficiaries who were eligible for TRICARE Standard or Extra elected not to use their benefits. In fiscal year 2009, about 926,000 beneficiaries who were eligible for TRICARE Standard, TRICARE Extra, and TRICARE Reserve Select elected not to use their benefits. Network and Nonnetwork Providers under TRICARE In order for network and nonnetwork civilian providers to be authorized to provide care and be reimbursed under TRICARE, they must be licensed by their state, accredited by a national organization (if one exists), and meet other standards of the medical community. Individual TRICARE-authorized civilian providers can include health care providers, such as primary care physicians and specialists, as well as mental health care providers, including clinical psychologists. There are two types of authorized civilian providers—network and nonnetwork providers. Network providers are TRICARE-authorized providers who enter a contractual agreement with a managed care support contractor to provide health care to TRICARE beneficiaries. By law, TRICARE maximum allowable reimbursement rates must generally mirror Medicare rates, but network providers may agree to accept lower reimbursements as a condition of network membership. However, network civilian providers are not obligated to accept all TRICARE beneficiaries seeking care. For example, a network civilian provider may decline to accept TRICARE beneficiaries as patients because the provider’s practice does not have sufficient capacity. 
Nonnetwork providers are TRICARE-authorized providers who do not have a contractual agreement with a managed care support contractor to provide care to TRICARE beneficiaries. Nonnetwork civilian providers have the option of charging up to 15 percent more than the TRICARE reimbursement rate for their services on a case-by-case basis. The beneficiary is responsible for paying the extra amount billed in addition to required co-payments. NDAA 2008 Requirements for Beneficiary and Provider Surveys to Determine Access to Care for Nonenrolled TRICARE Beneficiaries The NDAA 2008 directed DOD to determine the adequacy of the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE. Specifically, the mandate directed DOD to conduct surveys of beneficiaries and providers in at least 20 Prime Service Areas and 20 non-Prime Service Areas in each of fiscal years 2008 through 2011. The mandate also directed DOD to give a high priority to locations having high concentrations of Selected Reserve servicemembers. Additionally, the NDAA 2008 required DOD to consult with representatives of TRICARE beneficiaries and health care and mental health care providers to identify locations where nonenrolled beneficiaries have experienced significant access-to-care problems, and to survey health care and mental health care providers in these areas. The NDAA 2008 also required that specific types of information be requested in the surveys. For example, the mandate stated that the provider survey must include questions to determine whether providers are aware of TRICARE. Within DOD, TMA has primary responsibility for designing and implementing the beneficiary and provider surveys. 
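The nonnetwork balance-billing rule described above reduces to simple arithmetic; the sketch below uses hypothetical dollar amounts (the allowed rate and co-payment are illustrative, not figures from this report):

```python
def nonnetwork_patient_bill(allowed_rate, copay, excess_pct=0.15):
    """Sketch of nonnetwork balance billing: a nonnetwork provider may charge
    up to 15 percent above the TRICARE allowed rate, and the beneficiary owes
    that excess in addition to any required co-payment."""
    excess = allowed_rate * excess_pct  # up to 15% above the allowed rate
    return copay + excess

# Hypothetical $100 allowed rate with a $20 co-payment:
bill = nonnetwork_patient_bill(100.00, 20.00)  # beneficiary could owe up to $35
```

Under these illustrative assumptions, a beneficiary could owe up to $35 out of pocket on a $100 allowed charge if the nonnetwork provider bills the full 15 percent excess.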
Implementation of DOD’s 2008 Beneficiary and Provider Surveys Followed OMB Survey Standards and Generally Addressed Requirements Outlined in the NDAA 2008 In implementing the first year of the beneficiary and provider surveys, TMA followed the OMB standards for statistical surveys that we reviewed. However, TMA did identify an error in the geographic categorization of its participants for both surveys and has taken steps to correct it. In addition, TMA generally addressed the requirements outlined in the NDAA 2008 for both of its surveys but did not give a high priority to selecting areas with a high concentration of Selected Reserve servicemembers. Instead, for both surveys, TMA randomly selected geographic areas to produce results that can be generalized to the populations from which the survey samples were drawn. TMA plans to cover the entire United States at the end of the 4-year survey period, which will include any locations with higher concentrations of Selected Reserve servicemembers. TMA Followed the OMB Standards We Reviewed in Designing and Implementing Its Beneficiary and Provider Surveys In implementing its first round of beneficiary and provider surveys, TMA’s methodology for both of the multiyear surveys is consistent with the OMB standards for statistical surveys that we reviewed. (See app. I for our list of selected OMB standards.) These standards document the professional principles and practices that federal agencies are required to follow and the level of quality and effort expected in statistical activities. For example, OMB standards recommend that agencies develop a survey design that includes a methodology for identifying the target population. OMB standards also suggest that federal agencies ensure that the list of eligible survey participants representing the target population is evaluated for accuracy. 
In conducting its 2008 beneficiary and provider surveys, TMA representatives noted that they identified an error in the geographic categorization of eligible survey participants for both surveys. Specifically, TMA erroneously categorized some beneficiaries and providers as being located in 11 non-Prime Service Areas when in fact they were in Prime Service Areas. According to a TMA representative, this occurred as a result of a computer programming error. TMA identified this error after it began fielding the beneficiary survey in June 2008 but prior to fielding the provider survey, which began in December 2008. As a result, TMA surveyed approximately 9,000 fewer beneficiaries than intended in those 11 non-Prime Service Areas. TMA plans to correct this error in future years by surveying additional beneficiaries in the affected non-Prime Service Areas. TMA Generally Addressed Requirements Outlined in the Mandate for Its Beneficiary and Provider Surveys TMA generally addressed the requirements outlined in the mandate during the implementation of its 2008 beneficiary and provider surveys, but because of methodological considerations TMA used a different approach for its selection of survey areas. (See app. II for a more detailed description of DOD’s survey methodology.) Overall, the mandate outlined specific survey requirements, including the number and priority of areas to be surveyed each year, the content for each type of survey, and the use of benchmarks, which can be used to assess survey results. (See table 2.) According to a TMA official responsible for implementing the surveys, TMA did not give a high priority to areas where higher concentrations of Selected Reserve servicemembers live, as specified in the mandate, because it decided to randomly select the areas to be surveyed in order to produce results that can be generalized to the populations from which the survey samples are selected. 
Moreover, at the end of the 4-year survey period for the beneficiary and provider surveys, TMA will have surveyed all areas of the United States, thereby including any locations with a higher concentration of Selected Reserve servicemembers. A TMA official also told us that TMA conducted additional analyses of the 2008 beneficiary survey results for TRICARE Reserve Select beneficiaries to obtain additional information about the Selected Reserve servicemembers, and plans to do so in the remaining 3 years of the survey. TMA found that its 2008 results indicate that TRICARE access is no different for the Selected Reserve servicemembers than for other beneficiaries in the surveyed areas. A Higher Percentage of Nonenrolled Beneficiaries Reported Problems Accessing Certain Providers in Prime Service Areas, Though in General Beneficiaries Rated Their Satisfaction Similarly to Users of Other Health Plans The first year’s results of TMA’s 4-year beneficiary survey are representative of the nonenrolled beneficiary population in the combined Prime Service Areas and combined non-Prime Service Areas that were selected for the 2008 survey. Based on our analysis of these results, we estimated that a higher percentage of nonenrolled beneficiaries in surveyed Prime Service Areas experienced problems accessing care from network or nonnetwork primary care physicians or nurses than beneficiaries in surveyed non-Prime Service Areas. However, we could not determine whether beneficiary access problems were related to TRICARE network providers because the survey did not ask beneficiaries to link problems accessing care with network or nonnetwork providers. However, we did find that the types of access problems beneficiaries experienced, such as providers not accepting TRICARE payments, were similar in Prime Service Areas and non-Prime Service Areas. 
In addition, we found that nonenrolled beneficiaries in both the surveyed Prime Service Areas and the surveyed non-Prime Service Areas rated their health care satisfaction similarly to each other and to beneficiaries of commercial health care plans, but slightly lower than Medicare beneficiaries. Despite a Low Response Rate, TMA’s 2008 Beneficiary Survey Results Are Representative of the Nonenrolled Beneficiary Population for the Combined Areas Surveyed Despite an overall response rate of about 38 percent, TMA’s 2008 beneficiary survey results are representative of the nonenrolled beneficiary population for the combined Prime Service Areas and combined non-Prime Service Areas surveyed. (See fig. 3.) Because of the low response rate, TMA conducted a nonresponse analysis to determine whether the responses it received were representative of the surveyed population. The nonresponse analysis indicated that there were no differences in demographic characteristics and health coverage between beneficiary survey respondents and nonrespondents. As a result, the results can be generalized to the combined areas surveyed—that is, the survey produced results that are representative of all nonenrolled beneficiaries in the 20 surveyed Prime Service Areas and all nonenrolled beneficiaries in the 20 surveyed non-Prime Service Areas. 2008 Survey Results Indicate That a Higher Percentage of Nonenrolled Beneficiaries in Prime Service Areas Experienced Problems Accessing Care from Primary Care Physicians or Nurses Than Those in Non-Prime Service Areas Based on our analysis of the 2008 survey results, we estimated that a higher percentage of nonenrolled beneficiaries in surveyed Prime Service Areas experienced problems accessing care from network or nonnetwork primary care physicians or nurses than nonenrolled beneficiaries in surveyed non-Prime Service Areas. 
Specifically, about 30 percent of beneficiaries in Prime Service Areas experienced problems finding a civilian primary care physician or nurse, compared to about 24 percent of beneficiaries in non-Prime Service Areas. (See table 3.) While we found differences in access-to-care problems between nonenrolled beneficiaries in the surveyed Prime Service Areas and non-Prime Service Areas for other types of providers, these differences were not statistically significant at the 95 percent confidence level. We could not determine the extent to which beneficiaries’ access problems were related to TRICARE network or nonnetwork providers because the 2008 beneficiary survey did not ask beneficiaries to link their problems accessing care with network or nonnetwork providers. However, in our analysis of the 2008 beneficiary survey results, we found that the specific types of access problems experienced by beneficiaries in Prime Service Areas and non-Prime Service Areas are similar. (See table 4.) For example, we found that the problem most commonly reported by nonenrolled beneficiaries in both Prime and non-Prime Service Areas, regardless of the type of provider, was that their provider was not accepting TRICARE payments. Other commonly reported reasons varied by provider type. Nonenrolled Beneficiaries in Prime Service Areas and Non-Prime Service Areas Surveyed in 2008 Rated Satisfaction with Their Health Care Similarly to Each Other and to Beneficiaries of Commercial Health Care Plans Despite differences in the percentage of beneficiaries in surveyed Prime and non-Prime Service Areas reporting problems accessing care from primary care physicians or nurses, our analysis showed that nonenrolled beneficiaries’ ratings for several categories of health care were similar in the surveyed Prime Service Areas and non-Prime Service Areas. 
Specifically, our analysis of beneficiaries’ ratings for four categories of health care—their satisfaction with providers of primary care and specialty care, their health care, and their health plan—indicates no statistically significant difference between beneficiaries in surveyed Prime Service Areas and non-Prime Service Areas. For example, we estimated that 75 percent of beneficiaries in surveyed Prime Service Areas and 74 percent of beneficiaries in surveyed non-Prime Service Areas rated their health plan “7” or higher on a 0 to 10 scale (with 0 being the worst possible). Estimated ratings for nonenrolled beneficiaries in surveyed areas are also similar to the estimated ratings of beneficiaries in commercial health plans, based on data we analyzed from HHS’s 2008 Consumer Assessment of Healthcare Providers and Systems survey. Specifically, in all four satisfaction categories there are no statistically significant differences in the estimated percentage of beneficiaries who rated their satisfaction “7” or higher. (See fig. 4.) However, estimated ratings for nonenrolled beneficiaries in surveyed areas were slightly lower than estimated ratings of Medicare beneficiaries across three of the satisfaction categories—primary care physician or nurse, specialist physician, and health plan. 2008 Provider Survey Results Are Not Representative of All Providers in Surveyed Areas but Provide Limited Information That Indicates Differences among Respondents’ Awareness and Acceptance of TRICARE Although the first year’s results of TMA’s 4-year provider survey are not representative of all providers in the areas surveyed, the results we analyzed do provide information about access to care based on the specific views of the respondents. According to a TMA official, generalizability of provider survey results to the entire country will likely be possible at the end of the 4-year survey period. 
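The statistical-significance statements above rest on standard comparisons of two sample proportions at the 95 percent confidence level; a minimal sketch of such a test follows (the per-group sample sizes are hypothetical, not the survey’s actual counts):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled estimate under the null hypothesis of no difference."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 30% vs. 24% reporting access problems (hypothetical n of 3,000 per group):
z_access = two_proportion_z(0.30, 3000, 0.24, 3000)  # |z| > 1.96 -> significant
# 75% vs. 74% rating their health plan "7" or higher:
z_rating = two_proportion_z(0.75, 3000, 0.74, 3000)  # |z| < 1.96 -> not significant
```

With these illustrative sample sizes, the 30-versus-24 gap clears the 1.96 threshold for 95 percent confidence while the 75-versus-74 gap does not, consistent with the pattern of findings described above.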
Our analysis of the 2008 provider survey results indicates that a lower percentage of respondents from Prime Service Areas were aware of TRICARE and were accepting new TRICARE patients than providers who responded from non-Prime Service Areas. The survey results also indicate that respondents from the additional areas TMA surveyed reported levels of awareness and acceptance of TRICARE that were similar to respondents in non-Prime Service Areas. Additionally, there were differences between the responding physicians (primary care physicians and specialists) and mental health providers (psychiatrists, certified clinical social workers, clinical psychologists, and others) regarding their awareness and acceptance of TRICARE. 2008 Provider Survey Results Are Not Representative of the Provider Population in the Areas TMA Surveyed Unlike the 2008 beneficiary survey, the results of the 2008 provider survey are not representative of all physicians and mental health providers in the areas TMA surveyed. The 2008 provider survey was administered in the same 20 Prime Service Areas and 20 non-Prime Service Areas as the beneficiary survey, as well as 21 additional locations that were identified as having access-to-care problems, with an overall response rate of about 45 percent. (See fig. 5.) Because of the low response rate, TMA conducted a nonresponse analysis, and the results of this analysis indicated that there were differences between those who responded to the provider survey and those who did not. Specifically, those who did not respond to the 2008 provider survey were less likely to be aware of TRICARE and less likely to accept TRICARE reimbursement as payment for services. As a result of these differences—even though the survey sample was randomly selected—the survey results cannot be generalized to all physicians and mental health providers in the areas surveyed and can be presented only as the specific views of the respondents. 
Similarly, because the 2008 survey results cannot be generalized, we did not compare them with the 2008 beneficiary survey results. A TMA official stated that because more responses will have been obtained by the end of the 4-year survey, generalizability of provider survey results to the entire country will likely be possible. Moreover, the TMA official noted that TMA has decided to redesign the method of selecting mental health providers in the 2009 survey to increase the number of responses and the likelihood that survey results could be generalized. 2008 Provider Survey Results Indicate Differences among Respondents’ Awareness and Acceptance of TRICARE Our review of the 2008 provider survey results indicated differences in awareness and acceptance of TRICARE among respondents in Prime Service Areas, non-Prime Service Areas, and the additional Hospital Service Areas TMA surveyed. (See table 5.) Specifically, a lower percentage of responding providers—physicians and nonphysician mental health providers—from Prime Service Areas were aware of the TRICARE program and were accepting new TRICARE patients, if they were accepting any new patients or any new Medicare patients, than providers who responded from non-Prime Service Areas or Hospital Service Areas. For example, 64 percent of the respondents in the surveyed Prime Service Areas who reported that they are accepting any new patients reported that they would accept nonenrolled TRICARE beneficiaries as new patients, compared to 76 percent of respondents in the surveyed non-Prime Service Areas and 72 percent of respondents in the surveyed Hospital Service Areas. Additionally, survey results indicate that in Hospital Service Areas, respondents reported awareness and acceptance of TRICARE that was similar to that of respondents in non-Prime Service Areas. 
The reason most often cited by respondents in both Prime Service Areas and non-Prime Service Areas for not accepting nonenrolled beneficiaries as new patients, if they were accepting any new patients at all, was that they were not aware of the TRICARE program. Other reasons included concerns about low reimbursement rates and that the provider did not participate in TRICARE’s provider network. Respondents in Hospital Service Areas reported similar reasons, such as concerns about low reimbursement rates and not being aware of the TRICARE program, with the most cited reason being that they were not participating in TRICARE’s provider network. The physicians and mental health providers who responded to the survey differed in their awareness and acceptance of TRICARE. Specifically, a higher percentage of responding physicians reported awareness and acceptance of TRICARE than the mental health providers who responded. (See table 6.) For example, 81 percent of the physicians who responded reported that they would accept new TRICARE patients, if they were accepting any new patients at all, compared to 50 percent of the mental health providers who responded. Agency Comments We received comments on a draft of this report from DOD. (See app. VI.) DOD concurred with our overall findings and provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. Appendix I: Selected Office of Management and Budget Standards for Statistical Surveys The National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008) directed the Department of Defense (DOD) to determine the adequacy of the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE, DOD’s health care program. 
The NDAA 2008 also directed us to review the processes, procedures, and analyses used by DOD to determine the adequacy of the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients. To evaluate the methodology DOD used to implement the beneficiary and provider surveys, we reviewed the Office of Management and Budget’s (OMB) Standards and Guidelines for Statistical Surveys (2006) to identify key aspects and best practices of statistical survey methodology that result in sound survey design and implementation. We focused our evaluation on standards that address, among other things, designing a survey, developing sampling frames, collecting survey data, and analyzing survey response rates. Table 7 provides a description of these standards. Appendix II: DOD’s Methodology for the 2008 Beneficiary and Provider Surveys The NDAA 2008 directed DOD to determine the adequacy of the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE, DOD’s health care program. For the purpose of this report, we refer to beneficiaries who are not enrolled in TRICARE Prime—that is, those who use TRICARE Standard, TRICARE Extra, or TRICARE Reserve Select—as nonenrolled beneficiaries. The mandate also included specific requirements related to the number and priority of areas to be surveyed, including the populations to be surveyed each year, the content for each type of survey, and the use of benchmarks. Within DOD, the TRICARE Management Activity (TMA), which oversees the TRICARE program, had responsibility for designing and implementing the beneficiary and provider surveys. The following information describes TMA’s methodology, including its actions to address the requirements for each of the following: (1) survey area, (2) sample selection, (3) survey content, and (4) the establishment of benchmarks. 
It also provides information on TMA’s analyses of its 2008 beneficiary and provider surveys. Beneficiary and Provider Survey Area Selection The NDAA 2008 specified that DOD survey beneficiaries and providers in at least 20 TRICARE Prime Service Areas, and 20 geographic areas in which TRICARE Prime is not offered—referred to as non-Prime Service Areas—each fiscal year, 2008 through 2011. The NDAA 2008 also required DOD to consult with representatives of TRICARE beneficiaries and health care and mental health care providers to identify locations where nonenrolled beneficiaries have experienced significant access-to-care problems, and give a high priority to surveying health care and mental health care providers in these areas. Additionally, the NDAA 2008 required DOD to give a high priority to surveying areas in which higher concentrations of Selected Reserve servicemembers live. In designing the 2008 beneficiary and provider surveys, TMA defined 80 Prime Service Areas and 80 non-Prime Service Areas that will allow it to survey the entire country over a 4-year period and to develop estimates of access to health care and mental health care at service area, state, and national levels. TMA identified the 80 Prime Service Areas by collecting zip codes where TRICARE Prime was offered from officials within each of the three TRICARE Regional Offices. TMA grouped these zip codes into 80 nonoverlapping areas so that each area had roughly the same number of TRICARE-eligible beneficiaries. Because non-Prime Service Areas had not previously been defined, TMA sought to define them by grouping all zip codes not in Prime Service Areas into one large area using Hospital Referral Regions, which are groupings of Hospital Service Areas. TMA divided the large area into 80 non-Prime Service Areas so that each area had roughly the same number of TRICARE-eligible beneficiaries. 
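TMA’s grouping of zip codes into 80 nonoverlapping areas with roughly equal numbers of TRICARE-eligible beneficiaries can be approximated with a simple greedy pass over per-zip beneficiary counts; this sketch ignores the geographic-contiguity and Hospital Referral Region considerations TMA actually applied:

```python
def group_into_areas(zip_counts, n_areas=80):
    """Greedy sketch: partition an ordered list of (zip_code, beneficiary_count)
    pairs into n_areas groups with roughly equal beneficiary totals."""
    total = sum(count for _, count in zip_counts)
    target = total / n_areas          # beneficiaries per area, ideally
    areas, current, running = [], [], 0
    for zip_code, count in zip_counts:
        current.append(zip_code)
        running += count
        if running >= target and len(areas) < n_areas - 1:
            areas.append(current)     # close out this area once it reaches the target
            current, running = [], 0
    areas.append(current)             # last area takes whatever remains
    return areas
```

A real implementation would balance on geography as well as counts; the point here is only the equal-size objective described in the report.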
To identify locations where beneficiaries and health care and mental health care providers have identified significant levels of access-to-care problems under TRICARE Standard and Extra, TMA spoke with groups representing beneficiaries and health care and mental health care providers as well as TRICARE Regional Offices. These groups suggested cities and towns where access should be measured, and Hospital Service Areas corresponding to each city and town were then identified. Based on the groups’ recommendations, a list was created and sorted in priority order, resulting in 21 Hospital Service Areas being included in the 2008 provider survey. Additionally, TMA plans to include these 21 Hospital Service Areas in its 2009 beneficiary survey along with additional areas that are identified for 2009. Although the NDAA 2008 required DOD to give a high priority to surveying areas in which higher concentrations of Selected Reserve servicemembers live, TMA decided to randomly select areas for the surveys in order to produce results that could be generalized to the populations in the areas surveyed. Selection of Beneficiary and Provider Survey Sample TMA selected its sample of beneficiaries who met its criteria for inclusion in the beneficiary survey using DOD’s Defense Enrollment Eligibility Reporting System (DEERS), a database of DOD beneficiaries who may be eligible for military health benefits. TMA determined a beneficiary’s eligibility to be included in the 2008 beneficiary survey if DEERS indicated that the individual met five criteria: (1) eligible for military health care benefits as of the date of the sample file extract; (2) age 18 years old or older; (3) not an active duty member of the military; (4) residing in one of the 20 randomly selected Prime Service Areas or 20 randomly selected non-Prime Service Areas; and (5) a user of TRICARE Reserve Select, or not enrolled in TRICARE Prime. 
From this database, TMA randomly sampled about 1,000 beneficiaries from each Prime Service Area and non-Prime Service Area—a sample size that would achieve TMA’s desired sample error. Specifically, TMA surveyed 48,548 TRICARE beneficiaries representing active duty, retired, and Reserve servicemembers, including the Selected Reserve. TMA began mailing the beneficiary survey in June 2008. After receiving the returned surveys, TMA identified the responses that it considered complete and eligible based on whether the beneficiary had answered at least half of TMA’s identified “key” questions and answered that he or she used “TRICARE Extra or Standard” or “TRICARE Reserve Select” in response to the following question: “Which health plan did you use for all or most of your health care in the last 12 months?” TMA selected the provider sample within the same 20 Prime Service Areas and 20 non-Prime Service Areas that had been randomly selected for the 2008 beneficiary survey. In addition, TMA mailed surveys to physicians and mental health providers in the 21 Hospital Service Areas identified by beneficiary and provider groups as having significant levels of access-to-care problems under TRICARE Standard and Extra. TMA used the American Medical Association Physician Masterfile (Masterfile) to select a sample of 20,030 physicians who were licensed office-based civilian medical doctors or licensed civilian doctors of osteopathy within the specified locations who were engaged in more than 20 hours of patient care each week. The Masterfile is a database of physicians in the United States—Doctors of Medicine and Doctors of Osteopathic Medicine—that includes data on all physicians who have the necessary educational and credentialing requirements. The Masterfile did not differentiate between TRICARE’s network and nonnetwork civilian providers, which TMA deemed acceptable to avoid any potential bias in TMA’s sample selection. 
As such, TMA selected this file because it is widely recognized as one of the best commercially available lists of providers in the United States and it contains over 940,000 physicians along with their addresses; phone numbers; and information on practice characteristics, such as their specialty. According to TMA, the American Medical Association updates physicians’ addresses monthly and other elements through a rotating census methodology involving approximately one-third of the physician population each year. Although the Masterfile is considered to contain most providers, deficiencies in coverage and inaccuracies in detail remain. Therefore, TMA attempted to update providers’ addresses and phone numbers and to ensure that providers were eligible for the survey by also using state licensing databases, local commercial lists, and professional society and association lists. For its mental health provider sample, TMA selected a sample of 20,386 mental health providers from two sources: the National Plan and Provider Enumeration System database maintained by the Centers for Medicare & Medicaid Services, and from LISTS, Inc., a list of names with contact information assembled from state licensing boards. According to TMA, it selected these sources for mental health providers because they have been identified as the most comprehensive databases for these health care providers. TMA did not include all physician specialist types, such as epidemiologists and pathologists, in its survey. From these data sets, TMA planned to randomly sample about 800 providers (400 each of physicians and mental health providers) from each Prime Service Area, non-Prime Service Area, and Hospital Service Area—a sample size that would achieve TMA’s desired sample error. In those instances where there were not 800 providers in a single area, TMA selected all of the providers in that area to receive surveys. TMA began mailing the provider survey in December 2008. 
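The “desired sample error” behind the roughly 1,000-beneficiary and 800-provider samples per area is consistent with the conventional margin-of-error formula for an estimated proportion; a rough sketch follows (the 95 percent level and worst-case p = 0.5 are standard assumptions, not figures from the report):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate half-width of a 95% confidence interval for a proportion
    estimated from a simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# About 1,000 completed responses per area keeps the margin near +/-3 points.
me_1000 = margin_of_error(1000)
```

Under these assumptions, roughly 1,000 responses per area yields a margin of about plus or minus 3 percentage points, which would explain sample sizes of this order.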
Upon receipt of the returned surveys, TMA identified the responses that it considered complete and eligible based on the following criteria for respondents: (1) if the provider answered “yes” to the questions that asked whether the provider offers care in an office-based location or private practice; (2) for the nonphysician mental health survey, if the provider responded that he or she was in one of the six TRICARE participating specialties—certified clinical social worker, certified psychiatric nurse specialist, clinical psychologist, certified marriage and family therapist, pastoral counselor, or mental health counselor; and (3) if the provider had completed three key questions on the physician survey instrument or three key questions on the nonphysician mental health provider survey instrument. Beneficiary and Provider Survey Content The NDAA 2008 required that the beneficiary survey include questions to determine whether TRICARE Standard and Extra beneficiaries have had difficulties finding physicians and mental health providers willing to provide services under TRICARE Standard or TRICARE Extra. TMA’s beneficiary survey included 91 questions that address, among other things, health care plans used; perceived access to care from a personal doctor, nurse, or specialist; the need for treatment or counseling; and ratings of health plans. TMA based some of its 2008 beneficiary survey questions on those included in the Department of Health and Human Services’ 2006 Consumer Assessment of Healthcare Providers and Systems, a national survey of beneficiaries of commercial health insurance, Medicare, Medicaid, and the Children’s Health Insurance Program. When TMA began mailing the beneficiary survey, it sent a combined cover letter and questionnaire to all beneficiaries in its sample—with the option of having beneficiaries complete the survey by mail or Internet. (See app. III for a copy of the 2008 beneficiary survey instrument.)
The cover letter provided information on the options available for completing the survey, as well as instructions for completing the survey by Internet. If the beneficiary did not respond to the mailed questionnaire, TMA mailed a second combined cover letter and questionnaire 4 weeks later encouraging the beneficiary to complete the survey. For the provider survey, the NDAA 2008 required questions to determine (1) whether the provider is aware of TRICARE; (2) the percentage of the provider’s current patient population that uses any form of TRICARE; (3) whether the provider accepts Medicare patients for health care and mental health care; and (4) if the provider accepts Medicare patients, whether the provider would accept new Medicare patients. TMA obtained clearance for its provider survey from OMB as required under the Paperwork Reduction Act. Subsequent to this review, OMB approved an 11-item questionnaire for physicians (including psychiatrists) and a 12-item questionnaire for nonphysician mental health providers to be administered in fiscal year 2008. (See app. IV for a copy of the 2008 provider survey instruments.) The mental health providers’ version of the survey includes an additional question about the type of mental health care the provider practices. When TMA began mailing the provider survey, it sent a combined cover letter and questionnaire to each provider in the sample. The providers had the option of completing the survey by mail, fax, or Internet. The cover letter provided information on the options available for completing the survey, as well as instructions for completing the survey by Internet. If the provider did not respond to the mailed questionnaire, TMA mailed a second combined cover letter and questionnaire about 4 weeks later encouraging the provider to complete the survey.
Beneficiary and Provider Survey Benchmarks In accordance with the NDAA 2008, TMA identified benchmarks for analyzing the results of the beneficiary and provider surveys. Because TMA based some of its 2008 beneficiary survey questions on those included in the Department of Health and Human Services’ 2006 Consumer Assessment of Healthcare Providers and Systems survey, it was able to compare the results of those questions with its 2008 beneficiary survey results. To benchmark its provider survey, TMA compared the results of its 2008 survey with the results of its 2005, 2006, and 2007 provider surveys. A TMA official noted that TMA was unaware of any external benchmarks that would be applicable to its surveys of providers. Analyses of Beneficiary and Provider Survey Results In analyzing the results of the beneficiary survey, TMA representatives conducted a nonresponse analysis because the overall response rate to the survey was about 38 percent. To conduct this analysis, TMA did the following: (1) compared key beneficiary demographic characteristics of respondents to those of nonrespondents (e.g., beneficiary gender and age) and (2) interviewed a sample of 400 beneficiaries who did not respond to the original survey or the follow-up mailing and compared their responses with those of the original survey respondents. The results of TMA’s nonresponse analysis indicated no difference in demographic characteristics and health coverage between beneficiary survey respondents and nonrespondents within the combined Prime Service Areas and combined non-Prime Service Areas surveyed in fiscal year 2008. Therefore, TMA concluded that the survey respondents were representative of the combined Prime Service Areas and combined non-Prime Service Areas surveyed, and the results of the survey can be generalized to the population from which the sample was chosen.
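Generalizing from an area sample to the population it was drawn from, as TMA concluded was appropriate for the beneficiary survey, typically involves weighting each respondent by the number of beneficiaries it represents. The sketch below illustrates that kind of design weighting; it is our illustration, not TMA's code, and all area names and counts are hypothetical.

```python
# Illustrative sketch of design weighting: each respondent in a sampled area
# carries a weight of (area eligible population) / (area respondents), so
# weighted totals reflect area population sizes. All figures are hypothetical.

areas = {
    # area name: (eligible_population, survey_respondents)
    "Prime Service Area A": (12_000, 380),
    "Non-Prime Service Area B": (4_500, 410),
}

def design_weights(areas):
    """One weight per area: how many beneficiaries each respondent represents."""
    return {name: pop / resp for name, (pop, resp) in areas.items()}

weights = design_weights(areas)

# Weighted estimate of the share of beneficiaries reporting an access problem,
# given hypothetical counts of respondents reporting a problem in each area.
reported_problem = {"Prime Service Area A": 114, "Non-Prime Service Area B": 98}
weighted_with_problem = sum(weights[a] * reported_problem[a] for a in areas)
total_population = sum(pop for pop, _ in areas.values())
weighted_share = weighted_with_problem / total_population
```

Without such weights, areas with high response rates would be overrepresented in any estimate pooled across areas.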
TMA weighted each response so that the sampled beneficiaries represented the population in terms of size for the respective Prime Service Area or non-Prime Service Area from which they were selected. In analyzing the results of the provider survey, TMA conducted a nonresponse analysis because the overall response rate to the survey was about 45 percent. To conduct this analysis, TMA did the following: (1) compared key provider demographic characteristics of respondents to those of nonrespondents (for example, provider type and location) and (2) interviewed a sample of 247 providers (140 physicians and 107 mental health providers) who did not respond to the original survey, follow-up mailing, or follow-up telephone calls and compared their responses with those of the original survey respondents. The results of TMA’s nonresponse analysis indicated that there were differences between respondents and those who did not respond to the original 2008 provider survey. Specifically, among both types of providers (physicians and mental health providers), nonrespondents were less likely to be aware of TRICARE and less likely to accept TRICARE as a form of payment for services. Additionally, nonrespondents were less likely to be accepting new patients. Therefore, the survey results cannot be generalized to the population from which the sample was chosen and can only be presented in terms of those civilian providers who responded to the survey. Appendix III: Beneficiary Survey Instrument The NDAA 2008 directed DOD to determine the adequacy of the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE, DOD’s health care program. For the purpose of this report, we refer to beneficiaries who are not enrolled in TRICARE Prime—that is, those who use TRICARE Standard, TRICARE Extra, or TRICARE Reserve Select—as nonenrolled beneficiaries.
Specifically, the NDAA 2008 specified that DOD conduct surveys of beneficiaries each fiscal year, 2008 through 2011. The NDAA 2008 also required that the beneficiary survey include questions seeking information from nonenrolled beneficiaries to determine whether they have had difficulties finding health care and mental health care providers willing to accept them as patients. Following is the actual survey instrument that DOD used to obtain information from nonenrolled beneficiaries. Appendix IV: Survey Instruments for Health Care and Mental Health Care Providers The NDAA 2008 directed DOD to determine the adequacy of the number of health care and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE, DOD’s health care program. For the purpose of this report, we refer to beneficiaries who are not enrolled in TRICARE Prime—that is, those who use TRICARE Standard, TRICARE Extra, or TRICARE Reserve Select—as nonenrolled beneficiaries. Specifically, the NDAA 2008 directed DOD to survey providers each fiscal year, 2008 through 2011. The NDAA 2008 also required that the provider survey include questions seeking information to determine (1) whether the provider is aware of the TRICARE program; (2) the percentage of the provider’s current patient population that uses any form of TRICARE; (3) whether the provider accepts Medicare patients; and (4) if the provider accepts Medicare patients, whether the provider would accept new Medicare patients. DOD implemented two versions of its provider survey, one for physicians, including psychiatrists, and one for nonphysician mental health providers. Following are the actual survey instruments that DOD used to obtain information from physicians and nonphysician mental health care providers. 
Appendix V: Areas Included in the Fiscal Year 2008 Beneficiary and Provider Surveys The NDAA 2008 directed DOD to determine the adequacy of the number of health care providers and mental health care providers that currently accept nonenrolled beneficiaries as patients under TRICARE, DOD’s health care program. For the purpose of this report, we refer to beneficiaries who are not enrolled in TRICARE Prime—that is, those who use TRICARE Standard, TRICARE Extra, or TRICARE Reserve Select—as nonenrolled beneficiaries. The NDAA 2008 specified that DOD conduct surveys of TRICARE beneficiaries and health care and mental health care providers in 20 TRICARE Prime Service Areas and in 20 areas in which TRICARE Prime is not offered—referred to as non-Prime Service Areas—for each fiscal year, 2008 through 2011. Additionally, the NDAA 2008 required DOD to consult with representatives of TRICARE beneficiaries and health care and mental health care providers to identify locations where nonenrolled beneficiaries have experienced significant access-to-care problems, and give a high priority to surveying health care and mental health care providers in these areas. For the 2008 beneficiary and provider surveys, DOD selected 20 Prime Service Areas and 20 non-Prime Service Areas to determine the adequacy of the number of health care providers and mental health care providers who currently accept nonenrolled TRICARE beneficiaries as patients. For the 2008 provider survey, DOD also surveyed 21 additional areas identified by beneficiary and provider groups where nonenrolled beneficiaries are experiencing significant levels of access-to-care problems—called Hospital Service Areas—to determine the adequacy of access to care in these areas. DOD’s selected Prime Service Areas and non-Prime Service Areas for 2008 are presented in table 8 and table 9, respectively.
Table 10 lists those locations identified by representatives of TRICARE beneficiaries and health care and mental health care providers as having significant levels of access-to-care problems, which were included in the 2008 provider survey. Appendix VI: Comments from the Department of Defense Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the contact named above, Bonnie Anderson, Assistant Director; Martha Kelly, Assistant Director; Susannah Bloch; Peter Mangano; Jeff Mayhew; Lisa Motley; Jessica C. Smith; C. Jenna Sondhelm; and Suzanne Worth made key contributions to this report.
The Department of Defense (DOD) provides health care and mental health care through its TRICARE program. Under TRICARE, beneficiaries may obtain care through TRICARE Prime, an option that includes the use of civilian provider networks and requires enrollment. TRICARE beneficiaries who do not enroll in this option may obtain care from nonnetwork providers through TRICARE Standard, or from network providers through TRICARE Extra. In addition, qualified National Guard and Reserve servicemembers may purchase TRICARE Reserve Select, a plan whose care options are similar to those of TRICARE Standard and TRICARE Extra. We refer to servicemembers who use TRICARE Standard, TRICARE Extra, or TRICARE Reserve Select as nonenrolled beneficiaries. The National Defense Authorization Act for Fiscal Year 2008 directed GAO to analyze the adequacy of DOD's surveys of TRICARE beneficiaries and providers and report what the surveys' results indicate about access to care for nonenrolled beneficiaries. To do so, GAO evaluated the surveys' methodology by interviewing DOD officials and reviewing relevant documentation, including the Office of Management and Budget's (OMB) survey standards. GAO also assessed the surveys' results by interviewing DOD officials, obtaining relevant documentation, and analyzing the response rates and data for both surveys. DOD's implementation of beneficiary and provider surveys for 2008, the first of a 4-year survey effort, followed the OMB survey standards for survey design, data collection, and data accuracy. In addition, DOD generally addressed the survey requirements outlined in the mandate in implementing its 2008 beneficiary and provider surveys but did not give a high priority to selecting geographic areas with a high concentration of Selected Reserve servicemembers. Instead, for both of its surveys, DOD randomly selected areas to produce results that can be generalized to the populations from which the survey samples were drawn. 
DOD plans to cover the entire United States at the end of the 4-year survey period, which will include any locations with higher concentrations of Selected Reserve servicemembers. In its analysis of the 2008 beneficiary survey data, GAO estimated that a higher percentage of nonenrolled beneficiaries in surveyed areas where TRICARE Prime is offered (Prime Service Areas) experienced problems accessing care from network or nonnetwork primary care providers than beneficiaries in surveyed areas where TRICARE Prime is not offered (non-Prime Service Areas)--30 percent and 24 percent, respectively. GAO also found that beneficiaries in the surveyed areas most often experienced access problems related to providers' willingness to accept TRICARE payments, regardless of whether they lived in a Prime or non-Prime Service Area. Additionally, GAO's comparison of this survey data to related data from a 2008 Department of Health and Human Services' survey showed that beneficiaries in the surveyed Prime and non-Prime Service Areas rated their health care satisfaction similarly to each other and to beneficiaries of commercial health care plans, but slightly lower than Medicare beneficiaries. GAO found that the results for the 2008 provider survey are not representative of all physicians and mental health providers in the geographic areas surveyed, but the results do provide information about access to care based on the specific views of the respondents. According to a DOD official, generalizability of provider survey results to the entire country will likely be possible at the end of the 4-year survey period. GAO's review of the 2008 provider survey results indicates that a lower percentage of respondents in Prime Service Areas reported awareness and acceptance of TRICARE than respondents in non-Prime Service Areas. 
Additionally, there were differences between responding physicians and responding mental health providers, such as psychiatrists and clinical psychologists, regarding their awareness and acceptance of TRICARE. For example, 81 percent of physicians who responded reported that they would accept new TRICARE patients, if they were accepting any new patients at all, compared to 50 percent of mental health providers who responded. In commenting on a draft of this report, DOD concurred with GAO's overall findings and provided technical comments, which GAO incorporated as appropriate.
Background The mission of IRS, a component of the Department of the Treasury (Treasury), is to provide America’s taxpayers top quality service by helping them understand and meet their tax responsibilities and enforcing the federal tax laws with integrity and fairness to all. In carrying out its mission, IRS annually collects over $2 trillion in taxes from millions of individual taxpayers and numerous other types of taxpayers and manages the distribution of more than $300 billion in refunds. To guide its future direction, the agency has two strategic goals: (1) deliver high quality and timely service to reduce taxpayer burden and encourage voluntary compliance and (2) effectively enforce the law to ensure compliance with tax responsibilities and combat fraud. IT plays a critical role in enabling IRS to carry out its mission and responsibilities. For example, the agency relies on information systems to process tax returns, account for tax revenues collected, send bills for taxes owed, issue refunds, assist in the selection of tax returns for audit, and provide telecommunications services for all business activities, including the public’s toll-free access to tax information. IRS’s fiscal year 2014 budget was $11.3 billion. Of this amount, IRS expected to spend about $2.4 billion on IT investments. IRS expected to fund 19 major investments at a cost of about $1.7 billion, or 71 percent, of the total IT request, and 135 non-major investments at a cost of about $700 million, or 29 percent, of the total IT request. For IRS, a major investment is one that costs $10 million in either the current year or budget year, or $50 million over the 5-year period extending from the prior year through 2 years after the budget year. 
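IRS's dollar thresholds for a "major" investment can be expressed as a simple predicate. The sketch below is our illustration of that definition, not IRS code; the function name and the reading of the thresholds as minimums are our assumptions.

```python
def is_major_investment(current_year_cost: float,
                        budget_year_cost: float,
                        five_year_cost: float) -> bool:
    """Apply the dollar thresholds in IRS's definition of a major IT
    investment: $10 million in either the current year or the budget year,
    or $50 million over the 5-year period extending from the prior year
    through 2 years after the budget year. Amounts are in dollars; treating
    the thresholds as minimums is our assumption."""
    return (current_year_cost >= 10_000_000
            or budget_year_cost >= 10_000_000
            or five_year_cost >= 50_000_000)
```

For example, an investment costing $12 million in the budget year is major regardless of its 5-year total, while one costing $20 million spread evenly over 5 years is not.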
Table 1 provides high-level descriptions of IRS’s 19 major IT investments and appendix II provides detailed profiles of 7 investments critical to IRS’s mission for which we performed in-depth reviews in recent audits (ACA, CADE 2, e-Services, IRDM, IRS.Gov, Modernized e-File, RRP). IRS Is Required to Report Quarterly to Congress on the Status of Its Major IT Investments The conference report accompanying the Consolidated Appropriations Act, 2012, directed IRS to submit quarterly reports on the cost and schedule performance of its major IT investments to the Committees on Appropriations and GAO no later than mid-April 2012. These quarterly reports are to include detailed information on selected investments, including their purpose and life-cycle stage, reasons for cost and schedule variances, risks and mitigation strategies, expected developmental milestones to be achieved, and costs to be incurred in the next quarter. IRS’s current reporting provides detailed information on eight investments, including six major investments that we have included in our reviews: CADE 2, e-Services, IRDM, IRS.Gov, MeF, and RRP. GAO and the Treasury Inspector General for Tax Administration Have Reported on IRS’s Major IT Investments GAO and the Treasury Inspector General for Tax Administration (TIGTA) have previously reported on IRS’s major IT investments. We reported in June 2012 that while IRS reported on the cost and schedule of its major IT investments and provided chief information officer ratings for them, the agency did not have a quantitative measure of scope—a measure that shows functionality delivered. We noted that having such a measure is a good practice, as it provides information about whether an investment has delivered the functionality that was paid for. We recommended that the Commissioner of Internal Revenue develop a quantitative measure of scope, at a minimum for its major IT investments, to have more complete information on the performance of these investments.
IRS agreed with our recommendation at the time we made it. In March 2014, IRS reported that it had practices and processes in place that addressed this recommendation, including quarterly reports to Congress and a baseline change request process. However, we did not believe these practices addressed the recommendation, as neither approach included a quantitative measure. For this reason, we believed the recommendation was still warranted. We noted in April 2013 that the majority of IRS’s major IT investments were reportedly within 10 percent of cost and schedule estimates and eight major IT investments reported significant cost and/or schedule variances. We also reported that weaknesses existed, to varying degrees, in the reliability of reported cost and schedule variances, and key risks and mitigation strategies were identified. As a result, we made recommendations for IRS to improve the reliability of reported cost and schedule information by addressing the identified weaknesses in future updates of estimates. We also recommended that IRS ensure projects consistently follow guidance for updating performance information 60 days after completion of an activity and develop and implement guidance that specifies best practices to consider when determining projected amounts. IRS agreed with three of our four recommendations and partially disagreed with the fourth recommendation related to guidance on projecting cost and schedule amounts. The agency specifically disagreed with the use of earned value management data as a best practice to determine projected cost and schedule amounts, stating that the technique was not part of IRS’s current program management processes and the cost and burden to use it outweigh the value added.
While we disagreed with IRS’s view of earned value management because best practices have found that the value generally outweighs the cost and burden of implementing it, we provided it as one of several examples of practices that could be used to determine projected amounts. We also noted that implementing our recommendation would help improve the reliability of reported cost and schedule variance information, and that IRS had flexibility in determining which best practices to use to calculate projected amounts. For those reasons, we believed our recommendation was still warranted. In September 2013, TIGTA reported on CADE 2 development challenges and changes to the planned schedule for this investment. TIGTA reported, among other things, that the CADE 2 database cross-functional triage team had effectively managed and resolved more than 1,000 data defects. However, TIGTA’s review determined that the downstream system interfaces had not been implemented due to data quality issues and that the implementation date of these interfaces was revised to January 2014. We reported in April 2014 that 6 of IRS’s 19 major IT investments were within 10 percent of cost and schedule estimates during fiscal year 2013; however, the reported variances were for the fiscal year only, and we therefore noted that IRS’s reporting would be more meaningful if supplemented with cumulative cost and schedule variances for the investments or investment segments. In addition, the reported variances for selected investments were not always reliable because the projected and actual cost and schedule amounts on which they depend had not been consistently updated in accordance with OMB and Treasury reporting requirements.
Further, IRS was not working on developing a quantitative measure of scope (i.e., functionality) as we recommended in 2012, and we noted that reporting scope information qualitatively in its congressional reports until a quantitative measure is developed would help provide Congress with a complete picture of the agency’s performance in managing its major investments. Lastly, IRS continued to lack guidance that included best practices for calculating projected cost and schedule amounts. We made three recommendations for IRS to report more comprehensive and reliable cost and schedule information and improve the transparency of reported scope information for its major investments. IRS agreed with our recommendations and stated it believed it had addressed our recommendation to report cumulative investment and investment segment cost and schedule information in the quarterly reports to Congress, as well as our prior recommendation to develop a quantitative measure of scope; we disagreed, however, and maintained our recommendations. In September 2014, TIGTA reported on challenges faced by IRS in implementing the IRDM Case Management project. More specifically, TIGTA noted that after a year of user acceptance testing, IRS officials acknowledged that the IRDM Case Management project could not effectively process business cases containing underreported income and could not be deployed into the IRS production environment; TIGTA identified insufficient project requirements as contributing to these challenges. In addition, IRS officials stated that budget constraints and difficulties encountered during user acceptance testing resulted in IRS “strategically pausing” development of the IRDM Case Management project.
In response to TIGTA’s report, IRS’s Chief Technology Officer stated that in January 2014, IRS decided to strategically pause development of the IRDM Case Management project due to budget constraints and the inability to certify that the ongoing case management functionality deployment would not have an adverse impact on taxpayers. IRS Has Made Limited Progress in Implementing Prior Recommendations to Improve Reliability and Reporting of Cost, Schedule, and Scope Information IRS has made limited progress in improving the reliability and reporting of cost, schedule, and scope performance information: it has partially implemented two of our five related recommendations and not yet addressed the remaining three. IRS’s implementation of these recommendations is critical in ensuring that Congress receives the reliable information it needs for effective oversight and decision making. Table 2 identifies the status of IRS’s efforts to address the recommendations. IRS Has Taken Action to Improve the Timeliness of Reported Performance Information for Completed Investment Activities In April 2013, we reported that the cost and schedule performance information for the completed activities for six selected investments was updated within the 60-day time frame required by Treasury guidance in 77 percent of the cases. While the number of activities expected to be completed was relatively low and IRS had updated the variance calculations for these activities in the majority of the cases, we noted that ensuring that updated actual information is consistently reported within the required 60-day time frame would strengthen the reliability of their variances and provide information that better reflects their performance. Consequently, we recommended that IRS ensure its projects consistently follow guidelines for updating performance information 60 days after completion of an activity. Treasury and IRS subsequently took actions to address our recommendation. 
Specifically, starting in fiscal year 2014, Treasury addressed the timeliness issue for schedule calculations by having the monthly reporting system automatically calculate a variance based on the current date for any activity where the planned completion date had passed and investment staff had not provided an actual figure within 45 days. For cost, in June 2014, officials in IRS’s Strategy and Planning group—which is responsible for overseeing monthly variance reporting—stated that they have been working closely with investment staff and program managers to ensure that reporting is completed within the 60-day requirement. We reviewed the cost and schedule performance information for the six selected investments for fiscal year 2014 and found that the actions taken have resulted in actual cost and schedule amounts for completed activities being updated within the 60-day time frame required in 86 percent of the cases. While this is an improvement from the 77 percent we previously reported, IRS should continue its efforts to ensure full compliance with Treasury’s guidance and thereby provide reliable information on which to gauge its performance in meeting cost and schedule goals. IRS Has Begun to Take Steps to Address the Reporting of Performance Information for In-Process Investment Activities In April 2014, we reported that IRS did not consistently report updated variances for in-process investment activities for six investments in fiscal year 2013 even though OMB and Treasury require cost and schedule variances to be updated on a monthly basis. This was partly due to an inconsistent understanding among investment staff of the information that was to be included in the monthly reporting.
As a result, we recommended that IRS ensure that projected cost and schedule variances for in-process activities are updated monthly consistent with OMB and Treasury reporting requirements by ensuring investment staff have a consistent understanding of the information to be included in monthly reporting. In response to our recommendation, IRS’s Investment Management and Control office provided training in October 2014, which focused on, among other things, the monthly update of investment performance information. We believe this training will help to ensure investment staff have a consistent understanding of the information to be included in monthly reporting, as the training outlines the specific information that is to be reviewed or updated for in-process activities. However, since the training was provided in October 2014, there have not yet been enough monthly reports to determine the extent to which this training has improved monthly reporting of variances for in-process activities. Adherence to IRS’s training on monthly performance reporting should help to ensure investments’ cost and schedule variances are updated in accordance with OMB and Treasury guidance, and contribute to producing reliable information on which to gauge IRS’s performance. IRS Has Not Developed Guidance for Determining Projected Cost and Schedule Amounts for In-Process Investment Activities In April 2013, we found that IRS had determined variances using projected cost and schedule amounts for in-process activities—which comprised 75 percent of all its activities. However, Treasury’s guidance, which IRS follows, did not specify how projected amounts should be determined when actual amounts are not available. We therefore recommended that IRS develop guidance for determining projected amounts.
In response, IRS stated that the estimate variance reporting performed by its Estimation Program Office applies the best practices we previously recommended, and the practices used are documented in its July 2014 cost and schedule variance reporting procedure. We reviewed this document and found that, while it described the methodology for revising an estimate, it does not address the calculation of projected cost and schedule amounts used for the monthly reporting of cost and schedule variances for in-process activities, which was the subject of our recommendation. At the conclusion of our review, officials sought clarification on what was needed to address our recommendation and agreed that the action taken did not address it. Developing and implementing the recommended guidance should provide greater assurance that projected amounts, when reported, are determined consistent with best practices and therefore more reliable. This is particularly important given the high percentage of reported investment activities that we noted were in process. IRS Has Not Taken Steps to Report Cumulative Investment Performance Information In April 2014, we reported that IRS’s reporting of cost and schedule information in the quarterly reports to Congress would be more meaningful for determining whether the agency is effectively managing its investments if it included cumulative cost and schedule variances for the investments or investment segments, consistent with OMB’s guidance for measuring progress towards meeting investment goals. We noted that cost and schedule variances were for the fiscal year only in that they provide cost and schedule variance information for all projects and activities underway in any portion of the fiscal year. However, the fiscal year focus did not provide cumulative cost and schedule information at the investment or investment segment level because it did not account for activities that were completed in previous fiscal years. Accordingly, we recommended that IRS report cumulative performance information at the investment or investment segment level. At that time, the IRS Commissioner stated that the agency agreed with our recommendation but believed it had already been addressed in quarterly reports to Congress. We noted that while the reports provide cumulative information, it is for the fiscal year only, not for the investment as recommended, and we therefore maintained our recommendation. In June 2014, IRS officials stated they believed the investment information reported in the Office of Management and Budget exhibit 300 addressed our recommendation and, therefore, they had not taken additional steps. However, the reported cost and schedule variances in the exhibit 300 are for the fiscal year only, and as a result, we believe our recommendation is still warranted. Providing Congress with cost and schedule information at the useful segment level—in addition to the current fiscal year reporting—in the quarterly reports would provide a more meaningful gauge of whether investments are meeting cost and schedule performance goals. IRS Has Not Taken Steps to Provide Scope Information for Selected Investments In 2012, we reported that IRS did not have a quantitative measure of scope (i.e., functionality delivered) that would provide a measure of whether an investment delivered the functionality that was paid for and recommended that the agency develop the measure, at a minimum, for its major IT investments. At the time, IRS agreed with the recommendation but stated that it had other methods in place to document delivered functionality of a project throughout the life cycle. We agreed that the methods identified addressed project functionality, but they did not provide a quantitative measure of performance.
In April 2014, seeing that IRS had not made progress on developing a quantitative measure of scope, we recommended the agency report qualitative scope information in the interim. IRS responded that it agreed with the recommendation and had practices and processes in place to assess and report on the delivery of scope in conjunction with cost and schedule management, and therefore had not taken any additional steps to address our recommendation; however, we did not believe that these practices and processes addressed our recommendation. As of June 2014, IRS continued to assert that it had addressed the recommendation and therefore did not take any additional steps. Officials noted that the information reported in the Office of Management and Budget exhibit 300 included information on changes in investment scope. However, this reporting does not provide a quantitative measure of scope or qualitative information showing how delivered scope compares to what was planned. Until IRS reports on progress in meeting scope in its quarterly reporting to Congress, Congress may lack important information that it needs to determine the extent to which the investments are delivering the functionality that was paid for. This is particularly important given the major changes in development highlighted in the latter portion of this report. IRS Reported Most Investments Meeting Cost, Schedule, and Operational Performance Goals, but Facing Increased Risks Most of IRS's major IT investments reportedly met cost and schedule goals, with 11 of 17 investments within 10 percent of cost estimates, and 13 of 17 investments within 10 percent of schedule estimates. It is important to note that the cost and schedule information was not updated for two investments; however, IRS did not consistently indicate so in its reports to Congress.
Consistently disclosing when reported information is not updated would provide Congress and other decision makers with improved information for oversight and decision-making purposes. IRS also reported "yellow" ratings for investments instead of their previous "green" ratings for Chief Technology Officer summary-level risk assessments. However, IRS does not provide these ratings for the six investments for which it provides detailed information in the quarterly reports to Congress. Providing summary-level risk ratings for all major investments would improve the visibility into changes in investment risk, and provide Congress with the information to more easily determine the investments requiring greater attention. Finally, of the 85 operational performance metrics associated with the 17 major investments reporting operational performance information, IRS reported meeting 73 (approximately 86 percent) of these metrics. Most Investments Were Reportedly Within Cost and Schedule Goals According to IRS, 11 of 17 IT investments were within 10 percent of cost estimates between October 2013 and September 2014, and 13 of 17 investments were within 10 percent of schedule estimates between October 2013 and September 2014. While IRS reports on the cost and schedule variance for its 19 major investments, the reports for two investments (IRDM and RRP) were not updated to reflect actual performance throughout the fiscal year. As illustrated in figure 1, of the six investments that reported significant cost variances (a variance of plus or minus 10 percent or more from cost goals), four were significantly under planned costs for at least 1 month during fiscal year 2014, one investment reported being over cost, and one investment reported being, at different times, both under and over cost during this period. Three investments–ACA, e-Services, and IRS Telecommunications Systems and Support–reported significant cost variances for a period of 3 or more consecutive months.
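The significance test applied to these variances reduces to a simple calculation. The following Python sketch is illustrative only: the 10 percent threshold comes from the definition above, while the planned and actual figures are hypothetical.

```python
# Illustrative sketch of the "significant variance" test described above: a cost
# or schedule variance of plus or minus 10 percent or more from the goal.
# The planned and actual figures used here are hypothetical, not from IRS reporting.

def variance_pct(planned: float, actual: float) -> float:
    """Percent variance from the planned amount (positive means over plan)."""
    return (actual - planned) / planned * 100.0

def is_significant(planned: float, actual: float, threshold: float = 10.0) -> bool:
    """True when the absolute variance meets or exceeds the threshold."""
    return abs(variance_pct(planned, actual)) >= threshold

print(round(variance_pct(100.0, 88.0), 1), is_significant(100.0, 88.0))    # -12.0 True (significantly under cost)
print(round(variance_pct(100.0, 104.0), 1), is_significant(100.0, 104.0))  # 4.0 False (within goals)
```

Under this rule, an investment can flip in and out of "significant" status month to month, which is why the report counts investments that breached the threshold for at least one month separately from those that breached it for three or more consecutive months.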
IRS reported several reasons for these variances, including refinement of processes for allocating costs, fewer investment staff working on the investment during the 2013 government shutdown, overestimation of required contractor support, and reduction of planned funding. In addition, as illustrated in figure 2, one investment reported being significantly ahead of schedule for at least 1 month during fiscal year 2014, while three investments reported being significantly behind schedule during this period. As previously mentioned, Treasury and OMB guidance require cost and schedule variances to be updated on a monthly basis. However, IRS did not update information on cost and schedule variances to reflect actual performance for its RRP and IRDM investments in its reports to Congress. Officials said that updated cost and schedule performance information for these investments was not included following pauses in their development (which occurred in January 2014 for IRDM and February 2014 for RRP) and during approval of baseline change requests. IRS officials stated they did not yet know how to include the pauses in development in their reports and that they had been instructed by Treasury not to update monthly performance information until the change requests had been approved. However, instances where information was not updated were not disclosed in a consistent manner for all investments. Specifically, while IRS identified such instances for RRP, it did not provide similar disclosure for IRDM following its development pause. Consistently disclosing why monthly updates are not being made (such as during the baseline change request approval process) would help provide decision makers with the information they need for oversight purposes.
IRS Reported Increased Risks for Selected Investments During the third quarter of fiscal year 2014, IRS reported increased risks for the 13 investments for which it provides summary-level Chief Technology Officer risk assessments to Congress. Specifically, while the 13 investments had a risk rating of "green" during the second quarter of fiscal year 2014, 12 of these investments reported a risk rating of "yellow" during the third quarter of fiscal year 2014, and 1 investment reported a risk rating of "red." According to the Deputy Chief Information Officer for Strategy and Modernization, the Chief Technology Officer and Deputy Chief Information Officers meet quarterly to make a broad assessment of the major IT investments, and as a result, assign summary-level risk ratings for 13 of the major IT investments. This assessment is based on these officials' knowledge of each of the major investments, as well as an assessment of six key performance indicators (cost, schedule, scope, risk, organizational readiness, and technical). A reason IRS provided for the change in risk ratings for its major IT investments was funding constraints as a result of additional legislative mandates, such as the ACA and FATCA investments, which IRS noted it does not receive funding from Congress to implement. In addition, IRS noted that it has had to reallocate staffing to these investments, which has created a skill set gap for other investments. To address this, IRS stated that it is currently creating a skill set inventory to specifically identify gaps between available and required skill sets. It is important to note that, while IRS identified increased risks for the 13 major IT investments via its Chief Technology Officer risk ratings for the first time in quarter three of fiscal year 2014, the assessments were not indicative of new risks. Rather, they better reflected risks IRS had previously shared with us during quarterly briefings.
During the fourth quarter of fiscal year 2014, the risk rating for 6 of the investments improved from "yellow" to "green." IRS's Deputy Chief Information Officer for Strategy and Modernization explained that this happened because the agency was able to draw resources from infrastructure investments deemed less critical for the upcoming filing season to address the risks associated with most of the investments previously rated "yellow." This explains the "red" rating for the infrastructure investments in the fourth quarter, as illustrated in figure 3 below. We have previously reported on the importance of providing summary-level risk ratings for major IT investments. Specifically, we have noted that such ratings improve the visibility into changes in the risk level of investments over time. While IRS provides summary-level Chief Technology Officer risk assessment ratings for 13 investments in quarterly reporting to Congress, it does not provide such ratings for the 6 investments for which it reports detailed information–CADE 2; e-Services; IRDM; IRS.Gov; MeF; and RRP. While the detailed information on the 6 investments is consistent with congressional reporting requirements, supplementing it with Chief Technology Officer summary-level risk assessment ratings would improve the visibility into risks faced by these investments, and provide Congress with the information to more easily determine the investments requiring greater attention. Figure 3 shows the Chief Technology Officer risk assessment ratings for the four quarters of fiscal year 2014. Most Major IT Investments Reported Meeting Operational Performance Goals According to OMB, operational performance metrics are used to examine the performance of an investment in operation and demonstrate that the investment is meeting the needs of the agency, delivering expected value, or being modernized and replaced consistent with the agency's enterprise architecture.
As of September 2014, IRS had reported on the operational performance for 17 of its 19 major investments. IRS establishes operational metrics and associated targets for its investments, and on a quarterly, monthly, or annual basis reports on its performance in meeting the targets. The operational metrics established for investments include, for example, percentage of scheduled system availability, percentage of individual tax returns processed electronically, and the percentage of refunds processed daily. As illustrated in figure 4, of the 85 operational performance metrics reported with associated actuals, IRS reported meeting 73 (approximately 86 percent) of these metrics. With respect to the 12 operational performance metrics that were not met, the difference between the target and actual performance was generally insignificant. For example, half of the unmet metrics were within 5 percent of the target. Variances from Selected Investments' Initial Cost, Schedule, and Scope Goals Have Not Been Transparent and Reporting of ACA Testing Status Is Not Comprehensive Selected investments experienced variances from initial cost, schedule, and scope goals that were not transparent in congressional reporting because IRS has yet to address our prior recommendations for reporting at the investment level and on progress in delivering scope. Specifically, RRP has so far exceeded planned costs by $86.5 million and has yet to deliver functionality that was scheduled for September 2012, in large part due to the need to implement new technology and a lack of adequate resources, including contracting expertise and staff; a key phase of CADE 2 was developed 10 months late and at $183.6 million more than planned; and the IRDM Case Management project was cancelled. However, these variances were not all included in congressional reporting.
In addition, the reports on the status of testing for the ACA investment are not comprehensive, making it difficult to determine whether all required testing is being performed. IRS Delivered Less RRP Functionality at Higher Cost and Delayed Schedule IRS delivered less functionality than planned for the RRP investment, and did so at a higher than planned cost and behind schedule. Specifically, IRS exceeded initial planned costs for this investment by approximately $86.5 million and has yet to complete the first phase of the investment, which was originally planned to be delivered in September 2012. As early as May 2010, IRS issued several contracts to, among other things, plan and develop four transition states to complete the RRP investment; these contracts had a total planned cost of $57.5 million. Figure 5 identifies the current and historical development plans for the RRP investment. The planned schedule and functionality for the four RRP transition states are identified in table 3. In March 2012, a baseline change request was approved for RRP that included a revision to the planned completion dates for Transition States 1 and 2 to December 2013 and 2014, respectively. In addition, the planned cost for the RRP investment was revised to $136.2 million, an increase of approximately $79 million. According to IRS, these changes to initial plans were a result of IRS's decision to implement new technology for delivering the RRP investment. More specifically, IRS began implementation of the RRP investment using existing technologies; however, IRS determined that new technology would be better suited to meet the goals of the investment. In February 2014, after developing most of the planned functionality for Transition State 1–a senior RRP official estimates about 70 percent–IRS's Executive Steering Committee made a decision to pause further development of this investment.
According to IRS officials, factors contributing to this decision included budget constraints, as well as uncertainty about next steps from a business and a technology perspective, and the need to ensure alignment of RRP with the new senior leadership’s strategic vision for identity theft and fraud detection. In March 2014, IRS reported delivering the following Transition State 1 functionality: Improvements in data analytics and linked return analysis above current EFDS capabilities in order to detect more fraud. Leveraged new Massive Parallel Processing technology, which IRS noted has proven itself in data analysis, performance, and scoring improvements in analyzing 3 years of taxpayer data. Entity-based Data Model with a 3-year view of tax filer’s data. Ability to add or modify rules and models in current processing year based on current fraud patterns. In addition, in April 2014, IRS launched a limited deployment of one of RRP’s planned fraud detection capabilities–the capability to detect identity theft in filed tax returns. IRS plans to use the RRP identity theft functionality in conjunction with the Electronic Fraud Detection System (the fraud detection system RRP is expected to eventually replace) for all tax returns filed during the 2015 tax filing season. IRS also reported beginning requirements development activities for RRP Transition State 2. In September 2014, IRS proposed additional changes to the RRP investment. More specifically, it revised the planned completion dates for Transition States 1 and 2 to March 2015 and 2016, respectively. In addition, the planned cost for the RRP investment was revised to $226.9 million, an increase of approximately $91 million. 
IRS identified several reasons for these changes in plans, including, among other things: lack of experience in integrating new technology required for RRP implementation; the need for higher levels of contracting expertise; and lack of staff to support the entire planned scope of RRP due to budgetary constraints and increased costs. As illustrated in figure 6, IRS reported spending approximately $144 million for the RRP investment through fiscal year 2014. Thus far, this amount exceeds the initial planned cost for the investment by $86.5 million. With respect to future development of the RRP investment, IRS stated that it has begun work on a plan for re-starting development, which is heavily influenced by IRS's Small Business/Self Employed and Wage and Investment Concept of Operations (issued in July 2014) and an IT technical roadmap that is currently being developed. IRS's Small Business/Self Employed and Wage and Investment Concept of Operations identifies refund fraud and identity theft as key drivers for transforming the agency's compliance efforts and services. Although IRS has thus far exceeded the initial planned cost for the RRP investment by $86.5 million, the agency reported a zero percent cost variance for this investment in its fiscal year 2014 fourth quarter reporting to Congress. Further, while IRS noted in March 2014 that it had delivered about 70 percent of the Transition State 1 functionality originally planned for September 2012, this delivery shortfall was not identified in congressional reporting. If IRS implemented our prior recommendations relative to cumulative reporting of performance information and reporting of quantitative scope information, as previously mentioned, the variances from cost, schedule, and scope plans identified for RRP would be more transparent in congressional reporting.
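The contrast between cumulative and fiscal-year-only variance reporting can be checked with a few lines of arithmetic. This sketch uses the RRP dollar figures cited above (in millions); the fiscal year 2014 plan and actual pair is hypothetical, included only to show how a zero percent in-year variance can coexist with a large cumulative overrun.

```python
# RRP cost figures cited in the report, in $ millions.
baselines = [57.5, 136.2, 226.9]   # initial (May 2010), March 2012, September 2014 baselines
spent_through_fy2014 = 144.0       # reported spending through fiscal year 2014

# Growth introduced by each baseline revision: the "approximately $79 million"
# and "approximately $91 million" increases noted above.
print([round(later - earlier, 1) for earlier, later in zip(baselines, baselines[1:])])  # [78.7, 90.7]

# Cumulative variance against the initial plan -- the view GAO recommended reporting.
print(round(spent_through_fy2014 - baselines[0], 1))  # 86.5

# Fiscal-year-only variance against the current (rebaselined) plan -- the view IRS
# reported. The FY2014 figures below are hypothetical.
fy2014_planned, fy2014_actual = 40.0, 40.0
print((fy2014_actual - fy2014_planned) / fy2014_planned * 100)  # 0.0
```

In other words, comparing spending only against the most recent baseline for the current fiscal year can yield the zero percent variance IRS reported, even though cumulative spending exceeds the original plan by $86.5 million.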
IRS Has Delivered a Key Phase of CADE 2; however, Development of this System Has Been More Costly and Taken Longer than Planned IRS has delivered a key phase of its modernized tax processing system; however, in doing so, the agency exceeded planned costs by $183.6 million and fell behind schedule by 10 months; this included an unplanned transition state with an associated cost of $101.1 million. Figure 7 identifies the current and historical development plans for the CADE 2 investment. In 2008, IRS began defining a new strategy–CADE 2–that was intended to deliver improved individual tax processing sooner, and move to a single tax processing database. As shown in table 4, IRS planned to deliver the CADE 2 investment through the completion of two transition states and a target state. In 2012, IRS completed a cost estimate for Transition State 1 of the CADE 2 investment; this cost estimate was $315 million. IRS reported completing functionality for the daily processing of individual taxpayer returns in January 2012, and completing Transition State 1 in November 2012, at a cost of $397.5 million; Transition State 1 was completed 10 months behind planned schedule, and in excess of planned costs by $82.5 million. Further, while IRS reported the completion of Transition State 1, this transition state was completed "conditionally," meaning that the investment was approved to proceed to the next phase with outstanding issues remaining to be addressed. In June 2013, IRS submitted a baseline change request to create a new transition state–Transition State 1.5–to address unfinished work from Transition State 1. More specifically, this unfinished work included ongoing data assurance, performance tuning, and downstream systems efforts to prepare the CADE 2 database for filing season 2014 production; IRS completed this transition state in July 2014.
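The CADE 2 figures above can be reconciled with the $183.6 million total as follows (a sketch using only the reported amounts, in millions):

```python
# CADE 2 cost figures cited in the report, in $ millions.
ts1_estimate = 315.0     # 2012 cost estimate for Transition State 1
ts1_actual = 397.5       # reported cost at Transition State 1 completion (November 2012)
ts1_5_unplanned = 101.1  # unplanned costs of the added Transition State 1.5

# Transition State 1 overrun against the 2012 estimate.
print(round(ts1_actual - ts1_estimate, 1))  # 82.5

# Total excess over plan: Transition State 1 overrun plus the unplanned
# Transition State 1.5 costs.
print(round(ts1_actual - ts1_estimate + ts1_5_unplanned, 1))  # 183.6
```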
IRS officials stated that the creation of this transition state did not affect the overall schedule for the CADE 2 investment; however, it was accompanied by $101.1 million in unplanned costs–$69.7 million in fiscal year 2013, and $31.4 million planned for fiscal year 2014. IRS officials stated that investment funding allocated for future work on Transition State 2 was used to fund the unplanned Transition State 1.5 activities. IRS began work on Transition State 2 in October 2010, and as of September 2014, expected to complete this transition state by March 31, 2015. However, IRS noted that this planned completion date is likely to change as soon as a revised schedule estimate is completed for this transition state. IRS's delivery of CADE 2 Transition State 1, 10 months behind its initial planned completion date and $183.6 million in excess of initial planned costs, is not identified in congressional reporting. More specifically, IRS's congressional reporting identifies cost and schedule performance for a 12-month period of time, and does not compare current investment performance to initial plans, as we have done in this report. Further, while IRS's fiscal year 2014 fourth quarter reporting to Congress identifies the scope delivered for CADE 2 Transition State 1 during fiscal years 2009 through 2012, the reporting does not include a quantitative measure of scope, or qualitatively show how the delivered scope compares to what was planned for this transition state. Similar to RRP, the CADE 2 schedule delays and challenges in meeting planned costs would be more transparent in congressional reporting if it contained cumulative reporting of performance information and reporting of quantitative scope information. IRS Has Cancelled the IRDM Case Management Project IRS has cancelled its IRDM Case Management project—one of five projects that make up the IRDM investment—due to budget constraints, and is instead considering using an enterprise-wide case management solution.
Table 5 identifies the initial planned cost, schedule, and scope for the IRDM Case Management project. According to IRS, the IRDM Case Management project began beta testing in January 2013; however, further execution of the IRDM Case Management project was cancelled in January 2014, and IRS noted that this project would be shut down after the existing cases being worked within the application were completed. According to officials, IRS made a decision to investigate an off-the-shelf system for case management that could be used as an enterprise-wide common service at IRS. IRS noted that it has held three technical demonstrations to identify the extent to which a vendor-provided, off-the-shelf solution would meet the enterprise-wide need, and future development of a case management tool will be done using EntelliTrak technology. IRS officials stated they plan to execute enterprise case management solutions as soon as budget resources become available. As previously mentioned, TIGTA identified challenges during user acceptance testing of the IRDM Case Management project; however, IRS officials stated that these challenges were not a contributing factor in the agency's decision to pause development of this project. As of October 2014, IRS reported spending $16.2 million on the IRDM Case Management project—$8.8 million for IRDMCM and $7.4 million for IRDMCM R2/Release Content Management Plan. IRS Is Performing Testing of ACA Releases; However, Reporting of Efforts Is Not Comprehensive ACA encompasses the planning, development, and implementation of IT systems needed to support IRS's tax administration responsibilities associated with certain provisions of the Patient Protection and Affordable Care Act. IRS is developing this investment in 24 releases–12 of which are in production, 1 that is in production/in progress, 6 that are in progress, and 5 that are in planning. IRS's release plan for this investment is shown in table 6.
Releases 5.0 and 6.0 (shaded in table 6) include development work that is critical in implementing ACA requirements for the 2015 tax filing season. The work associated with these releases impacts 66 IRS systems via a system modification or by building a new system. According to best practices, software testing should be guided by an organizational test strategy that defines different levels of testing required such as component, system, integration, and acceptance level testing. In addition, the strategy should address how testing is to be managed and results reported. Consistent with these practices, IRS has a test strategy that defines various levels of testing for ACA and has also assigned responsibility for testing to various organizations within IRS. ACA systems testing is performed by each of the following organizations within IRS, depending on the type of system work required. According to IRS officials, these organizations coordinate testing activities during systems integration testing. The Enterprise Systems Testing group is responsible for performing testing on systems that require modification to existing system functionality. According to the Enterprise Systems Testing Director, the group performs (1) systems acceptability testing, (2) integration testing, and (3) final integration testing. The Implementation and Testing group is responsible for performing project and integration testing on new and modified ACA systems, and coordinates integration tests with Enterprise Systems Testing for ACA and existing tax return processing systems. In addition, Implementation and Testing ensures testing for non-functional requirements such as performance, security, and accessibility through partnership with experts. IRS has performed various levels of testing for the ACA releases that are now in production. In addition, testing for systems currently in progress is underway.
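A multi-level testing arrangement of this kind lends itself to a consolidated status rollup in which every impacted system appears at every test level, with an explicit reason recorded when a system is not tested. The sketch below is hypothetical: the system names and statuses are invented, and only the level names echo the three Enterprise Systems Testing levels mentioned above.

```python
# Hypothetical consolidated test-status rollup: every impacted system is
# reported at every level, and untested systems surface with a placeholder
# rather than silently dropping out of the report.
LEVELS = ("systems acceptability", "integration", "final integration")

# Recorded results; System C is impacted but was not selected for testing.
status = {
    "System A": {"systems acceptability": "passed", "integration": "passed",
                 "final integration": "in progress"},
    "System B": {"systems acceptability": "passed"},
    "System C": {},
}

def rollup(status):
    """Return one row per impacted system covering every test level."""
    return {
        system: {level: results.get(level, "not tested (reason required)")
                 for level in LEVELS}
        for system, results in status.items()
    }

for system, row in sorted(rollup(status).items()):
    print(system, row)
```

Because the rollup is keyed on the full list of impacted systems rather than on the systems selected for testing, a gap like the 26 systems absent from the October 2014 checkpoint report would appear as explicit "not tested" rows instead of being omitted.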
According to the Carnegie Mellon University Software Engineering Institute (SEI), a consolidated report drawing information from many sources is key to providing decision makers with the information they need to make timely and informed decisions. This suggests that consolidated reporting would be critical for a complex process such as testing, where there are several organizations involved and a large number of systems and requirements being tested at different levels. In addition, SEI practices suggest that the status of all impacted systems and requirements should be accounted for in overall status reporting—whether or not they are tested. Although reports on the overall status of ACA testing activities are provided to IRS senior management via ACA Testing Review Checkpoint reports and filing season status reports, these reports are not comprehensive because they do not identify the status of testing for all systems impacted by ACA Releases 5.0 and 6.0. For example, IRS's October and December 2014 ACA Testing Review Checkpoint reports did not identify the status of testing for 26 and 24 of the 66 impacted systems, respectively. When asked about this, IRS officials stated that not all systems undergo the Enterprise Systems Testing and Implementation and Testing group tests identified above. Specifically, the two organizations responsible for testing collectively identify systems deemed critical for testing and only those systems are included in the reports we reviewed. Nevertheless, including all impacted systems in reporting, including those that are not tested, as suggested by best practices, would ensure accountability for all systems. It is important to note that IRS's Testing Review Checkpoint reports and filing season status reports are not always aligned with the manner in which ACA testing is being performed.
For example, while IRS noted that ACA testing is conducted on requirements, the reports did not provide a status of requirements tested, making it difficult to determine whether all requirements have been tested. Without status reports that account for all impacted systems and are aligned with the manner in which IRS performs testing, it will be difficult to determine whether all required testing is being performed to ensure ACA is ready for the filing season. Conclusions IRS has made limited progress in improving the reliability and reporting of cost, schedule, and scope performance information. Until the agency fully implements the prior recommendations highlighted in our review, the information Congress receives will not be reliable for effective decision making and oversight. While IRS is required to provide monthly updates on the cost and schedule performance of its major investments, the information for two investments (RRP and IRDM) was not always updated, and IRS did not always disclose when this was the case in congressional reporting. In addition, IRS reports summary-level risk assessment ratings for 13 of its major investments in its reporting to Congress. Providing similar ratings for its remaining 6 major investments would allow Congress to more easily determine the ones requiring greater attention. Three selected investments had exceeded initial planned costs, fallen behind initial planned schedule, and had not produced all the expected functionality; and two had been paused or cancelled. However, these deviations were not transparent in congressional reporting because IRS has yet to implement our prior recommendations regarding cumulative performance and scope reporting. The magnitude of some of the changes to plans we identified underscores the criticality of implementing our prior recommendations in improving the transparency of congressional reporting so Congress has the appropriate information needed to make informed decisions.
Finally, the reporting of testing activities for the ACA investment segments, which are critical for the 2015 filing season, showed that not all impacted systems were captured in overall status reports. In addition, these reports were not aligned with the manner in which ACA testing is being performed. Addressing these two issues would improve IRS's and key decision makers' ability to determine whether all required testing to ensure readiness for the filing season is being performed. Recommendations for Executive Action To improve the reliability and reporting of investment performance information and management of selected major investments, we recommend that the Commissioner of the IRS direct the Chief Technology Officer to take the following three new actions: For major investments included in congressional reporting, disclose instances where cost and schedule performance information reported to Congress is not updated. Provide summary-level Chief Technology Officer risk assessment ratings for all major investments in the quarterly reporting to Congress. Modify reporting of ACA testing status to senior management to include a comprehensive report on all impacted systems—including an explanation for why impacted systems were not tested at a particular level—and ensure this reporting is aligned with the manner in which testing is being performed. Agency Comments and Our Evaluation We obtained written comments on a draft of this report from the Commissioner of the IRS, which are reprinted in appendix III.
In his written comments, the Commissioner stated that IRS appreciated the acknowledgment of progress it had made to address two prior year recommendations to improve the consistency and timeliness in reporting cost, schedule, and scope information for its major information technology (IT) investments, but disagreed with our assessment of its efforts to address three prior recommendations for improving the reliability and reporting of cost, schedule, and scope information. He also stated that IRS agreed with our two recommendations related to disclosing instances where performance information is not updated in quarterly reporting to Congress and expanding summary-level risk assessment ratings to all major investments. Further, the Commissioner stated the agency would provide a detailed corrective action plan addressing these recommendations. The Commissioner also stated that IRS disagreed with our third recommendation to modify the reporting of testing for the Affordable Care Act Administration (ACA) investment to senior management. Regarding our prior recommendation to develop and implement guidance that specifies best practices to consider when determining projected cost and schedule amounts for in-process activities in the monthly reporting, the Commissioner stated that this continues to be a work in progress for IRS. Specifically, he stated that IRS's Information Technology Strategy and Planning organization and members of various investment teams are currently collaborating on best practices and a centralized process for determining projected costs and schedules for in-process activities. As noted in our report, we reviewed a July 2014 cost and schedule variance reporting procedure that IRS stated addressed our recommendation.
However, while the document described the methodology for revising an estimate, it did not address the calculation of projected cost and schedule amounts used for the monthly reporting of cost and schedule variances for in-process activities, which was the subject of our recommendation. As a result, we believe the status of this recommendation stands as not addressed. Regarding our prior recommendation to report cumulative investment and investment segment cost and schedule information in the quarterly reports to Congress, the Commissioner stated that IRS believed the recommendation was satisfied through its reporting of performance information in the Department of the Treasury’s SharePoint Investment Knowledge Exchange (SPIKE) tool, which is also included in IRS’s quarterly reporting to Congress. However, as noted in our report, this performance information is for the fiscal year only and is not cumulative for the investment or investment segment, as recommended, and therefore does not account for activities that were completed in previous fiscal years. As a result, we believe the status of this recommendation stands as not addressed. Regarding our prior recommendation to develop a quantitative measure of scope for IRS’s major investments, the Commissioner identified several practices and processes that he stated are currently in place to assess and report on the delivery of scope. He mentioned (1) IRS’s quarterly reporting to Congress, and (2) the OMB exhibit 300 baseline change request process as examples of such practices and processes. However, as noted in this and prior reports, while these methods address project functionality, they do not provide a quantitative measure of progress in delivering this functionality. 
The Commissioner also mentioned the post implementation review process; however, that process does not provide a measure of progress in delivering scope, as IRS has noted that it is performed at the close of each segment. For these reasons, we continue to believe the status of this recommendation stands as not addressed. Regarding our recommendation to modify the reporting of ACA testing status to senior management, the Commissioner stated that IRS followed a rigorous risk-based process for planning the tests of ACA-impacted systems, including the types and levels of testing. In addition, he stated that IRS had comprehensive reporting for the filing season 2015 release, which included ACA-impacted systems. We acknowledge the various levels and types of ACA testing that IRS has performed and have noted this in our report. However, as also noted in our report, our review of the ACA Testing Review Checkpoint reports and filing season reports, which officials stated were used to provide comprehensive reports to senior managers, did not identify the status of testing for all systems impacted by ACA Releases 5.0 and 6.0. For example, we found that IRS’s October and December 2014 ACA Testing Review Checkpoint reports did not identify the status of testing for 26 and 24, respectively, of the 66 impacted systems. Reporting on all impacted systems, including those that are not tested, as best practices suggest, would ensure accountability for all systems. Accordingly, we believe our recommendation is still warranted. IRS also provided us with technical comments, which we have incorporated in the report as appropriate. We are sending copies of this report to interested congressional committees, the Commissioner of the IRS, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. 
If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) evaluate IRS’s efforts to address our recommendations for improving the reliability and reporting of cost, schedule, and scope information; (2) summarize the reported cost, schedule, and performance of IRS’s major IT investments; and (3) assess the status and plans of selected investments. For the first objective, we determined the status of actions taken to address each of five prior recommendations to improve the reliability and reporting of cost, schedule, and scope information we made in our 2013 and 2014 reviews of IRS’s major IT investments. They address (1) the timely reporting of cost and schedule variance information for completed activities; (2) consistently updating cost and schedule information for in-process activities; (3) developing guidance on best practices to consider when determining cost and schedule variances for in-process activities; (4) reporting cost and schedule information at the investment or investment segment level (rather than by fiscal year only); and (5) reporting qualitatively on how delivered scope compares to what was planned for investments until a quantitative measure is developed. For the first recommendation, we calculated the 60-day reporting time frame required by Treasury for completed activities. We then analyzed the four quarterly reports on the performance of IT investments submitted by IRS to the appropriations committees and us between December 2013 and September 2014 to determine whether completed activities showed updated cost and schedule information within those time frames. 
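The timeliness check just described can be sketched as a simple date comparison. This is an illustrative example only: the 60-day time frame is taken from the report, while the activity names, dates, and function names are hypothetical.

```python
# Illustrative check of the 60-day update window described above: for each
# completed activity, was its cost and schedule information updated within
# 60 days of completion? All activities and dates below are hypothetical.
from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=60)  # time frame required by Treasury

activities = [
    {"name": "Activity A", "completed": date(2014, 1, 15),
     "info_updated": date(2014, 2, 20)},   # 36 days: within the window
    {"name": "Activity B", "completed": date(2014, 3, 1),
     "info_updated": date(2014, 6, 1)},    # 92 days: updated late
]

def updated_on_time(activity):
    """True if the activity's information was updated within 60 days."""
    return activity["info_updated"] - activity["completed"] <= REPORTING_WINDOW

for a in activities:
    print(a["name"], "on time" if updated_on_time(a) else "late")
```

A real review would draw completion and update dates from the quarterly reports themselves; the point here is only the window comparison.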
For the second recommendation, we reviewed materials related to training that IRS officials stated were provided to investment staff to ensure a consistent understanding of the information to be included in the monthly reports. For the third recommendation, we reviewed the July 2014 cost and schedule variance reporting procedure and other guidance IRS stated it was using to determine projected cost and schedule amounts to determine whether best practices were being included. For the last two recommendations related to reporting cumulative performance information and progress in meeting scope expectations, we reviewed IRS’s reporting through the Office of Management and Budget (OMB) exhibit 300 process that IRS stated addressed the recommendations. We assessed a recommendation as being fully addressed if IRS provided evidence that it fully addressed our recommendation; partially addressed if IRS provided evidence that it addressed our recommendation to some extent; and not addressed if IRS did not provide any evidence that it addressed our recommendation. For our second objective, we obtained from IRS a list of the investments classified as “major” during fiscal year 2014. We reviewed monthly cost and schedule variance reports for these investments from October 2013 through September 2014, and followed up with IRS officials to identify the reasons for investment-level variances that were significant (equal to plus or minus 10 percent variance from cost or schedule goals) and recurring (reported for 3 consecutive months or more). We assessed the reliability of the reported information by confirming our understanding of IRS’s process for reporting monthly cost and schedule variances, and by determining the extent to which IRS had taken action to improve the reliability and reporting of this information. 
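The screening criteria applied to the monthly variance reports (significant: at least plus or minus 10 percent of the cost or schedule goal; recurring: reported for 3 or more consecutive months) can be illustrated with a short sketch. The threshold and window come from the report's methodology; the monthly figures and function names are hypothetical.

```python
# Illustrative sketch of the variance-screening criteria described above:
# a variance is "significant" if it is at least +/-10 percent of the goal,
# and "recurring" if reported for 3 or more consecutive months.
# The monthly figures below are hypothetical, not actual IRS data.

def percent_variance(actual, goal):
    """Variance of an actual value from its goal, as a percentage."""
    return (actual - goal) / goal * 100.0

def is_significant(variance_pct, threshold=10.0):
    """True if the variance meets or exceeds the +/-10 percent threshold."""
    return abs(variance_pct) >= threshold

def is_recurring(monthly_flags, run_length=3):
    """True if the significance flag held for run_length consecutive months."""
    consecutive = 0
    for flagged in monthly_flags:
        consecutive = consecutive + 1 if flagged else 0
        if consecutive >= run_length:
            return True
    return False

# Hypothetical monthly cost figures (goal vs. actual) for one investment.
goals = [100, 100, 100, 100]
actuals = [112, 115, 111, 113]

flags = [is_significant(percent_variance(a, g)) for a, g in zip(actuals, goals)]
print(flags)               # each month exceeds the 10 percent threshold
print(is_recurring(flags)) # True: significant for 3+ consecutive months
```

The same screen would be run separately for cost and for schedule variances.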
We reviewed operational performance information reported for IRS’s major IT investments as of September 2014, to determine the extent to which each investment met its operational performance goals; this information included, where reported, the performance target and actual results for each metric. We compared this information to information reported for IRS’s major IT investments on OMB’s IT Dashboard website. Lastly, we reviewed the four quarterly reports on the performance of IT investments submitted by IRS to the appropriations committees and GAO between December 2013 and September 2014, to identify the Chief Technology Officer summary-level risk ratings assigned to major IT investments. We analyzed these risk ratings to identify trends, and interviewed IRS officials (including the Deputy Chief Information Officer for Strategy and Modernization) to identify IRS’s methodology for deriving these ratings. For our third objective, we selected Return Review Program (RRP), Customer Account Data Engine 2 (CADE 2), and Information Reporting and Document Matching (IRDM) because the cost, schedule, or scope of these investments had changed from initial plans; and the Affordable Care Act Administration (ACA) investment due to the investment’s criticality to the 2015 tax filing season and the significant amount of resources expected to be expended. For RRP, CADE 2, and the IRDM Case Management project, we interviewed program officials and analyzed documentation such as performance work statements, business cases, baseline change requests, and the four quarterly reports on the performance of IT investments submitted by IRS to the appropriations committees and us between December 2013 and September 2014. From this documentation, we determined the initial cost, schedule, and scope plans for these investments, as well as any revisions to these plans, and the functionality delivered. 
For ACA, we obtained documentation and interviewed key officials, including those from the ACA Program Management Office and IRS’s systems testing organizations, to determine the plan for deployment of the investment. Further, we identified the plans and status of testing for Releases 5.0 and 6.0, which are expected to be implemented for the 2015 tax filing season. Specifically, we analyzed the ACA system architecture for Releases 5.0 and 6.0 to identify associated systems impacted by the development of ACA. We then reviewed testing documentation, such as testing status reports and test plans, to determine the extent to which these systems were tested. Lastly, we reviewed various test reports to determine the extent to which IRS had a mechanism in place to comprehensively report on the status of testing for all systems related to ACA Releases 5.0 and 6.0. We compared the information against best practices for software testing promulgated by the International Organization for Standardization/International Electrotechnical Commission/Institute of Electrical and Electronics Engineers. We conducted this performance audit from June 2014 to February 2015, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Investment Profiles This appendix contains the profiles for seven investments critical to IRS’s mission, which we examined in greater detail in our prior reviews of IRS’s major IT investments. 
Information contained within these profiles includes, but is not limited to: Current life-cycle phase: Life-cycle phases can be represented as planning; development, modernization, and enhancement; operations and maintenance; or mixed. Planning refers to preparing, or acquiring the information used to design the asset; assess the benefits, risks, and risk-adjusted costs of alternative solutions; and establish realistic cost, schedule, and performance goals for the selected alternative, before proceeding to full acquisition or termination of a project. Development, modernization, and enhancement refers to projects and activities that result in new assets/systems or projects and activities that result in changes or modifications to existing assets that lead to substantive improvements, implement legislative or regulatory requirements, or meet an agency leadership request. Operations and maintenance refers to those projects and activities that are operating in a production environment. Finally, mixed refers to projects and activities that are a combination of development, modernization, and enhancement and operations and maintenance. Having detailed information allows for clear tracking of a program’s costs as it moves through its various life-cycle phases. Development methodology: This is a framework that is used to structure, plan, and control the process of developing an information system. There are a number of approaches that can be utilized by an investment. IRS’s Enterprise Lifecycle methodology includes the following approaches: waterfall, planned maintenance, iterative, and managed services. Waterfall is a sequential development of a solution with planned reviews and formal approvals required before continuation of work. The planned maintenance approach manages change in an organized manner, minimizes the disruption caused by frequent system changes, and increases the efficiency and effectiveness of the system change process. 
Additionally, the iterative approach is an adaptive development approach in which projects start with a conceptual vision of the solution and end with deployment, with repeated cycles of requirements discovery, development, and testing in between. Finally, the managed services approach is designed to capitalize on the benefits of managed services provided by an outside service, internal business processes, and/or an existing infrastructure service provider. This provides useful information on how a project is to progress through the life cycle. Contract type: For purposes of this report, contracts can be broken down into two categories. The first is firm-fixed-price contracts, in which the price is not subject to any adjustment. The second is cost-reimbursement contracts, which provide for the payment of allowable incurred costs, to the extent prescribed in the contract. Types of cost-reimbursement contracts include, but are not limited to, (1) cost-plus-fixed-fee contracts, in which actual costs and a fixed fee can be charged; however, costs are not allowed to exceed the agreed-upon estimate without approval; and (2) cost-plus-incentive-fee contracts, which provide for an initially negotiated fee to be adjusted later by a formula based on the relationship of total allowable costs to total target costs. Number of rebaselines: Rebaselines are changes to projects’ cost, schedule, and performance goals (i.e., baselines). According to officials, scope changes must go through a baseline change request process and be approved by Treasury and OMB. Affordable Care Act Administration According to IRS, the Affordable Care Act Administration (ACA) investment encompasses the planning, development, and implementation of the IT systems needed to support IRS’s tax administration responsibilities associated with certain provisions of the Affordable Care Act. 
Initiatives that have already been deployed include the initial release of the Branded Prescription Drug Industry Fee project; an effort intended to establish a secure connection between IRS and the Department of Health and Human Services/Centers for Medicare and Medicaid Services (CMS) to support health insurance exchange open enrollment in the fall of 2013; and the 2014 Non-Marketplace Provisions. Releases of the ACA investment that are critical to the 2015 tax filing season include Release 5.0, for filing season 2015, and Release 6.0, which includes compliance activities. Customer Account Data Engine 2 The Customer Account Data Engine 2 (CADE 2) investment began in 2010 as a new strategy for accelerating completion of a modernized database and converting to a single processing system sooner than was expected under CADE (the predecessor investment to CADE 2, intended to provide a modernized system of taxpayer accounts, with the ultimate goal of eventually replacing the Individual Master File (IMF)). CADE 2 is expected to deliver its functionality incrementally through transition states. Transition State 1 includes: 1. Daily batch processing of individual taxpayer returns, provided by modifying the IMF to run on a daily, rather than weekly, basis. 2. A comprehensive database housing all individual taxpayer accounts, loaded with data from CADE and IMF to provide more timely updates of taxpayer information for use by IRS employees for compliance and customer service. IRS reported completing functionality for the daily processing of individual taxpayer returns in January 2012, and completing Transition State 1 in November 2012, at a cost of $397.5 million. In July 2014, IRS completed Transition State 1.5, which included ongoing data assurance, performance tuning, and downstream systems efforts to prepare the CADE 2 database for filing season 2014 production. 
IRS began work on Transition State 2 in October 2010, and expects to complete this transition state by March 31, 2015; however, IRS noted that this planned completion date is likely to change. Transition State 2 includes rewriting IRS’s legacy core tax processing applications in a modern programming language, and is intended to increase flexibility, scalability, reliability, and security. Full operational capability: Not applicable Life-cycle costs: $193.644 million Actual spent to date: $182.837 million Current life-cycle phase: Mixed (development, modernization and enhancement, and operations and maintenance) The e-Services investment is a suite of web-based products that are intended to allow tax professionals and payers to conduct business with IRS electronically. These services are available only to tax practitioners, registered agents, and other third parties, not to the general public. The program is available via the Internet 24 hours a day, 7 days a week, and it contains products such as registration, an e-file application, a Transcript Delivery System (a system which tax professionals may use to request and receive account transcripts, wage and income documents, tax return transcripts, and verification of non-filing letters), and Taxpayer Identification Number Matching (a pre-filing service which allows authorized payers to match up to 25 payee taxpayer identification number and name combinations against IRS records prior to submitting an information return). The Information Reporting and Document Matching (IRDM) investment is aimed at helping close the tax gap—the difference between what business taxpayers should have paid and what they actually did pay. It is intended to improve voluntary compliance and accurate reporting of income by establishing a new business tax return and information returns that focus on merchant card payments and securities basis reporting. 
IRDM supports IRS business using information systems that sort, match, identify, manage, and report on returns that are likely sources of tax gap-reducing revenue. To accomplish this, IRS requires operational resources and systems to be put in place to implement business and technology changes that are intended to expand and improve its automated matching of data on information returns to the data submitted on tax returns filed. The investment consists of the following four projects. As detailed in this report, this investment previously included a case management project that was cancelled in January 2014. Data Assimilation: Identifies the link between tax forms and information returns filed for the same taxpayer to identify potential under-reporter cases. The project then groups these into specific categories to support IRS compliance programs associated with merchant card payments, securities cost basis, and government payments. Data Correlation: Matches tax return and information return data and applies business rules to identify potential under-reporter cases for use in the IRDM case selection process. After case selection, data correlation builds a complete case record for analysis by a tax examiner to support IRS compliance programs. Business Master File Analytics: Provides IRS users the ability to define and execute logic for the intelligent selection of business taxpayer case inventory to ensure cases selected result in the largest financial return. Case Inventory Selection and Analytics: Provides IRS users the ability to define and execute logic for the intelligent selection of individual taxpayer case inventory and creates an analytical environment that offers a greater ability to evaluate case data to improve the selection of cases worked. The IRS.Gov investment consists of a public user portal—IRS.Gov, a registered user portal, and an employee user portal. 
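As a simplified illustration of the IRDM data assimilation and correlation steps described above (grouping third-party information returns by taxpayer, matching them to filed tax returns, and applying business rules to flag potential under-reporter cases), consider the following sketch. All records, names, and the tolerance rule are hypothetical and do not reflect IRS's actual systems, data, or case-selection rules.

```python
# Simplified illustration of matching information-return data to tax-return
# data by taxpayer identification number (TIN) and flagging potential
# under-reporter cases. All records and the tolerance rule are hypothetical.
from collections import defaultdict

# Hypothetical information returns (e.g., merchant card payments) keyed by TIN.
info_returns = [
    {"tin": "11-111", "reported_payments": 50_000},
    {"tin": "11-111", "reported_payments": 25_000},
    {"tin": "22-222", "reported_payments": 40_000},
]

# Hypothetical income amounts from filed tax returns, keyed by TIN.
tax_returns = {"11-111": 60_000, "22-222": 41_000}

def select_underreporter_cases(info_returns, tax_returns, tolerance=0.05):
    """Flag TINs whose third-party-reported totals exceed the income on the
    filed tax return by more than the (hypothetical) tolerance."""
    totals = defaultdict(int)
    for ir in info_returns:                 # assimilation: group by taxpayer
        totals[ir["tin"]] += ir["reported_payments"]
    cases = []
    for tin, third_party_total in totals.items():   # correlation: match + rule
        filed = tax_returns.get(tin, 0)
        if third_party_total > filed * (1 + tolerance):
            cases.append({"tin": tin, "filed": filed,
                          "third_party_total": third_party_total})
    return cases

print(select_underreporter_cases(info_returns, tax_returns))
# Flags TIN 11-111: 75,000 reported by third parties vs. 60,000 filed.
```

In the sketch, each flagged record would correspond to the "complete case record for analysis by a tax examiner" that the data correlation project is described as building.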
The key goals of the program include simplifying and transforming the user web experience; consolidating and advancing IRS web technology to industry standards; implementing a high-performing contract structure and terms; marketing competitive costs throughout the program’s life cycle; providing a cost-effective and affordable program cost structure; and transitioning successfully from the old programs to the new program. Actual spent to date: $560.118 million Current life-cycle phase: Mixed (development, modernization and enhancement, and operations and maintenance) The Modernized e-File (MeF) investment is the primary system to receive and process all tax returns submitted electronically. When MeF receives an electronic tax return, the system determines if it satisfies the acceptance rules required for further processing. MeF is intended to benefit the tax preparation community, enabling the IRS to answer questions quickly and helping to resolve issues. MeF is also intended to benefit corporations and tax-exempt organizations that must file tax returns or annual information returns electronically, and is intended to reduce the handling and mailing of voluminous paper returns. Actual spent to date: $417.871 million Current life-cycle phase: Mixed (development, modernization and enhancement, and operations and maintenance) MeF stores all tax return data in Extensible Markup Language format in a Modernized Tax Return Database, allowing authorized IRS viewers (IRS Help Desk personnel and tax examiners) to see tax returns securely online. According to IRS, as of August 2014, taxpayers used MeF to submit over 228 million individual returns and over 14 million business returns. IRS deployed MeF Release 9.5 in May 2014, for filing season 2015. According to IRS, Releases 9.0 and 9.5 add the employment/unemployment tax family of forms (forms 94x) and the U.S. 
Income Tax Return for Estates and Trusts (Form 1041) to the MeF environment, as well as a new RRP interface, Affordable Care Act changes, and other legislative changes. The Return Review Program (RRP) investment is a web-based automated system that is intended to replace the legacy Electronic Fraud Detection System (EFDS) built in the mid-1990s. It is intended to deliver functionality incrementally through transition states. In September 2013, IRS officials adopted a risk mitigation approach that split Transition State 1 into two releases. The first release—called Transition State 1 Release 1.0—occurred in March 2014 and contained functionality needed for processing filing season returns. The second release—called Transition State 1 Release 1.1—is planned to occur after the filing season. RRP is to, among other things: enable more effective routing of returns, detect noncompliant and fraudulent returns, ensure timely issuance of refunds and credits, prevent issuance of refunds and credits not legally due to filers, and streamline business processes used by the IRS criminal investigative staff. The new system is composed of three major activities: Detection. Intended to incorporate several existing models as well as new models to enhance detection of probable noncompliance. Using algorithms and business rule sets, the system is intended to detect questionable information on each return as the return is processed. The system is also intended to detect returns with potential fraud characteristics, thereby allowing criminal investigators to link and analyze groups of returns to identify schemes for potential criminal prosecution. Resolution. Intended to accommodate existing treatment streams and new treatment streams. Returns will be routed systemically to the best treatment stream, opened into the treatment stream’s inventory and, if applicable, the system will send an initial contact letter to the taxpayer. Prevention. 
Intended to automatically integrate the results of each return’s resolution into the detection models. The results can be used to help target education and outreach efforts to taxpayers and preparers on how to avoid unintentional noncompliance. The system is also intended to allow analysis and identification of fraud and noncompliance not identified by the predictive detection models. Appendix III: Comments from the Internal Revenue Service Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, the following staff made key contributions to this report: Sabine Paul, assistant director; Chris Businsky; Mary Evans; Rebecca Eyler; Nancy Glover; James MacAulay; Paul Middleton; Bradley Roach; and Karl Seifert.
IRS relies extensively on IT systems to annually collect more than $2 trillion in taxes, distribute more than $300 billion in refunds, and carry out its mission of providing service to America's taxpayers in meeting their tax obligations. For fiscal year 2014, IRS expected to spend about $2.4 billion on IT. Given the size and significance of IRS's IT investments and the challenges inherent in successfully delivering these complex IT systems, it is important that Congress be provided reliable cost, schedule, and scope information to assist with its oversight responsibilities. Accordingly, GAO's objectives were to (1) evaluate IRS's efforts to address prior GAO recommendations for improving the reliability and reporting of cost, schedule, and scope information; (2) summarize the reported cost, schedule, and performance of IRS's major IT investments; and (3) assess the status and plans of selected investments. To do so, GAO analyzed quarterly reports and reviewed information on cost and schedule from October 2013 to September 2014, interviewed program officials, and analyzed documentation for selected investments. The Internal Revenue Service (IRS) has made limited progress in improving the reliability and reporting of cost, schedule, and scope performance information: the agency has partially implemented two of GAO's five prior recommendations, but has not yet addressed the remaining three (see table). IRS's implementation of these recommendations is critical to ensuring that Congress receives the reliable information it needs for effective oversight and decision making. Key: ● Fully—the agency provided evidence that it fully addressed the recommendation. ◐ Partially—the agency provided evidence that it has addressed the recommendation to some extent. ◌ Not addressed—the agency did not provide any evidence that it addressed the recommendation. Most of IRS's major information technology (IT) investments were reported as meeting cost and schedule goals. 
Specifically, 11 of 17 investments were reportedly within 10 percent of cost estimates, and 13 were within 10 percent of schedule estimates between October 2013 and September 2014. In addition, the agency reported “green” ratings for investments instead of their previous “yellow” ratings for Chief Technology Officer summary-level risk assessments. It is important to note that these ratings are not provided for 6 investments for which IRS provides detailed reporting to Congress. Providing summary-level risk ratings for all major investments would improve the visibility into changes in investment risk, and provide Congress with the information to more easily determine the investments requiring greater attention. Selected investments experienced variances from initial cost, schedule, and scope plans that were not transparent in congressional reporting because IRS has yet to address GAO's prior recommendations. Specifically, the Return Review Program has so far exceeded planned costs by $86.5 million and has yet to deliver functionality that was scheduled for September 2012, and a key phase of Customer Account Data Engine 2 was developed 10 months late and at $183.6 million more than planned. However, none of these variances were clearly identified in congressional reporting. In addition, the consolidated reports on the status of testing for the Affordable Care Act Administration investment are not comprehensive, making it difficult to determine whether all required testing is being performed.
FEMA Has Made Limited Progress in Measuring Preparedness by Assessing Capabilities and Addressing Long-Standing Challenges DHS Developed Plans for Assessing Capabilities, but Did Not Fully Implement Them In July 2005, we reported that DHS had established a draft Target Capabilities List that provides guidance on the specific capabilities and levels of capability that FEMA would expect federal, state, local, and tribal first responders to develop and maintain. We reported that DHS defined these capabilities generically and expressed them in terms of desired operational outcomes and essential characteristics, rather than dictating specific, quantifiable responsibilities to the various jurisdictions. DHS planned to organize classes of jurisdictions that share similar characteristics—such as total population, population density, and critical infrastructure—into tiers to account for reasonable differences in capability levels among groups of jurisdictions and to appropriately apportion responsibility for the development and maintenance of capabilities among levels of government and across these jurisdictional tiers. According to DHS’s Assessment and Reporting Implementation Plan, DHS intended to implement a capability assessment and reporting system based on target capabilities that would allow first responders to assess their preparedness and identify gaps, excesses, or deficiencies in their existing capabilities or in capabilities they will be expected to access through mutual aid. In addition, this information could be used to measure the readiness of federal civil response assets and the use of federal assistance at the state and local level, and to provide a means of assessing how federal assistance programs are supporting national preparedness. 
In implementing this plan, DHS intended to collect preparedness data on the capabilities of the federal government, states, local jurisdictions, and the private sector to provide information about the baseline status of national preparedness. DHS’s efforts to implement these plans were interrupted by the 2005 hurricane season. In August 2005, Hurricane Katrina—the worst natural disaster in our nation’s history—made final landfall in coastal Louisiana and Mississippi, and its destructive force extended to the western Alabama coast. Hurricane Katrina and the subsequent Hurricanes Rita and Wilma—also among the most powerful hurricanes in the nation’s history—graphically illustrated the limitations at that time of the nation’s readiness and ability to respond effectively to a catastrophic disaster, that is, a disaster whose effects almost immediately overwhelm the response capacities of affected state and local first responders and require outside action and support from the federal government and other entities. In June 2006, DHS concluded that target capabilities and associated performance measures should serve as the common reference system for preparedness planning. In September 2006, we reported that numerous reports and our work suggested that the substantial resources and capabilities marshaled by federal, state, and local governments and nongovernmental organizations were insufficient to meet the immediate challenges posed by the unprecedented degree of damage and the resulting number of hurricane victims caused by Hurricanes Katrina and Rita. We also reported that developing the capabilities needed for catastrophic disasters should be part of an overall national preparedness effort that is designed to integrate and define what needs to be done, where, based on what standards, how it should be done, and how well it should be done. 
In October 2006, Congress passed the Post-Katrina Act, which required FEMA, in developing guidelines to define target capabilities, to ensure that such guidelines are specific, flexible, and measurable. In addition, the Post-Katrina Act calls for FEMA to ensure that each component of the national preparedness system, which includes the target capabilities, is developed, revised, and updated with clear and quantifiable performance metrics, measures, and outcomes. We recommended, among other things, that DHS apply an all-hazards, risk management approach in deciding whether and how to invest in specific capabilities for a catastrophic disaster; DHS concurred, and FEMA said it planned to use the Target Capabilities List to assess capabilities to address all hazards.

FEMA Issued the Target Capabilities List in September 2007 but Has Made Limited Progress in Developing Preparedness Measures and Addressing Long-Standing Challenges in Assessing Capabilities

In September 2007, FEMA issued the Target Capabilities List to provide a common perspective to conduct assessments to determine levels of readiness to perform critical tasks and to identify and address any gaps or deficiencies. According to FEMA, policymakers need regular reports on the status of capabilities for which they have responsibility to help them make better resource and investment decisions and to establish priorities. Further, FEMA officials said that emergency managers and planners require assessment information to help them address deficiencies; to identify alternative sources of capabilities (e.g., from mutual aid or contracts with the private sector); and to identify which capabilities should be tested through exercises. Also, FEMA said that agencies or organizations that are expected to supplement or provide capabilities during an incident need assessment information to set priorities, make investment decisions, and position capabilities or resources, if needed. 
In April 2009, we reported that establishing quantifiable metrics for target capabilities was a prerequisite to developing assessment data that can be compared across all levels of government. At the time of our review, FEMA was in the process of refining the target capabilities to make them more measurable and to provide state and local jurisdictions with additional guidance on the levels of capability they need. Specifically, FEMA planned to develop quantifiable metrics—or performance objectives—for each of the 37 target capabilities that are to outline specific capability targets that jurisdictions (such as cities) of varying size should strive to meet, recognizing that there is not a “one size fits all” approach to preparedness. However, FEMA has not yet completed these quantifiable metrics for its 37 target capabilities, and it is unclear when it plans to do so. In October 2009, in responding to congressional questions regarding FEMA’s plan and timeline for reviewing and revising the 37 target capabilities, FEMA officials said they planned to conduct extensive coordination through stakeholder workshops in all 10 FEMA regions and with all federal agencies with lead and supporting responsibility for emergency support-function activities associated with each of the 37 target capabilities. The workshops were intended to define the risk factors, critical target outcomes, and resource elements for each capability. The response stated that FEMA planned to create a Task Force composed of federal, state, local, and tribal stakeholders to examine all aspects of preparedness grants, including benchmarking efforts such as the Target Capabilities List. FEMA officials have described their goals for updating the list to include establishing measurable target outcomes, providing an objective means to justify investments and priorities, and promoting mutual aid and resource sharing. 
In November 2009, FEMA issued a Target Capabilities List Implementation Guide that described the function of the list as a planning tool and not a set of standards or requirements. We reported in July 2005 that DHS had identified potential challenges in gathering the information needed to assess capabilities, including determining how to aggregate data from federal, state, local, and tribal governments and others and integrating self-assessment and external assessment approaches. In reviewing FEMA’s efforts to assess capabilities, we further reported in April 2009 that FEMA faced methodological challenges with regard to (1) differences in data available, (2) variations in reporting structures across states, and (3) variations in the level of detail within data sources requiring subjective interpretation. We recommended that FEMA enhance its project management plan to include, among other things, milestone dates, a recommendation with which DHS concurred. In October 2010, we reported that FEMA had enhanced its project management plan. Nonetheless, the challenges we reported in July 2005 and April 2009 that DHS and FEMA, respectively, faced in their efforts to measure preparedness and establish a system of metrics to assess national capabilities have proved difficult to overcome. We reported in October 2010 that, in general, FEMA officials said that evaluation efforts they used to collect data on national preparedness capabilities were useful for their respective purposes, but that the data collected were limited by data reliability and measurement issues related to the lack of standardization in the collection of data. For example, FEMA’s Deputy Director for Preparedness testified in October 2009 that the “Cost-to-Capabilities” (C2C) initiative developed by FEMA’s Grant Programs Directorate (at that time already underway for 18 months) was a multiyear effort with the goal of managing homeland security grant programs and prioritizing capability-based investments. 
We reported in October 2010 that, as a result of FEMA’s difficulties in establishing metrics to measure enhancements in preparedness capabilities, officials discontinued the C2C program. Similarly, FEMA’s nationwide, multiyear Gap Analysis Program implementation, proposed in March 2009, was “to provide emergency management agencies at all levels of government with greater situational awareness of response resources and capabilities.” However, as we reported in October 2010, FEMA noted that states did not always have the resources or ability to provide accurate capability information into its Gap Analysis Program response models and simulation; thus, FEMA had discontinued the program. FEMA officials reported that one of its evaluation efforts, the State Preparedness Report, has enabled FEMA to gather data on the progress, capabilities, and accomplishments of a state’s, the District of Columbia’s, or a territory’s preparedness program, but that these reports included self-reported data that may be subject to interpretation by the reporting organizations in each state and may not be readily comparable to other states’ data. The officials also stated that they have taken steps to address these limitations by, for example, creating a Web-based survey tool to provide a more standardized way of collecting state preparedness information that will help FEMA officials validate the information by comparing it across states. We reported in October 2010 that FEMA officials said they had an ongoing effort to develop measures for target capabilities—as planning guidance to assist in state and local assessments—rather than as requirements for measuring preparedness by assessing capabilities; FEMA officials had not yet determined how they plan to revise the list and said they are awaiting the completed revision of Homeland Security Presidential Directive 8, which is to address national preparedness. 
As a result, FEMA has not yet developed national preparedness capability requirements based on established metrics to provide a framework for national preparedness assessments. Until such a framework is in place, FEMA will not have a basis to operationalize and implement its conceptual approach for assessing federal, state, and local preparedness capabilities against capability requirements to identify capability gaps for prioritizing investments in national preparedness. Mr. Chairman, this completes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time.

Contacts and Staff Acknowledgments

For further information about this statement, please contact William O. Jenkins Jr., Director, Homeland Security and Justice Issues, at (202) 512-8777 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, the following individuals from GAO’s Homeland Security and Justice Team also made major contributions to this testimony: Chris Keisling, Assistant Director; John Vocino, Analyst-In-Charge; C. Patrick Washington, Analyst; and Lara Miklozek, Communications Analyst.

Appendix I: National Preparedness Guidelines and Critical Practices for Performance Measurement

This appendix presents additional information on the Federal Emergency Management Agency’s National Preparedness Guidelines as well as key steps and critical practices for measuring performance and results.
This testimony discusses the efforts of the Federal Emergency Management Agency (FEMA)--a component of the Department of Homeland Security (DHS)--to measure and assess national capabilities to respond to a major disaster. According to the Congressional Research Service, from fiscal years 2002 through 2010, Congress appropriated over $34 billion for homeland security preparedness grant programs to enhance the capabilities of state, territory, local, and tribal governments to prevent, protect against, respond to, and recover from terrorist attacks and other disasters. Congress enacted the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act) to address shortcomings in the preparation for and response to Hurricane Katrina that, among other things, gave FEMA responsibility for leading the nation in developing a national preparedness system. The Post-Katrina Act requires that FEMA develop a national preparedness system and assess preparedness capabilities--capabilities needed to respond effectively to disasters--to determine the nation's preparedness capability levels and the resources needed to achieve desired levels of capability. Federal, state, and local resources provide capabilities for different levels of "incident effect" (i.e., the extent of damage caused by a natural or manmade disaster). FEMA's National Preparedness Directorate within its Protection and National Preparedness organization is responsible for developing and implementing a system for measuring and assessing national preparedness capabilities. The need to define measurable national preparedness capabilities is a well-established and recognized issue. 
For example, in December 2003, the Advisory Panel to Assess Domestic Response Capabilities noted that preparedness (for combating terrorism) requires measurable demonstrated capacity by communities, states, and private sector entities throughout the United States to respond to threats with well-planned, well-coordinated, and effective efforts. This is consistent with our April 2002 testimony on national preparedness, in which we identified the need for goals and performance indicators to guide the nation's preparedness efforts and help to objectively assess the results of federal investments. We reported that FEMA had not yet defined the outcomes of where the nation should be in terms of domestic preparedness. Thus, identifying measurable performance indicators could help FEMA (1) track progress toward established goals, (2) provide policymakers with the information they need to make rational resource allocations, and (3) provide program managers with the data needed to effect continual improvements, measure progress, and enforce accountability. In September 2007, DHS issued the National Preparedness Guidelines that describe a national framework for capabilities-based preparedness as a systematic effort that includes sequential steps to first determine capability requirements and then assess current capability levels. According to the Guidelines, the results of this analysis provide a basis to identify, analyze, and choose options to address capability gaps and deficiencies, allocate funds, and assess and report the results. This proposed framework reflects critical practices we have identified for government performance and results. 
This statement is based on our prior work issued from July 2005 through October 2010 on DHS's and FEMA's efforts to develop and implement a national framework for assessing preparedness capabilities at the federal, state, and local levels, as well as DHS's and FEMA's efforts to develop and use metrics to define capability levels, identify capability gaps, and prioritize national preparedness investments to fill the most critical capability gaps. As requested, this testimony focuses on the extent to which DHS and FEMA have made progress in measuring national preparedness by assessing capabilities and addressing related challenges. In summary, DHS and FEMA have implemented a number of efforts with the goal of measuring preparedness by assessing capabilities and addressing related challenges, but success has been limited. DHS first developed plans to measure preparedness by assessing capabilities, but did not fully implement those plans. FEMA then issued the target capabilities list in September 2007 but has made limited progress in developing preparedness measures and addressing long-standing challenges in assessing capabilities, such as determining how to aggregate data from federal, state, local, and tribal governments. At the time of our review of FEMA's efforts in 2008 and in 2009, FEMA was in the process of refining the target capabilities to make them more measurable and to provide state and local jurisdictions with additional guidance on the levels of capability they need. We recommended in our April 2009 report that FEMA enhance its project management plan with, among other things, milestones to help it implement its capability assessment efforts; FEMA agreed with our recommendation. 
We reported in October 2010 that FEMA had enhanced its plan with milestones in response to our prior recommendation and that officials said they had an ongoing effort to develop measures for target capabilities--as planning guidance to assist in state and local assessments--rather than as requirements for measuring preparedness by assessing capabilities; FEMA officials had not yet determined how they plan to revise the list.
Background

Evolving Threats in Iraq and Afghanistan Highlighted Need for DOD to Fill Capability Gaps Rapidly

As evidenced by evolving threats in Iraq and Afghanistan, enemy forces have exploited capability gaps in the technology, systems, and equipment used by U.S. forces. Such tactics made it evident that U.S. warfighters were not always equipped to deal with the fast-changing tactics, techniques, and procedures of the enemy. For example, one of the most publicized of these adversarial capabilities was the use of IEDs. U.S. forces responded initially by changing tactics and techniques and by purchasing equipment locally, but the department then determined it needed to develop and deploy new capabilities more quickly. Some of DOD’s efforts to rapidly address counter-IED and other significant capability gaps include the following: Counter-IED Solutions—Congress provides funding for joint urgent needs related to countering IEDs through the Joint Improvised Explosive Device Defeat Organization (JIEDDO), an organization that reports directly to the Deputy Secretary of Defense. Congress has appropriated nearly $16 billion through fiscal year 2009 to JIEDDO. JIEDDO has funded many counter-IED solutions to support the warfighter, including electronic jammers to block radio-frequency signals that detonate IEDs. However, in our prior work, we found that JIEDDO lacked full visibility over all counter-IED initiatives throughout DOD, faced difficulties with transitioning its counter-IED initiatives to the military services, and lacked criteria for selecting which counter-IED training initiatives it would fund, which affects its training investment decisions. We recommended that DOD improve its visibility over all of DOD’s counter-IED efforts, work with the military services to develop a complete transition plan for initiatives, and define criteria for funding training initiatives. 
DOD agreed with these recommendations and identified several actions it had taken or planned to take to address them. Intelligence, Surveillance, and Reconnaissance (ISR) Technology—DOD’s ISR systems—including manned and unmanned airborne, space-borne, maritime, and terrestrial systems—play critical roles in supporting military operations and national security missions. Effective ISR data can provide early warning of enemy threats as well as enable U.S. military forces to increase effectiveness, coordination, and lethality, and demand has increased for ISR capabilities to support ongoing military operations. To meet this growing demand, DOD is making sizeable investments in ISR systems and related ISR capabilities. We have reported since 2005 that DOD’s ISR activities are not always well integrated and efficient, effectiveness may be compromised by lack of visibility into operational use of ISR assets, and agencies could better collaborate in the acquisition of new capabilities. In January 2010, we recommended that DOD develop overarching guidance for sharing intelligence information and that the military services develop plans with timelines that prioritize and identify the types of ISR data they will share. DOD agreed with these recommendations and noted actions it planned to take to address them. Command and Control Equipment—Urgently needed assets may include, but are not limited to, satellite communication equipment for military personnel who require a method for communicating with each other in remote areas without established infrastructure, or distributed tactical communication systems for warfighters in Afghanistan because current handset devices do not operate adequately in the mountainous terrain. To meet this demand, solutions are being sought from various sources that include commercial off-the-shelf technology, other types of technology, and other sources. 
We have reported on the challenges associated with availability of such technology, including lengthy delays in the approval and order processes. To address these and other urgent needs–related challenges, we made several recommendations to improve DOD’s ability to assess how well its processes are meeting critical warfighter needs, address challenges with training, make decisions about when to use its rapid acquisition authority, and make reprogramming decisions to expedite fielding of solutions. DOD generally concurred with our recommendations and agreed to take several actions to address them.

The Department’s Processes to Fulfill Urgent Needs Have Evolved

Over the past two decades, the fulfillment of urgent needs has evolved as a set of complex processes—within the Joint Staff, the Office of the Secretary of Defense (OSD), each of the military services, as well as the combatant commands—to rapidly develop, equip, and field solutions and critical capabilities to the warfighter. DOD’s experience in Iraq and Afghanistan led to the expanded use of existing urgent needs processes, the creation of new policies, and the establishment of new organizations intended to be more responsive to urgent warfighter requests. As shown in table 1 below, significant events in the expansion of DOD’s efforts to respond to and fulfill urgent operational needs began in the late 1980s but increased rapidly after the onset of the Global War on Terrorism in late 2001. As table 1 indicates, many of these newly established entities and processes were created, in part, because the department had not anticipated the accelerated pace of change in enemy tactics and techniques that ultimately heightened the need for a rapid response to new threats in Afghanistan and Iraq. 
According to the Defense Science Board, while DOD, the military services, and combatant commands took actions to respond more quickly to demands to fulfill urgent needs, it became apparent within the last half decade that the department and the acquisition community it depends on have struggled to field new capabilities in a disciplined, efficient, and effective way. While many entities started as ad hoc organizations, several have been permanently established.

Meeting Urgent Needs Involves a Breadth of Activities

Although each of the services’ and Joint Staff’s urgent needs processes is distinct, we identified six broad activities involved after the submission of an urgent need statement. These activities are shown in table 2 below.

Congressional Interest in DOD’s Approach to Urgent Operational Needs and the Need for Improvement

Over the past 5 years, there have been several reviews of the department’s ability to rapidly respond to and field urgently needed capabilities in the 21st-century security environment. Some of these studies were initiated at the direction of Congress. In fiscal year 2009, the House Armed Services Committee approved the department’s designation of a process improvement officer who was tasked with applying Lean Six Sigma process improvement techniques to the business practices of the department. The committee recommended that the process improvement officer examine the processes for rapid acquisition activities that have been established since the wars in Iraq and Afghanistan began and determine whether there were lessons learned that might be integrated into the department’s main acquisition process. 
The department conducted the study and found (1) significant variability in response time at the beginning of the process, indicating unnecessary delays; (2) senior leadership involvement in the process enables rapid decision making; (3) shorter decision processes and focused organizations enable quicker response than under normal requirements; and (4) reprogramming authority is cumbersome and adds time to the urgent needs process. Furthermore, the National Defense Authorization Act for fiscal year 2009 included a provision that would require best practices and process improvements to ensure that urgent operational needs statements and joint urgent operational needs statements are presented to appropriate authorities for review and validation not later than 60 days after the documents are submitted. Specifically, the committee report noted that over the last several years, operational commanders in Iraq had identified urgent operational needs for MRAP vehicles, nonlethal laser dazzlers, and other critical equipment. Further, the committee stated it was aware of allegations that requests for some of these items not only went unmet, but were not even presented for more than a year to the senior officials responsible for validating the requests. In 2009, Congress required the Secretary of Defense to commission a study by an independent commission or a federally funded research and development center to assess and report on the effectiveness of the processes used by DOD for the generation of urgent operational need requirements, and the acquisition processes used to fulfill such requirements. In response to this requirement, the Under Secretary of Defense, Acquisition, Technology and Logistics, asked the Defense Science Board to establish a task force to conduct a study on the effectiveness of the processes used by the department for the generation of urgent operational needs requirements and the acquisition processes used to fulfill such requirements. 
In July 2009, the Defense Science Board released its report with recommendations on potential consolidations necessary to rapidly field new capabilities for the warfighter in a systematic and effective manner. Moreover, Section 803 of the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 (the FY 2011 NDAA) amended the existing rapid acquisition authority. Previously, the authority could be used to eliminate deficiencies that resulted in combat fatalities. The amended section now permits the use of the authority to acquire and deploy certain supplies to eliminate deficiencies that result in combat casualties, rather than just combat fatalities. The amendment also increased from $100 million to $200 million the amount that can be used annually to acquire the supplies necessary to address such deficiencies. Additionally, Section 804 of the FY 2011 NDAA, among other things, requires the Secretary of Defense to review the processes for the fielding of capabilities in response to urgent operational needs, consider improvements to those processes, and report to the congressional defense committees in January 2012.

Fulfillment of Urgent Needs Involves a Number of Entities and Processes, Resulting in Fragmentation and Some Overlap of Efforts

The fulfillment of urgent needs involves numerous joint, OSD, and military service entities, which have increased over time. We identified areas where some overlap exists among urgent needs entities, such as the submission, validation, and processing of urgent needs requirements. In addition, our analysis identified multiple entities with a role in responding to similar types of urgently needed capabilities, such as ISR and counter-IED, resulting in the potential for duplication of efforts. 
Numerous DOD and Military Service Entities Play a Key Role in the Fulfillment of Urgent Needs

The department has many entities that respond to the large number of urgent needs requests submitted by the combatant commands and military services. As we and DOD have previously reported, a complex set of processes has evolved over the past decade, involving numerous joint, OSD, and military service entities, as the department seeks to fill the capability gaps identified by warfighters. On the basis of DOD’s and our analysis, we have identified at least 31 entities that play a significant role in the various urgent needs processes. Table 3 below shows the 31 entities we identified and when they were established. Further analysis shows that these entities have three different missions with respect to fulfilling urgent needs. First, some entities identify and provide a quick response to threats presented by adaptive enemies, but not always in support of urgent needs. Often these entities engage in experimentation and rapid prototyping to accelerate the transition of technologies to the warfighter. For example, the Rapid Reaction Technology Office does not directly receive or validate joint or service urgent needs, but rather anticipates disruptive threats and in response funds solutions and new capabilities, some of which have fulfilled validated joint urgent operational needs. Second, some entities specifically process urgent needs and are generally involved from validation to sourcing. For example, the joint urgent operational needs process is overseen by Joint Staff J8, which receives and validates urgent need requests, and the Joint Rapid Acquisition Cell, which facilitates a rapid solution. In the Army, Navy, and Marine Corps, various entities exist to validate, facilitate, and source urgent needs for their respective processes. 
Third, some entities focus on developing solutions in response to urgent needs requests that have been validated, facilitated, and sourced by other entities. These solution-development entities are mostly acquisition program offices, such as Program Executive Office Night Vision / Reconnaissance, Surveillance, and Target Acquisition, which also develop solutions in response to nonurgent needs as well as manage existing systems. Finally, some entities are involved in two or more of the three types of missions described above. For example, JIEDDO anticipates threats, processes urgent needs requests, and develops solutions.

Overlap Exists among the Numerous Entities Involved in Processing Urgent Requirements and Expediting Solutions

Our analysis shows that overlap exists among urgent needs entities in the roles they play as well as the capabilities for which they are responsible. Table 4 shows the roles played by the various organizations in relation to the activities involved in meeting urgent needs identified earlier. DOD entities at the joint level, and each of the services, also have their own policies for meeting urgent needs. These policies result in seven different processes for the fulfillment of urgent needs; additionally, the Army Rapid Equipping Force also has an urgent needs process. For example, warfighters may submit urgent needs, depending on their military service and the type of need, to Joint Staff J8, JIEDDO, Army Deputy Chief of Staff G-3/5/7, Army Rapid Equipping Force, Navy Fleet Forces Command or Commander Pacific Fleet, Marine Corps Deputy Commandant for Combat Development and Integration, Air Force Major Commands, or Special Operations Command J8. These entities then validate the submitted urgent need request and thus allow it to proceed through their specific process. This contrasts with traditional requirements and needs, which are generally processed under the Joint Capabilities Integration and Development System (JCIDS). 
JCIDS was established to provide the department with an integrated, collaborative process to identify and guide development of a broad set of new capabilities that address the current and emerging security environment. Moreover, within some of the services, multiple processes and validation points exist. For example, in the Army, urgent needs can be submitted via two routes: (1) the warfighter can make a request to the Rapid Equipping Force for approval by its Director; or (2) the warfighter can submit an operational needs statement, documenting the urgent need to the Deputy Chief of Staff for the Army G-3/5/7, Current and Future Warfighting Capabilities Division, for validation and prioritization. In the Air Force, urgent needs are handled by the various major commands; however, Air Force headquarters also has a process and an entity that can process urgent needs that do not get fulfilled by the major commands. Furthermore, at the joint level, six entities facilitate urgent needs requests and five entities provide sourcing support for urgent needs requests. Officials from two combatant commands have expressed frustration with the number of entities involved in the processing of urgent needs requests and suggested that streamlining of the validation, facilitation, sourcing, and funding processes would improve the timeliness of solutions. Additionally, many entities track the fulfillment of urgent needs requests and their solutions; however, most entities with a role in tracking focus only on specific requests they process or solutions they developed. The overlap created by the numerous entities involved in processing urgent requirements and expediting solutions may result in fragmented efforts and overall inefficiencies within DOD.

Multiple Entities Respond to Requests for Similar Capabilities, Resulting in Potential Duplication of Efforts

Multiple entities we surveyed reported a role in responding to similar categories of urgently needed capabilities. 
We identified eight entities with a role in responding to ISR capabilities, five entities with a role in responding to counter-IED capabilities, and six entities with a role in responding to communications, command and control, and computer technology, among others. Over the course of the wars in Iraq and Afghanistan, multiple organizations have been created to handle specific types of urgently needed capabilities for urgent operational needs, and these organizations also experience overlap. For example, JIEDDO initially was established as an Army task force and was changed to a DOD task force to meet urgent counter-IED needs; however, counter-IED is not handled exclusively by JIEDDO, and we have previously reported that JIEDDO and the services lack full visibility over counter-IED initiatives throughout DOD and are at risk of duplicating efforts. Similarly, we previously reported that many biometrics activities are dispersed throughout DOD at many organizational levels and that DOD has been focusing most of its efforts on quickly fielding biometrics systems, particularly in Iraq and Afghanistan, to address DOD’s immediate warfighting needs without guidance to prevent duplication of biometrics-related efforts. In 2010, the Army Biometrics Task Force was institutionalized as the Biometrics Identity Management Agency to lead DOD activities to program, coordinate, integrate, and synchronize biometrics technologies and capabilities. However, our ongoing work has identified instances of potential duplication. For example: Both the Army and the Marine Corps continue to develop their own counter-IED mine rollers with full or partial JIEDDO funding. The Marine Corps’ mine roller per unit cost is about $85,000 versus a cost range of $77,000 to $225,000 per unit for the Army mine roller. However, officials disagree about which system is most effective, and DOD has not conducted comparative testing and evaluation of the two systems. 
Further, JIEDDO officials said that JIEDDO cannot compel the services to buy one solution over the other. The Navy developed a directed-energy technology to fill a critical theater capability gap, yet JIEDDO later underwrote the Air Force's development of the same technology to create a more powerful and faster-moving equipment item than the Navy had developed. However, the Air Force has now determined that its system will not meet requirements and has deferred fielding the technology pending further study. This may have a negative effect on the continued development of this technology by the Navy or others for use in theater. For example, according to DOD officials, during recent testing of the Air Force's system, safety concerns unique to that system were noted that may limit the warfighter's willingness to accept the technology. However, according to Navy officials, the Navy plans to begin fielding its system in 2011. While our review did find the potential for duplication, we also found some cases where various entities took the initiative to work together, resulting in collaboration to satisfy urgent needs. For example, the Joint Rapid Acquisition Cell received eight validated joint urgent operational needs requirements and facilitated the integration of these eight separate, but closely related, ISR and force-protection needs. Specifically, this coordination involved the Joint Rapid Acquisition Cell, U.S. Central Command, JIEDDO, and the Army working to consolidate the validated requirements, find a sponsor, and develop a solution. Approximately 6 months from the date of funding, the Army PEO–Intelligence, Electronic Warfare & Sensors (specifically, Night Vision/Reconnaissance, Surveillance and Target Acquisition) developed and fielded the Base Expeditionary Targeting and Surveillance Sensors–Combined, a flexible, movable, adjustable, scalable, and expeditionary base defense system for persistent ground targeting and surveillance.
DOD Does Not Have Comprehensive Guidance and Full Visibility to Effectively Manage and Oversee Its Urgent Needs

DOD has taken several steps to improve the management and oversight of its urgent needs. While these efforts have shown some progress, the department does not have comprehensive policy and guidance for directing efforts across DOD, the military services, and combatant commands to effectively manage and oversee the fulfillment of its urgent needs. Moreover, the department lacks full visibility over the full range of urgent needs efforts from funding to measuring results achieved.

DOD Has Taken Some Steps to Improve Management and Oversight of Urgent Needs Requests

In response to our April 2010 finding that DOD's urgent needs guidance was fragmented, Joint Staff officials stated that they were in the process of revising the Joint Staff instruction on the joint urgent needs process to better align with the department's strategic plan for urgent needs. Moreover, OSD has been drafting Directive-Type Memorandum 10-002 to establish policy, assign responsibilities, and outline procedures for the resolution of joint urgent operational needs. The draft directive-type memorandum seeks to provide guidance on a range of issues, including rapid-acquisition authority, the Joint Rapid Acquisition Cell's role as the DOD focal point for tracking and coordinating joint urgent operational needs resolution, as well as clearly defining the responsibilities of those involved in the processing of urgent needs. A senior DOD official explained that after review by the Under Secretary of Defense for Acquisition, Technology and Logistics, senior DOD officials decided to expand the draft memorandum to include the services' urgent operational needs—as well as joint urgent operational needs—to increase visibility. According to senior DOD officials, the department expects the memorandum to be issued in 2011.
Furthermore, in 2009, the department established the Rapid Fielding Directorate within the office of the Director, Defense Research and Engineering, under the Under Secretary of Defense for Acquisition, Technology and Logistics, and reorganized the Joint Rapid Acquisition Cell, the Rapid Reaction Technology Office, and the Joint Capability Technology Demonstrations under this new office to better align similar missions related to accelerating capabilities to the warfighter. Rapid Fielding Directorate officials stated that one of their first imperatives is to accelerate the delivery of capabilities to the warfighter, emphasizing the ability to collaborate efficiently and directly with the military services. Additionally, officials from the Joint Rapid Acquisition Cell stated that they are working to address a number of challenges, including applying their definition of an urgent need to validate requirements, prioritizing the urgency of needs identified by the warfighter, developing universal metrics to track and evaluate urgent needs, and formalizing the department's urgent needs processes. Finally, to address senior leadership's concerns regarding the management of urgent needs, the department plans to establish a senior-level oversight council through Directive-Type Memorandum 10-002. According to a senior OSD official, this council may include three- and four-star-level representatives from OSD, the Joint Staff, and the military services to ensure that all efforts across the department are synchronized to rapidly acquire and field materiel solutions to urgent needs.

DOD Does Not Have a Comprehensive Policy for Guiding All Parts of the Process for Addressing Warfighters' Urgent Needs Requests

Despite these actions, DOD does not have departmentwide guidance that provides a common approach for how all urgent needs are to be addressed.
Guidance for issues that affect all the defense components originates at the DOD level, typically through either a directive or an instruction. A directive is a broad policy document that assigns responsibility and delegates authority to the DOD components; directives establish policy that applies across all the services, combatant commands, and DOD components. An instruction implements policy or prescribes the manner for carrying out the policy, operating a program or activity, and assigning responsibilities. According to federal best practices reported in GAO's Standards for Internal Control in the Federal Government, management is responsible for developing detailed policies, procedures, and practices to help program managers achieve desired results through effective stewardship of public resources. However, DOD has not issued any such directives or instructions that provide policy and guidance over all of its urgent needs processes. DOD is developing guidance concerning its urgent needs processes through Directive-Type Memorandum 10-002; however, the memorandum remains in draft form, so it is not clear to what extent this guidance will establish a common approach for the services' and other urgent needs processes. Additionally, our analysis found that DOD has a fragmented approach to managing all of its urgent needs submissions and validated requirements. For example, the Joint Staff, JIEDDO, the military services, and the Special Operations Command have each issued their own guidance outlining the activities involved in processing and meeting their specific urgent needs. Through a comparative analysis of the policies issued by the Joint Staff, each military service, JIEDDO, and the Special Operations Command for managing the various urgent needs processes, we found that the policies often varied. Moreover, we found that Joint Staff, Navy, and Air Force policies do not define roles and responsibilities for some of the activities involved, as shown in table 5.
As indicated in table 5, some policies include each of the activities involved in the processing and fulfillment of urgent needs. However, Special Operations Command, Joint Staff, Navy, and Air Force policies do not include guidance on all the activities in the process. For example, we determined the following:

- Joint Staff policy did not address how to provide feedback on urgent needs that are not validated. Officials from one combatant command expressed frustration that they received no feedback as to why joint urgent operational needs they submitted were not validated and lacked adequate insight into the decision process. Other policies, however, addressed this issue. For example, Navy guidance stated that urgent needs that were not validated would be returned to the requester with the rationale for the decision, with recommendations on how to revise the request, or both.
- The Joint Staff, Navy, and Air Force policies did not define the roles and responsibilities involved in the decision to transition, transfer, or terminate the capability solution provided.
- Special Operations Command, Joint Staff, and Navy policies did not address how validated requirements would be tracked as a capability solution was being developed.

Also, DOD's urgent needs policies varied with respect to transitioning or transferring capabilities. For example:

- It is JIEDDO's policy to decide within 2 years whether to transition or transfer the capability to a service or agency or to terminate it.
- The Special Operations Command determines at the 1-year mark whether the capability is still needed in-theater and, if so, defines out-year funding requirements and how the funding will be obtained.
- While the Army has a process in place for transitioning urgent needs, it applies only to those urgent needs that are nominated to go through the Army's Capabilities Development for Rapid Transition process.
However, this process identifies and approves only certain capabilities that have been nominated for sustainment, rather than tracking all capabilities fielded to meet the Army's urgent needs. During our review, numerous officials stated the need for overarching, uniform guidance applicable to all entities involved in urgent needs processes. Senior officials we spoke with stated that the department needs to provide more comprehensive management and oversight of all of its urgent needs. Additionally, combatant command, Joint Staff, and service officials stated a need for policies to be explicit regarding the activities that must be addressed within the urgent needs process. For example, officials at one combatant command stated that when submitting an urgent need through the joint urgent operational needs process, they lacked insight into the validation process and metrics used by the Joint Staff, as well as guidance on how joint urgent operational needs are evaluated across the combatant commands. An official at a different combatant command emphasized the importance of defining which requests truly qualify as urgent needs and noted that the Joint Staff's requirements process lacks a method to verify that requirements are properly defined. Moreover, Joint Staff officials discussed the importance of defining a joint urgent operational need, as well as criteria for what qualifies as an urgent need, in their guidance that is currently undergoing revision. Army officials noted that rapid acquisition guidance is inconsistent across the Joint Staff, Army, and Air Force policies. Finally, Air Force officials stated that urgent needs policy should include guidance on which steps within the acquisition process can and should be waived, deferred, or tailored in order to rapidly acquire capabilities, which would allow acquisition personnel to address urgent needs more quickly.
Because DOD does not have baseline DOD-wide guidance that applies to urgent operational needs processes across the department and clearly defines roles and responsibilities for how urgent needs should be assessed, processed, and managed—including activities such as tracking the status of a validated requirement—the department continues to maintain a fragmented approach to managing its urgent needs processes. As a result, the department risks responding to urgent needs inefficiently and potentially duplicating efforts.

DOD Lacks Full Visibility over Urgent Needs Efforts, Challenging DOD's Ability to Manage and Oversee Its Processes

DOD lacks full visibility over the full range of urgent needs efforts—from funding to measuring results. This includes the lack of a single senior-level focal point to help bring cohesion to DOD's urgent needs processes. It also includes the lack of a system and metrics to facilitate coordinating, monitoring, and tracking progress and measuring results.

Funding Estimated at More Than $76 Billion over 6 Years

The department lacks full visibility to readily identify the total cost of its urgent needs efforts. However, we obtained data from the majority of entities in our analysis on how much funding was made available to them for the fulfillment of urgent needs. On the basis of the data submitted in response to our data-collection instrument, total funding for the fulfillment of urgent needs was at least $76.9 billion from fiscal years 2005 through 2010. As indicated in figure 1 below, funding is spread unevenly among the many urgent needs entities because the entities have different roles in the fulfillment of urgent needs. In addition, some entities, such as JIEDDO and the Rapid Reaction Technology Office, have access to special funds for the fulfillment of urgent needs, while others rely on different sources, such as funding through the annual budget process or the reprogramming or transfer of funds from other DOD programs and activities.
Of the $76.9 billion in urgent needs funds represented in figure 1, $67.1 billion (87.2 percent) was assigned to OSD entities, $9.5 billion (12.4 percent) to Army entities, $259 million (less than 1 percent) to Navy entities, and $33.0 million (less than one-tenth of 1 percent) to Air Force entities. The amounts reported in figure 1 may underestimate the actual total amounts expended on urgent needs for the given years because the list of entities is not exhaustive. Further, the data are self-reported, and not all entities we identified provided funding data. Without full visibility over its urgent needs efforts and costs, the department is not fully able to identify key improvements and is inhibited in its ability to build agile, adaptive, and innovative structures capable of quickly identifying emerging gaps and adjusting program and budgetary priorities to rapidly equip and field capabilities that will mitigate those gaps.

Disparate Tracking Systems Limit DOD's Visibility over Its Urgent Needs Process and Can Hamper Improvement Efforts

DOD cannot readily identify the totality of its urgent needs efforts, or the cost of those efforts, because it has limited visibility over all urgent needs submitted by warfighters from both joint and service-specific sources. DOD and service officials cited two impediments to full visibility: the lack of a comprehensive tracking system to manage and oversee all urgent needs identified by the warfighter and a lack of clearly defined roles. Specifically, DOD and the services have disparate ways of tracking urgent needs; some have formal databases for entering information, while others use more informal methods, such as e-mail, to solicit feedback. For example, the Joint Chiefs of Staff and each of the military services use electronic databases to track capability solutions as they move through the urgent needs process.
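The funding breakdown above can be sanity-checked with a short calculation. The figures below are the rounded totals quoted in this report, so shares recomputed from them may differ from the published percentages by about a tenth of a percentage point, since the published shares were presumably derived from unrounded data:

```python
# Sanity check of the urgent-needs funding breakdown reported above.
# Amounts are the rounded totals from the report, in billions of dollars.
funding = {
    "OSD entities": 67.1,
    "Army entities": 9.5,
    "Navy entities": 0.259,       # $259 million
    "Air Force entities": 0.033,  # $33.0 million
}

total = sum(funding.values())
print(f"Total reported: ${total:.1f} billion")  # prints "Total reported: $76.9 billion"

for entity, amount in funding.items():
    share = 100 * amount / total
    print(f"{entity}: ${amount} billion ({share:.1f}%)")
```

The component amounts sum to the $76.9 billion total, and the Army, Navy, and Air Force shares match the reported figures; the OSD share recomputes to 87.3 percent rather than 87.2 percent, a rounding artifact of working from rounded inputs.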
However, more than a third of the entities involved in the process did not collect or provide the information necessary for the joint or service-based systems to track those solutions. Rather, there was confusion over whose role it was to collect and enter data into these tracking systems. For example, one program office that develops urgent needs solutions uses operational readiness levels as a metric to track the effectiveness of its solutions. However, the program office does not provide these data to the joint or the services' electronic databases; rather, program office officials stated they believed it was the responsibility of the combatant command, Joint Staff, or service offices that maintain the databases to enter this information. However, officials from the Joint Rapid Acquisition Cell, which maintains the joint database, stated that they obtain data from the other individual databases based on what the DOD components input. DOD and military service officials stated the need for improvements in tracking urgent needs. For example, some senior DOD officials stated that they would like senior acquisition executives and other oversight officials to review every 4 to 6 weeks how joint and service urgent needs are progressing. Combatant command officials stated that while they have visibility into the database for tracking joint urgent operational needs, they do not have the same visibility into the services' databases. Specifically, officials at one combatant command, who stated they have no visibility into the urgent needs being addressed by the services, cited the value of having a global database of all service and joint urgent needs as solutions are developed and then transitioned, transferred, or terminated. Additionally, Army officials recognized the need for improved visibility.
Specifically, the Vice Chief of Staff of the Army issued a memorandum in April 2010 directing the development of a rapid acquisition/rapid equipping common operating picture and collaboration tool as a means to increase the efficiency and transparency of Army urgent needs processes. Stakeholders include various Army entities as well as numerous other entities involved in the process. Without full visibility into all of its urgent needs, the department, military services, and combatant commands risk overlap or duplication in developing capabilities to respond to urgent needs. This reinforces the need for a single focal point at a sufficiently high level to bring greater cohesion to these disparate efforts. According to DOD officials, the need for improved oversight was an important factor in the decision to revise Directive-Type Memorandum 10-002. Furthermore, Joint Rapid Acquisition Cell officials stated that the draft Directive-Type Memorandum 10-002 would require DOD components to provide the Joint Rapid Acquisition Cell with visibility into urgent needs managed through the DOD entities' processes.

DOD Has Not Established a Universal Set of Metrics for Evaluating the Effectiveness and Tracking the Status of Solutions Provided to the Warfighter

Our analysis found that the feedback mechanisms across DOD, the Joint Staff, the military services, JIEDDO, and the Special Operations Command are varied and fragmented. In April 2010, we recommended that DOD develop an established, formal feedback mechanism or channel for the military services to provide feedback to the Joint Chiefs of Staff and the Joint Rapid Acquisition Cell on how well fielded solutions met urgent needs. The department concurred with the recommendation and stated that it would develop new DOD policy and that the Joint Chiefs of Staff would update the Chairman's instruction to establish requirements for oversight and management of the fulfillment of urgent needs.
The majority of DOD urgent needs entities we surveyed reported that they do not collect all the data needed to determine how well these solutions are performing. For example, one entity reported that information on whether a deployed solution was successful is largely anecdotal and that there is no uniformity in the way such data are collected and reported. Additionally, while the Air Force uses its requirements database to track the progress of systems or solutions under development, it has not formalized metrics for assessing the performance of deployed systems or solutions, or for reporting such performance to senior leadership. In April 2010, we also recommended that DOD develop and implement standards for accurately tracking and documenting key process milestones, such as funding, acquisition, fielding, and assessment, and for updating data management systems to create activity reports that facilitate management review and external oversight of the process. DOD agreed with these recommendations and noted actions it planned to take to address them. However, our analysis found that the department lacked a method or metric to track the status of a validated urgent requirement across the services and DOD components, such as whether a requirement currently in development could be applicable to another service. Specifically, officials from one combatant command stated that they do not have visibility into the urgent needs being addressed at the service level; such visibility would make the command aware of capabilities being developed and allow it to approach a particular service if the command saw that service's capability as a solution to one of its own urgent needs. In addition, officials within the Joint Staff recognize the importance of establishing tracking in an urgent needs system and plan to include such language in revisions to their policy on joint urgent operational needs.
With the establishment of a metric or mechanism to track the status of a validated requirement, the department would gain improved awareness of urgent needs as they move through the process.

DOD Lacks a Focal Point Responsible for Managing, Overseeing, and Maintaining Full Visibility over All the Department's Urgent Needs Efforts

DOD's lack of visibility over all urgent needs requests is due in part to the lack of a senior-level focal point (i.e., a gatekeeper) with the responsibility and full visibility to manage, oversee, track, and monitor all emerging capability gaps identified by warfighters in-theater. At present, the department has not established a senior-level focal point to (1) lead the department's efforts to fulfill validated urgent needs requirements, (2) develop and implement DOD-wide policy on the processing of urgent needs or rapid acquisition, or (3) maintain full visibility over its urgent needs efforts and the costs of those efforts. We have previously testified and reported on the benefits of establishing a single point of focus at a sufficiently senior level to coordinate and integrate various DOD efforts to address concerns, such as those related to counterterrorism and the transformation of military capabilities. Moreover, the 2010 Quadrennial Defense Review seeks to further reform the department's institutions and processes to support the urgent needs of the warfighter, buy weapons that are usable, affordable, and truly needed, and ensure that taxpayer dollars are spent wisely and responsibly. Similarly, the Secretary of Defense initiated major efforts in August 2010 to significantly reduce excess costs and apply the savings achieved by reducing duplication and overhead, setting a goal of finding $100 billion in savings over a 5-year period.
Without a senior-level focal point, DOD may be unable to identify areas for improvement, including consolidation; to prioritize validated but unfunded requirements; to identify funding challenges and the means to address them; or to ensure collaboration so that capabilities in development can be modified to meet several similar urgent needs requirements. It may also be unable to reduce any overlap or duplication that may exist as solutions are developed or modified.

Opportunities Exist for Consolidating Urgent Needs Processes and Entities

In addition to not having a comprehensive approach for managing and overseeing its urgent needs efforts, DOD has not conducted a comprehensive evaluation of its urgent needs processes and entities to identify opportunities for consolidation. Given the overlap and potential for duplication we identified in this review, coupled with similar concerns raised by other studies, there may be opportunities for DOD to further improve its urgent needs processes through consolidation. On the basis of our discussions with DOD officials, our analysis of prior reports and studies, and the responses to our data-collection instrument, we identified several options that the department might consider in evaluating the merits of consolidating its urgent needs processes and entities.

DOD Has Not Comprehensively Evaluated Opportunities for Consolidation across the Department

Despite various reports by the Defense Science Board, GAO, and others that raised concerns about the numbers and roles of the various entities and processes involved and the potential for overlap and duplication, DOD has not comprehensively evaluated opportunities for consolidation across the department.
For example, the Defense Science Board Task Force found that DOD has done little to adopt the fulfillment of urgent needs as a critical, ongoing DOD institutional capability essential to addressing future threats, and it twice provided recommendations to the department about potential consolidations. Specifically, in July 2009, the task force identified a number of critical actions to address the situation, including a dual acquisition path that separates "rapid" and "deliberate" acquisitions, as well as the establishment of a new agency, the Rapid Acquisition and Fielding Agency, to implement this separation. Further, the task force stated that this new agency should (1) focus on acquiring new solutions to joint urgent operational needs; (2) work with the combatant commands to anticipate future needs; and (3) oversee and coordinate tracking of all urgent need statements in conjunction with the services and the service components. Contrary to these recommendations, some DOD officials we interviewed across the department expressed concern about the creation of a new agency, since the Secretary of Defense had publicly questioned why it was "necessary to bypass existing institutions and procedures to get the capabilities needed to protect U.S. troops and fight ongoing wars." According to senior OSD officials, the department has conducted studies, including a Lean Six Sigma study, to determine lessons learned from several independent urgent needs processes that might be integrated into the department's main acquisition process. Briefings have been presented to the Under Secretary of Defense for Acquisition, Technology and Logistics making the business case to standardize the department's urgent needs processes, improve support to the warfighter, and achieve greater collaboration across the department. However, DOD has not developed or implemented any courses of action to address the findings of these studies.
Many DOD and military service officials stated that senior leadership at a higher level needs to take decisive action to improve and formalize the department's urgent needs processes, thereby reducing unnecessary duplication in staff, information technology, support, and funding. Until the department comprehensively evaluates its strategic direction on urgent needs, it will be unaware of opportunities for consolidation across the department, opportunities for improved coordination, and other actions to achieve savings or increased efficiencies in its fulfillment of urgent needs. DOD Directive 5105.02 directs the Deputy Secretary of Defense to serve as the Chief Management Officer of the department, with responsibility, among other functions, for establishing performance goals and measures for improving and evaluating overall economy, efficiency, and effectiveness, and for monitoring and measuring the department's progress. Moreover, the department's Strategic Management Plan outlines DOD's five top-level business priorities, the third of which is to "Reform the DoD Acquisition and Support Processes." A goal of this priority is to focus research and development on warfighting requirements in an effort to speed technology transitions focused on warfighting needs. Furthermore, GAO's Business Process Reengineering Assessment Guide establishes that a comprehensive analysis of alternative processes should include a performance-based, risk-adjusted analysis of the benefits and costs of each alternative. Our prior work on business process reengineering has demonstrated the importance of exploring available options, including the potential of each option to achieve the desired goals, as well as determining the benefits, costs, and risks of each.
Other Options Aimed at Consolidation and Increased Efficiencies

Given the overlap and potential for duplication we identified in this review, coupled with similar concerns raised by other studies, we identified and analyzed a number of options aimed at potential consolidations, in an effort to provide ideas for the department to consider in streamlining its urgent needs entities and processes. The options are presented in table 6 below. Using information and documentation provided by DOD officials, prior reports and studies, and the responses to our data-collection instrument, we analyzed each option in terms of its potential capacity to (1) reduce overlap, duplication, or both, if any, in the mission, roles, and key activities; (2) reduce fragmentation and potential gaps in the processes; (3) increase coordination and visibility; and (4) increase efficiencies. We also assessed the advantages and disadvantages of each option. Additionally, while title 10, U.S. Code, provides that the military services are responsible for equipping and training their own forces, DOD officials indicated that title 10 would not preclude consolidating or otherwise streamlining the processing of urgent operational needs to maximize efficiency and responsiveness to the warfighter. The options we identified are not meant to be exhaustive or mutually exclusive. Rather, DOD would need to perform its own analysis, carefully weighing the advantages and disadvantages of the options it identifies to determine the optimal course of action. Additionally, it must be recognized that many entities involved in the fulfillment of urgent needs have other roles as well. For example, while the Biometrics Identity Management Agency may respond directly to an urgent need, it also has the mission to lead the department's activities to program, integrate, and synchronize biometric technologies and capabilities.
Furthermore, several DOD officials also pointed out that although efficiency is important, the speed of development and the effectiveness of solutions are generally a higher priority for urgent needs. When we shared our analysis of options with DOD and military service officials, they agreed that such an analysis, considering all the advantages and disadvantages of consolidation, is a necessary step toward improving the department's fulfillment of urgent needs. Given the increasing number of urgent needs and escalating fiscal challenges, it is critical for DOD to reevaluate how it currently fulfills its urgent needs and whether there is potential to reduce duplication, fragmentation, and overlap to achieve increased efficiencies, cost savings, or both. Without a comprehensive evaluation of its urgent needs entities and processes, DOD will not be in a position to know whether it is fulfilling urgent needs in the most efficient and effective manner and accomplishing its strategic management objectives.

Conclusions

DOD has issued guidance that addresses several aspects of the process for meeting warfighter needs, but the entities that address warfighters' needs lack DOD-wide guidance in such areas as clearly defined roles and responsibilities and minimum requirements for processing requests. Additionally, DOD and military service officials have limited awareness of all urgent needs—including how well those needs are being met—which can hamper their ability to effectively manage the process and identify areas where overlap and duplication exist, in accordance with the department's strategic and long-term goals. Yet DOD does not have a focal point to provide visibility into the totality of these urgent needs activities.
Without DOD-wide guidance on the department’s urgent needs processes and a focal point to lead its overall efforts on urgent operational needs and to act as an advocate within the department for issues related to its ability to rapidly respond to such needs, DOD is likely to continue to risk duplicative, overlapping, and fragmented efforts, which contribute to inefficiency and the loss of potential financial savings. Additionally, without full visibility and the establishment of a metric or mechanism to track the status of a validated requirement, including its transition, the department may not be able to identify key improvements. Moreover, without a formal mechanism or channel for the military services to provide feedback, the department is likely to be unaware of how well fielded solutions are performing. Finally, we acknowledge that rapid response to urgent needs has a high priority, but on the basis of our analyses we believe there are still opportunities to achieve efficiencies without sacrificing responsiveness to the warfighter. Without one DOD office—such as the Chief Management Officer—taking a leadership role in analyzing options for consolidating urgent needs processes and entities, there are both real and potential risks of duplication, overlap, and fragmentation in these efforts, as well as the risk that DOD may not address urgent warfighter needs in the most efficient and cost-effective manner. 
Recommendations for Executive Action

To promote a more comprehensive approach to planning, management, and oversight of the department’s fulfillment of urgent operational needs, we recommend that the Secretary of Defense take the following five actions:

Direct the Under Secretary of Defense for Acquisition, Technology and Logistics to develop and promulgate DOD-wide guidance across all urgent needs processes that

establishes baseline policy for the fulfillment of urgent operational needs;

clearly defines common terms as well as the roles, responsibilities, and authorities of the OSD, Joint Chiefs of Staff, combatant commands, and military services for all phases of the urgent needs process, including, but not limited to, generation, validation, funding, execution, tracking, and management of the transition, termination, or transfer process, and that incorporates all available expedited acquisition procedures;

designates a focal point within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (such as the Rapid Fielding Directorate, or another entity as deemed appropriate) with the appropriate authority and resources, dedicated to leading the department’s urgent needs efforts, including, but not limited to: (1) acting as an advocate within the department for issues related to DOD’s ability to rapidly respond to urgent needs; (2) improving visibility across all urgent needs entities and processes; and (3) ensuring tools and mechanisms are used to track, monitor, and manage the status of urgent needs, from validation through transition, including a formal feedback mechanism or channel for the military services to report how well fielded solutions met urgent needs; and

directs the DOD components to establish minimum processes and requirements for each of the above phases of the process. 
Direct DOD’s Chief Management Officer to evaluate potential options for consolidation to reduce overlap, duplication, and fragmentation, and take appropriate action.

Agency Comments and Our Evaluation

In written comments on a draft of this report, DOD fully concurred with all five of our recommendations. However, DOD stated that the specific actions it will take to address these recommendations will be identified in a report on its urgent needs processes required by the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 and due to Congress in January 2012. The act requires DOD to review its processes for the fielding of capabilities in response to urgent operational needs and to consider such improvements as providing a streamlined and expedited approach, clearly defining the roles and responsibilities for carrying out all phases of the process, and establishing a formal feedback mechanism. Although DOD noted in its comments that the actions to be taken would be identified in its subsequent congressionally mandated report, it did describe some actions it planned to take. For example, DOD agreed to issue guidance addressing our recommendations that it develop and promulgate DOD-wide guidance across all urgent needs processes that establishes a baseline policy and directs DOD components to establish minimum processes and requirements across the urgent needs process. DOD stated this policy will permit DOD components to operate their own processes while maintaining sufficient baseline commonality for DOD oversight. We agree that nothing in our recommendations precludes the DOD components from maintaining their own urgent needs processes, but as we reported, these processes should be part of a comprehensive DOD-wide approach for how all urgent needs are addressed. 
Additionally, with regard to our recommendation that DOD develop guidance identifying a focal point to lead the department’s urgent needs efforts, DOD stated that the Director of the JRAC would act in this capacity pending the outcome of the congressionally mandated study. We agree that this would be a good step toward addressing our recommendation until DOD completes its review. Finally, in concurring with our recommendation that DOD evaluate potential options for consolidation, DOD stated that the Deputy Chief Management Officer and the military services’ Chief Management Officers would provide oversight and assistance in DOD’s review of the end-to-end process with regard to utilizing process improvement techniques and tools. Provided this review specifically includes an evaluation of potential consolidation options, we agree that it would address our recommendation. Technical comments were provided separately and incorporated as appropriate. The department’s written comments are reprinted in appendix II. We are sending copies of this report to interested congressional committees and the Secretary of Defense. This report will be available at no charge on GAO’s website, http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8365 or by e-mail at SolisW@gao.gov. Contact information for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
Appendix I: Scope and Methodology

To determine what entities exist within the Department of Defense (DOD) for responding to urgent operational needs and to what extent, if any, there is fragmentation, overlap, or duplication in their missions, roles, and responsibilities, we reviewed the Defense Science Board Task Force report and used it as our starting point to identify the joint and service entities involved in the fulfillment of urgent operational needs. We interviewed officials from the Defense Science Board Task Force to gain an understanding of their methodology, findings, and recommendations. We developed a 46-question data-collection instrument to collect information from the urgent needs entities identified by the Defense Science Board report to determine the entities’ roles and the extent of their involvement in the various activities of the urgent needs processes. For example, for each entity we collected general information on its mission, roles, and responsibilities; its organizational structure and the impetus for its creation; the roles and processes the entity employs with respect to urgent needs; and specifically how the entity is involved in the vetting, funding, tracking, and transitioning of urgent needs. Prior to fielding the data-collection instrument, we tested it with two entities and adjusted the questions and layout based on the feedback we received. Moreover, in an effort to identify any additional urgent needs entities not captured by the Defense Science Board Task Force or by us in our background research, we employed a “snowball” sampling technique, whereby we provided our list of urgent needs entities to each entity and asked (1) whether it was aware of any others involved in the response to and fulfillment of urgent operational needs and (2) whether it interfaces with any other organizations or programs with regard to managing the urgent operational needs process. 
We then contacted the entities that respondents had identified to better understand the population of urgent needs-related entities. After analyzing the data provided, as well as interviews with DOD, military service, selected combatant command, and entity officials, we judgmentally selected the entities included in our analysis, excluding entities that did not meet our definition of an urgent needs organization. For example, we did not include the department’s Commander’s Emergency Response Program after reviewing its mission and purpose. After the urgent needs entities responded to the data-collection instruments, we created a database and analyzed the variables to gain an understanding of each entity’s mission, roles, and responsibilities, as well as its organizational structure, the impetus for its creation, and the roles and processes it employs with respect to urgent needs. On the basis of these data, as well as our analyses of DOD’s urgent needs policies and guidance, the Defense Science Board Task Force report, and other relevant documents, we identified six broad urgent needs activities that occur after the submission of an urgent needs statement: validation, facilitation, sourcing, execution, tracking, and transition, transfer, or termination. We then analyzed the data obtained through the data-collection instrument and other documentation to identify the prevalence of fragmentation, overlap, or duplication in response to urgent needs between and among the entities and within DOD more generally. In order to present the cost analysis for each urgent needs entity in consistent terms, all cost data in this report are in fiscal year 2010 dollars. We converted cost information to fiscal year 2010 dollars using conversion factors from the DOD Comptroller’s National Defense Budget Estimates for Fiscal Year 2010. 
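The constant-dollar adjustment described above amounts to multiplying each then-year amount by a fiscal year conversion factor. The sketch below illustrates the arithmetic only; the deflator values are hypothetical placeholders, not figures from the DOD Comptroller’s National Defense Budget Estimates for Fiscal Year 2010.

```python
# Illustrative sketch of a then-year to constant-FY2010-dollar conversion.
# The factors below are hypothetical placeholders, NOT the Comptroller's
# published conversion factors.
DEFLATORS_TO_FY2010 = {
    2006: 1.10,  # hypothetical: multiply FY2006 dollars by 1.10
    2007: 1.07,
    2008: 1.04,
    2009: 1.02,
    2010: 1.00,  # FY2010 dollars are the base year and need no adjustment
}

def to_fy2010_dollars(amount_millions: float, fiscal_year: int) -> float:
    """Convert a then-year dollar amount (in millions) to constant FY2010 dollars."""
    return amount_millions * DEFLATORS_TO_FY2010[fiscal_year]

# Example: an entity's cost reported as $50.0 million in FY2007 dollars.
print(to_fy2010_dollars(50.0, 2007))
```

Applying a single base year in this way is what allows cost figures reported by entities in different fiscal years to be summed and compared on a consistent basis.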
To determine the extent to which DOD has a comprehensive approach for managing and overseeing its various activities to address urgent needs identified by warfighters in-theater, we reviewed key documents including the Quadrennial Defense Review, DOD’s Strategic Management Plan, prior National Defense Authorization Acts, and other public laws. We examined these documents to gain an understanding of the department’s strategic goals and of any potential effects each had on the department’s rapid acquisition and urgent needs processes. We analyzed joint and military service policies pertaining to the fulfillment of urgent operational needs, including Chairman of the Joint Chiefs of Staff Instruction 3470.01; Army Regulation 71-9; Air Force Instructions 63-114 and 10-601; Secretary of the Navy Instruction 5000.2C and Secretary of the Navy Notice 5000; Marine Corps Order 3900.17; DOD Joint Improvised Explosive Device Defeat Organization Instruction 5000.01; and U.S. Special Operations Command Directives 70-1 and 71-4, to gain an understanding of the roles and responsibilities involved in fulfilling urgent needs and of what constitutes an urgent need, and to assess whether the department has a comprehensive departmentwide policy establishing a baseline for how urgent needs are to be addressed, including key aspects of the process such as generation, validation, or tracking. Likewise, we analyzed forthcoming DOD policies, including the department’s Directive-Type Memorandum 10-002, which seeks to establish policy, assign responsibilities, and outline procedures for the resolution of joint urgent operational needs. We conducted a comparative analysis of these policies to identify differences among them and the extent of any fragmentation. 
We interviewed relevant DOD officials, including senior defense officials within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, including the Rapid Fielding Directorate, and the Office of the Secretary of Defense, Cost Assessment and Program Evaluation, to gain an understanding of the totality of the department’s efforts to satisfy urgent warfighter requirements as well as of the metrics used to evaluate the effectiveness of the capability solutions developed to address urgent needs. Likewise, we interviewed officials from the Joint Staff, selected combatant commands, and each military service, including acquisition and program management/program executive officials, to further our understanding of how urgent needs are fulfilled; how the processes are managed and overseen; and what improvements, if any, are warranted. In addition, we interviewed officials at each entity we identified to gain an understanding of its mission, roles, and responsibilities; how data on its joint or service-specific fulfillment of urgent needs are tracked and reported to senior-level officials; and what improvements, if any, are warranted. To determine the extent to which DOD comprehensively evaluated its urgent needs entities and processes and identified potential for consolidations, we contacted senior defense officials within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; the Office of the Secretary of Defense, Cost Assessment and Program Evaluation; selected combatant commands; and the military services to identify and obtain any studies, reports, or analyses conducted by the department on its fulfillment of urgent needs. 
Using this information, together with analysis of prior reports and studies and the responses from our data-collection instrument, we developed several options that DOD may wish to consider, including a variety of consolidation options for the entities and processes responsible for responding to urgent operational needs. We tested and analyzed these options in terms of their potential capacity to gain increased efficiencies in the visibility, coordination, management, and oversight of the department’s urgent needs process as well as to reduce duplication, overlap, and fragmentation, if any.

We visited or contacted the following offices during our review:

Intelligence, Surveillance, and Reconnaissance Task Force, Washington, D.C.
Joint Improvised Explosive Device Defeat Organization, Washington, D.C.
Mine Resistant Ambush Protected Vehicle Task Force, Washington, D.C.
Office of the Joint Chiefs of Staff, Force Structure, Resources, and Assessment Directorate (J8), Washington, D.C.
Office of the Secretary of Defense, Cost Assessment and Program Evaluation, Washington, D.C.
Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Washington, D.C.
Defense Procurement and Acquisition Policy Office
Joint Rapid Acquisition Cell
Rapid Fielding Directorate
Complex Systems, Joint Capability Technology Demonstration
U.S. Coalition Warrior Interoperability Demonstration Office, Washington, D.C.
Defense Science Board, Washington, D.C.
Deputy Chief of Staff, Department of the Army, G-3/5/7, Washington, D.C.
Asymmetric Warfare Group, Fort Meade, Maryland
Biometrics Identity Management Agency, Washington, D.C.
Deputy Chief of Staff, Department of the Army, G-3/5/7 Capability Integration Division, Washington, D.C.
Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology, Army Science Board, Arlington, Virginia
Program Executive Office—Command, Control, and Communications–Tactical, Counter Rocket, Artillery, and Mortar Program Directorate, Fort Monmouth, New Jersey
Program Executive Office—Intelligence, Electronic Warfare, and Sensors, Night Vision/Reconnaissance, Surveillance, and Target Acquisition, Fort Belvoir, Virginia
Program Executive Office—Soldier, Directorate of Logistics (G4) (formerly known as Rapid Fielding Initiative Directorate), Fort Belvoir, Virginia
Rapid Equipping Force, Fort Belvoir, Virginia
Research, Development, and Engineering Command, Aberdeen Proving Ground, Maryland
Task Force Observe, Detect, Identify, Neutralize, Washington, D.C.
Training and Doctrine Command, Fort Monroe, Virginia
Army Capabilities Integration Center
Human Terrain System
Chief of Naval Operations, N81D, Washington, D.C.
Deputy Assistant Secretary of the Navy, Expeditionary Warfare, Washington, D.C.
Navy Comptroller’s Office, Washington, D.C.
Office of Naval Research, Office of Transition, Rapid Development and Deployment Program, Arlington, Virginia
Combat Development Command, Capabilities Development Directorate
Office of the Assistant Secretary of the Air Force for Acquisition, Washington, D.C.
Requirements Policy & Process Division, Directorate of Operational Capability Requirements, Washington, D.C.
Air Force Comptroller’s Office, Washington, D.C.
Air Mobility Command, Scott Air Force Base, Illinois
645th Aeronautical Systems Group (Big Safari), Wright-Patterson Air Force Base, Dayton, Ohio
Rapid Capabilities Office, Washington, D.C.
U.S. Central Command, MacDill Air Force Base, Tampa, Florida
U.S. European Command, Stuttgart, Germany
U.S. Northern Command, Peterson Air Force Base, Colorado Springs, Colorado
U.S. Special Operations Command, MacDill Air Force Base, Tampa, Florida
U.S. Transportation Command, Scott Air Force Base, Illinois

We conducted this performance audit from February 2010 to March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, Cary B. Russell (Assistant Director), Usman Ahmad, Laura G. Czohara, Lonnie McAllister II, John Ortiz, Richard Powelson, Steve Pruitt, Amie Steele, Ryan Stott, John Strong, Tristan To, Nicole Vahlkamp, Elizabeth Wood, Delia P. Zee, and Karen Zuckerstein made key contributions to this report.
Forces in Iraq and Afghanistan have faced significant risks of mission failure and loss of life due to rapidly changing enemy threats. In response, the Department of Defense (DOD) established urgent operational needs processes to rapidly develop, modify, and field new capabilities, such as intelligence, surveillance, and reconnaissance (ISR) technology and counter-improvised explosive device (IED) systems. However, GAO, the Defense Science Board, and others have raised concerns about the effectiveness, efficiency, and oversight of DOD’s various urgent needs processes. GAO conducted this review to determine (1) what entities exist within DOD for responding to urgent operational needs, and the extent to which there is fragmentation, overlap, or duplication among them; (2) the extent to which DOD has a comprehensive approach for managing and overseeing its urgent needs activities; and (3) the extent to which DOD has evaluated the potential for consolidations. To conduct this review, GAO examined DOD’s urgent needs processes and collected and analyzed data from urgent needs entities. Over the past two decades, the fulfillment of urgent needs has evolved as a set of complex processes within the Joint Staff, the Office of the Secretary of Defense, each of the military services, and the combatant commands to rapidly develop, equip, and field solutions and critical capabilities to the warfighter. GAO identified at least 31 entities that manage urgent needs and expedite the development of solutions to address them. Moreover, GAO found that some overlap exists. For example, there are numerous points of entry for the warfighter to submit a request for an urgently needed capability, including through the Joint Staff and each military service. Additionally, several entities have focused on developing solutions for the same subject areas, such as counter-IED and ISR capabilities, potentially resulting in duplication of efforts. 
For example, both the Army and the Marine Corps had their own separate efforts to develop counter-IED mine rollers. DOD has taken steps to improve its fulfillment of urgent needs, but the department does not have a comprehensive approach to manage and oversee the breadth of its activities to address capability gaps identified by warfighters in-theater. Steps DOD has taken include developing policy to guide joint urgent need efforts and working to establish a senior oversight council to help synchronize DOD's efforts. Federal internal control standards require detailed policies, procedures, and practices to help program managers achieve desired results through effective stewardship of public resources. However, DOD does not have a comprehensive, DOD-wide policy that establishes a baseline and provides a common approach for how all joint and military service urgent needs are to be addressed. Moreover, DOD lacks visibility over the full range of its urgent needs efforts. For example, DOD cannot readily identify the cost of its departmentwide urgent needs efforts, which is at least $76.9 billion based on GAO's analysis. Additionally, DOD does not have a senior-level focal point to lead the department's efforts to fulfill validated urgent needs requirements. Without DOD-wide guidance and a focal point to lead its efforts, DOD risks having duplicative, overlapping, and fragmented efforts, which can result in avoidable costs. DOD also has not comprehensively evaluated opportunities for consolidation across the department. GAO's Business Process Reengineering Assessment Guide establishes that such a comprehensive analysis of alternative processes should be performed, to include a performance-based, risk-adjusted analysis of benefits and costs for each alternative. 
In an effort to examine various ways the department might improve its fulfillment of urgent needs, GAO identified and analyzed several potential consolidation options, ranging from consolidation of all DOD urgent needs entities to more limited consolidation of key functions. Until DOD comprehensively evaluates its strategic direction on urgent needs, it will be unaware of opportunities for consolidation as well as opportunities for increased efficiencies in its fulfillment of urgent needs.
Key Practices for Effective Performance Management

We identified specific practices that leading public sector organizations, both in the United States and abroad, have used in their performance management systems to create a clear linkage—a “line of sight”—between individual performance and organizational success. Federal agencies should consider these practices as they develop and implement the modern, effective, and credible performance management systems, with adequate safeguards such as reasonable transparency and appropriate accountability mechanisms, needed to effectively link pay to performance. The key practices include the following.

1. Align individual performance expectations with organizational goals. An explicit alignment of daily activities with broader results helps individuals see the connection between their daily activities and organizational goals and encourages them to focus on their roles and responsibilities in helping to achieve those goals. For example, the Federal Aviation Administration (FAA) was able to show in fiscal year 2000 how the Department of Transportation’s strategic goal to promote public health and safety cascaded through the FAA Administrator’s performance expectation to reduce the commercial air carrier fatal accident rate down to a program director’s performance expectation to develop software to help aircraft maintain safe altitudes in their approach paths.

2. Connect performance expectations to crosscutting goals. As public sector organizations shift their focus of accountability from outputs to results, they have recognized that the activities needed to achieve those results often transcend specific organizational boundaries. 
High-performing organizations use their performance management systems to strengthen accountability for results, specifically by placing greater emphasis on fostering the collaboration, interaction, and teamwork across organizational boundaries needed to achieve those results. In this regard, the Veterans Health Administration’s Veterans Integrated Service Network (VISN) headquartered in Cincinnati implemented performance agreements in 2000 for the “care line” directors, such as the primary care or mental health directors, that included improvement goals related to that care line for the entire VISN. To make progress toward these goals, the mental health care line director had to work collaboratively with the corresponding mental health care line managers at each of the four medical centers to establish consensus among VISN officials and external stakeholders on the strategic direction for the services provided by the mental health care line across the VISN, among other things.

3. Provide and routinely use performance information to track organizational priorities. High-performing organizations provide objective performance information to individuals to show progress in achieving organizational results and other priorities and to help them manage during the year, identify performance gaps, and pinpoint improvement opportunities. Having this performance information in a useful format also helps individuals track their performance against organizational goals and compare it to that of other individuals. For example, the Bureau of Land Management’s (BLM) Web-based data system, called the Director’s Tracking System, collects and makes available on a real-time basis data on each senior executive’s progress in his or her state office toward BLM’s organizational priorities, such as the wild horse and burro program, and the resources expended on each priority.

4. Require follow-up actions to address organizational priorities. 
High-performing organizations require individuals to take follow-up actions based on the performance information available to them. By requiring and tracking such follow-up actions on performance gaps, these organizations underscore the importance of holding individuals accountable for making progress on their priorities. For example, in 2001 the Federal Highway Administration required senior executives to use 360-degree feedback instruments to solicit employees’ views on their leadership skills. The senior executives were to identify action items based on the feedback and incorporate them into their individual performance plans for the next fiscal year. While the 360-degree feedback instrument was intended for developmental purposes, to help senior executives identify areas for improvement, and is not included in the executives’ performance evaluations, executives were held accountable for taking some action on the 360-degree feedback results and responding to the concerns of their peers, customers, and subordinates.

5. Use competencies to provide a fuller assessment of performance. High-performing organizations use competencies, which define the skills and supporting behaviors that individuals need to effectively contribute to organizational results, as part of valid, reliable, and transparent performance management systems. To this end, the Internal Revenue Service (IRS) implemented a performance management system in fiscal year 2000 that requires executives and managers to include critical job responsibilities with supporting behaviors (broad actions and competencies) in their performance agreements each year. The critical job responsibilities and supporting behaviors are intended to provide executives and managers with a consistent message about how their daily activities are to reflect the organization’s core values.

6. Link pay to individual and organizational performance. 
High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. At the same time, these organizations recognize that valid, reliable, and transparent performance management systems with adequate safeguards for employees are a precondition to such an approach. In the Canadian province of Ontario, an individual executive’s performance pay is based on the performance of the provincial government as a whole, the executive’s home ministry, the ministry’s contribution to governmentwide results, and the individual’s own performance. The amount of the award can range up to 20 percent of base salary.

7. Make meaningful distinctions in performance. Effective performance management systems seek to achieve three key objectives that help make meaningful distinctions in performance: (1) they strive to provide candid and constructive feedback to help individuals maximize their contribution and potential in understanding and realizing the goals and objectives of the organization, (2) they seek to provide management with the objective, fact-based information it needs to reward top performers, and (3) they provide the necessary information and documentation to deal with poor performers. For example, IRS established an executive compensation plan for determining base salary, performance bonuses, and other awards for its senior executives that is intended to explicitly link individual performance to organizational performance. As part of this plan, IRS converts senior executive performance appraisal ratings into points to help ensure realistic and consistent performance ratings. Each IRS business unit has a “point budget” for assigning performance ratings, which is the total of four points for each senior executive in the unit. 
For fiscal year 2001, an “outstanding” rating converted to six points; an “exceeded” rating to four points, the baseline; a “met” rating to two points; and a “not met” rating to zero points. If a business unit exceeded its point budget, it had the opportunity to request additional points from the Deputy Commissioner. IRS officials indicated that none of the business units requested additional points for the fiscal year 2001 ratings. The senior executive performance appraisal ratings and bonuses for fiscal year 2001 show that IRS is beginning to make distinctions in pay related to performance. For fiscal year 2001, 31 percent of the senior executives received a rating of outstanding, compared to 42 percent for fiscal year 2000; 49 percent received a rating of exceeded expectations, compared to 55 percent; and 20 percent received a rating of met expectations, compared to 3 percent. In fiscal year 2001, 52 percent of senior executives received a bonus, compared to 56 percent in fiscal year 2000. IRS officials said that IRS is still gaining experience with the new compensation plan and will wait to establish trend data before evaluating the link between performance and bonus decisions.

8. Involve employees and stakeholders to gain ownership of performance management systems. High-performing organizations have found that actively involving employees and stakeholders in developing performance management systems, and providing ongoing training on the systems, helps increase their understanding and ownership of the organizational goals and objectives. As one of the most important safeguards they can put in place, these leading organizations consulted a wide range of employees and stakeholders early in the process, obtained direct feedback from them, and engaged employee unions or associations. 
For example, in New Zealand, an agreement between government and the primary public service union created a “Partnership for Quality” framework that provides for ongoing, mutual consultation on issues such as performance management. Specifically, the Department of Child, Youth, and Family Services and the Public Service Association entered into a joint partnership agreement that emphasizes the importance of mutual consideration of each other’s organizational needs and constraints. 9. Maintain continuity during transitions. The experience of successful cultural transformations and change management initiatives in large public and private organizations suggests that it can often take 5 to 7 years until such initiatives are fully implemented and cultures are transformed in a substantial manner. Because this time frame can easily outlast the tenures of top political appointees, high-performing organizations recognize that they need to reinforce accountability for organizational goals during times of leadership transitions through the use of performance agreements as part of their performance management systems. For example, the Ontario Public Service institutionalized the use of performance agreements in its performance management system to withstand organizational changes and cascaded the performance agreements from top leadership to front line employees. Creating a Results- Oriented Approach to Federal Pay With the performance management practices of leading organizations in mind, we need to fundamentally rethink our approach to federal pay and develop an approach that places a greater emphasis on a person’s knowledge, skills, position, and performance rather than the passage of time, the rate of inflation, and geographic location. Under the current federal pay system, the overwhelming majority of each year’s increase in federal employee pay is largely unrelated to an employee’s knowledge, skills, position, or performance. 
In fact, over 80 percent of the cost associated with the annual increases in federal salaries is due to longevity and the annual pay increase. In addition, current federal pay gaps vary by the nature of the person’s position; yet the current method for addressing the pay gap assumes that it is the same throughout government. We must move beyond this outdated, “one size fits all” approach to paying federal employees. Under authorities granted by the Congress, a number of agencies are at various stages in using approaches in their pay and award systems that are designed to be more flexible and results-oriented. U.S. General Accounting Office. We at GAO believe it is our responsibility to lead by example. Our people are our most valuable asset, and it is only through their combined efforts that we can effectively serve our clients and country. By managing our workforce strategically and focusing on results, we are helping to maximize our own performance and ensure our own accountability. By doing so, we also hope to demonstrate to other federal agencies that they can make similar improvements in the way they manage their people. We have identified and made use of a variety of tools and flexibilities, some of which were made available to us through the GAO Personnel Act of 1980 and our human capital legislation enacted in 2000, but most of which are available to federal agencies. The most prominent change in human capital management that we implemented as a result of the GAO Personnel Act of 1980 was a broadbanded pay-for-performance system. The primary goal of this system is to base employee compensation primarily on the knowledge, skills, and performance of individual employees. It provides managers flexibility to assign and use employees in a manner that is more suitable to multi-tasking and the full use of staff. Importantly, careful design and effective implementation are crucial to obtaining the benefits of broadbanding in an equitable and cost-effective manner. 
Under our current broadbanded system, analyst and analyst-related staff in grades 7 through 15 were placed in three bands. High-performing organizations continually review and revise their performance management systems to support their strategic goals. In that spirit, we expect to modify our banded system in the future based on our experience to date. In January 2002, we implemented a new competency-based performance management system that is intended to create a clear linkage between employee performance and our strategic plan and core values. It includes 12 competencies that our employees overwhelmingly validated as the keys to meaningful performance at GAO. The competencies include maintaining client and customer focus, presenting information in writing, facilitating and implementing change, and leading others. These competencies are the centerpiece of our other human capital programs, such as promotions, pay decisions, and recognition and rewards. Under our revised system, pay-banded employees are placed in one of five pay categories based on their demonstrated competencies, performance, and contributions to organizational goals. Merit pay increases across these five categories range from up to about $5,700 for some of those in the top pay category to no merit increases for those in the lowest category. In addition, those in the top two categories receive bonuses, referred to as “Dividend Performance Awards,” of $1,000 and $500, respectively. As a result of GAO's implementation of its new competency-based performance management system and other changes to key human capital programs, GAO has been able to achieve greater dispersion in its performance appraisals and merit pay decisions. For example, for fiscal year 2002, the GAO-wide average performance appraisal rating was 2.19 (out of 5) compared with 4.26 (out of 5) for fiscal year 2001. 
Similarly, under the new system, no employees received a score of 4.7 or higher, while 19 percent of employees received a score of 4.7 or higher for fiscal year 2001. Federal Aviation Administration. The Congress granted FAA wide-ranging personnel authorities in 1996 by exempting the agency from key parts of Title 5. Among the initiatives FAA subsequently introduced were a pay system in which compensation levels are set within pay bands and a performance management system intended to improve employees’ performance through more frequent feedback with no summary rating. The pay band system includes plans tailored to specific employee segments: a core compensation plan for the majority of nonunion employees and negotiated versions of the core compensation plan for employees represented by unions; a unique pay plan for air traffic controllers and air traffic managers; and an executive pay plan for nonpolitical executives, managers, and some senior professionals. Under its core compensation plan, all eligible employees can receive permanent pay increases, called organizational success increases, based on the FAA Administrator’s assessment of the extent to which the entire agency has achieved its annual goals. In addition, notably high-performing individuals may receive additional permanent pay increases, called superior contribution increases, based on supervisory recommendation. The criteria for awarding a superior contribution increase include collaboration, customer service, and impact on organizational success. At the end of the performance evaluation cycle, employees receive a narrative performance summary instead of a year-end rating that defines employees’ performance in specific categories. That is, FAA's performance management system does not use a multi-tiered rating system to rate individual employee performance. 
We have previously raised concerns that such approaches may not provide enough meaningful information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. Moreover, FAA employee performance summaries reflect an assessment of achievements based on outcomes and expectations, while professional competencies such as collaboration and customer service are elements of the compensation system. As a result, the performance management system is not directly linked to pay elements in FAA’s compensation systems. In February 2003, we reported that FAA’s human capital reform efforts were still in progress. While FAA has established preliminary linkages between its reform goals and the agency’s program goals, we found that the lack of explicit linkage will make it difficult to assess the effects of the reform initiatives on the program goals of the organization even after data, measurable goals, and performance measures for human capital management efforts are established. FAA has acknowledged the importance of establishing these elements and has repeatedly said that it is working to collect and analyze data and develop performance goals and measures. However, it has not completed these critical tasks, nor has it established specific steps and time frames by which it will do so. Internal Revenue Service. IRS was granted broad authority related to its human capital management through the IRS Restructuring and Reform Act of 1998. The Restructuring and Reform Act gave the Secretary of the Treasury various pay and hiring flexibilities not otherwise available under Title 5, such as the authority to establish new systems for hiring and staffing, compensation, and performance management. Some of these flexibilities are intended to allow IRS managers more discretion in rewarding good performers and in making employees accountable for their performance. 
IRS implemented new performance management systems for executives and managers for fiscal year 2000 and for the front line employees for fiscal year 2001. As an initial step, IRS implemented a pay for performance system for senior executives beginning in fiscal year 2001, which emphasizes performance in determining compensation and makes meaningful distinctions in senior executive performance. In July 2002, we reported that IRS had not completed all the elements of the redesign that it envisioned. IRS said that it expects to integrate the new systems with its overall human resources systems linking evaluations to decisions about developmental needs, rewards and recognition, and compensation. IRS anticipates that the complete redesign and implementation of the performance management systems will take about 5 years. OPM Personnel Demonstration Projects. Personnel demonstration projects, authorized by OPM under the authority provided by the Civil Service Reform Act of 1978, provide a means for testing and introducing beneficial change in governmentwide human resources management systems. Over the past 25 years, 17 demonstration projects have been implemented across the federal government. Twelve of these demonstration projects have implemented some form of pay for performance compensation system. OPM reports that demonstration projects that have implemented pay for performance have shown increased retention of high performers. To become a demonstration project, a federal agency obtains authority from OPM to waive existing federal human resources management law and regulations in Title 5 and propose, develop, test, and evaluate interventions for its own human resources management system that shape the future of federal human resource management. Under the demonstration project authority, OPM approves project plans and regulations, approves project evaluation plans, provides technical assistance to agencies, publishes plans, and disseminates results. 
The agencies are responsible for designing and implementing project plans and regulations; consulting with unions and employees about project design; and designing, conducting, and funding evaluations. For example, the Department of Defense (DOD) implemented a personnel demonstration project covering members of its civilian acquisition, technology, and logistics workforce in 1999. Recognizing the need to reform and modernize its acquisition performance management system in order to perform efficiently and effectively, DOD designed the project to provide incentives and rewards to multi-skilled personnel, allow managers to compete with the private sector for the best talent and make timely job offers, and provide an environment that promotes employee growth and improves local managers’ ability and authority to manage their workforces. The project replaced 22 occupational families with 3 career paths; reduced the 15 General Schedule grades to 3 to 5 pay bands; and implemented a contribution-based compensation and appraisal system, which measures an employee’s contribution to the mission and goals of the organization. This compensation system is designed to enable the organization to motivate and equitably compensate employees based on their contribution to the mission. Salary adjustments and contribution awards are to be based on an individual’s overall annual contribution when compared to all other employees and their current level of compensation. Contribution is to be measured using a standard set of competencies that apply to all career paths. These competencies are (1) problem solving, (2) teamwork/cooperation, (3) customer relations, (4) leadership/supervision, (5) communication, and (6) resource management. A detailed evaluation of project results, due to OPM in May of this year, is to assess such fundamental issues as the extent to which the demonstration project improved the link between pay and contribution to organizational goals and objectives. 
Preliminary data indicate that the attrition rate for high contributors is declining while the attrition rate for low contributors is increasing. DOD officials we spoke with told us that increased pay setting flexibility has allowed organizations to offer more competitive salaries, which in turn has improved recruiting. Next Steps for Results-Oriented Pay Reform We believe that as part of the exploration now under way of using more market- and performance-based approaches to federal pay, we need to continue to experiment with providing agencies with the flexibility to pilot alternative approaches to setting pay and linking pay to performance. In the short term, the Congress may wish to explore the benefits of broadbanding by (1) giving OPM additional flexibility that would enable it to grant governmentwide authority for all agencies (i.e., class exemptions) to use broadbanding for certain critical occupations and/or (2) allowing agencies to apply to OPM (i.e., case exemptions) for broadbanding authority for their specific entities or occupations. However, agencies should be required to demonstrate to OPM’s satisfaction that they have modern, effective, credible, and validated performance management systems in place before they are allowed to use broadbanding or related pay for performance initiatives. This is consistent with the approach that the Congress took with raising the total annual compensation limit for senior executives as part of the Homeland Security Act. The Congress may also want to consider providing guidance on the criteria that OPM should use in making judgments about individual agencies’ performance management systems. We believe that the practices we described today could serve as a starting point for that consideration. In summary, there is widespread agreement that the basic approach to federal pay is broken and we need to move to a more market- and performance-based approach. 
Doing so will be essential if we expect to maximize the performance and assure the accountability of the federal government for the benefit of the American people. Reasonable people can and will debate and disagree about the merits of individual reform proposals. However, all should be able to agree that a performance management system with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms in place, must serve as the fundamental underpinning of any fair, effective, and appropriate results-oriented pay reform. The practices that have been used by leading organizations in developing and using their performance management systems to link organizational goals to individual performance and create a line of sight between an individual’s activities and organizational results show the way to implement performance management systems with the necessary attributes. Chairwoman Davis and Members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions that you may have. Contact and Acknowledgments For further information regarding this statement, please contact J. Christopher Mihm, Director, Strategic Issues, on (202) 512-6806 or at mihmj@gao.gov. Individuals making key contributions to this testimony included Anne Kidd, Janice Lichty, Lisa Shames, Marti Tracy, and Andrew White.
There is widespread agreement that the basic approach to federal pay is broken and that it needs to be more market- and performance-based. Doing so will be essential if the federal government is to maximize its performance and assure accountability for the benefit of the American people. While there will be debate and disagreement about the merits of individual reform proposals, all should be able to agree that a performance management system with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms in place, must serve as the fundamental underpinning of any fair, effective, and appropriate pay reform. At the request of the Subcommittee on Civil Service and Agency Organization, House Committee on Government Reform, GAO discussed the key practices for effective performance management that federal agencies should consider as they develop and implement performance management systems as part of any pay reform. The need for results-oriented pay reform is one of the most pressing human capital issues facing the federal government today. To implement results-oriented pay reform, commonly referred to as "pay for performance," agencies must have modern, effective, credible, and validated performance management systems that are capable of supporting pay and other personnel decisions. Pay for performance works only with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms in place, to ensure its fair, effective, and responsible implementation. Modern performance management systems are the centerpiece of those safeguards and accountability. Most federal agencies are a long way from meeting this test. All too often, agencies' performance management systems are based on episodic and paper-intensive exercises that are not linked to the strategic plan of the organization and have only a modest impact on the pay, use, development, and promotion potential of federal workers. 
Leading organizations, on the other hand, use their performance management systems to accelerate change, achieve desired organizational results, and facilitate two-way communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing. Effective performance management systems are not merely used for once- or twice-yearly individual expectation setting and ratings processes, but are tools to help the organization manage on a day-to-day basis. GAO identified key practices leading public sector organizations both here in the United States and abroad have used in their performance management systems to link organizational goals to individual performance and create a "line of sight" between an individual's activities and organizational results. These practices can help agencies develop and implement performance management systems with the attributes necessary to effectively support pay for performance.
Fragmented Investment Decision Making, Unexecutable Programs, and Lack of Accountability Underlie Poor Acquisition Outcomes Over the past several years our work has highlighted a number of underlying systemic causes for cost growth and schedule delays at both the strategic and program levels. At the strategic level, DOD’s processes for identifying warfighter needs, allocating resources, and developing and procuring weapon systems—which together define DOD’s overall weapon system investment strategy—are fragmented. As a result, DOD fails to effectively address joint warfighting needs and commits to more programs than it has resources for, thus creating unhealthy competition for funding. At the program level, a military service typically establishes and DOD approves a business case containing requirements that are not fully understood and cost and schedule estimates that are based on overly optimistic assumptions rather than on sufficient knowledge. Once a program begins, it too often moves forward with inadequate technology, design, testing, and manufacturing knowledge, making it impossible to successfully execute the program within established cost, schedule, and performance targets. Furthermore, DOD officials are rarely held accountable for poor decisions or poor program outcomes. DOD Lacks an Integrated Approach to Balance Weapon System Investments At the strategic level, DOD largely continues to define warfighting needs and make investment decisions on a service-by-service and individual platform basis, using fragmented decision-making processes. This approach makes it difficult for the department to achieve a balanced mix of weapon systems that are affordable and feasible and that provide the best military value to the joint warfighter. 
In contrast, we have found that successful commercial enterprises use an integrated portfolio management approach to focus early investment decisions on products collectively at the enterprise level and ensure that there is a sound basis to justify the commitment of resources. By following a disciplined, integrated process—during which the relative pros and cons of competing product proposals are assessed based on strategic objectives, customer needs, and available resources, and where tough decisions about which investments to pursue and not to pursue are made—companies minimize duplication between business units, move away from organizational stovepipes, and effectively support each new development program they commit to. To be effective, integrated portfolio management must have strong, committed leadership; empowered portfolio managers; and accountability at all levels of the organization. DOD determines its capability needs through the Joint Capabilities Integration and Development System (JCIDS). While JCIDS provides a framework for reviewing and validating needs, it does not adequately prioritize those needs from a joint, departmentwide perspective; lacks the agility to meet changing warfighter demands; and validates almost all of the capability proposals that are submitted. We recently reviewed JCIDS documentation related to new capability proposals and found that most—almost 70 percent—were sponsored by the military services with little involvement from the joint community, including the combatant commands, which are responsible for planning and carrying out military operations. Because DOD also lacks an analytic approach to determining the relative importance of the capabilities needed for joint warfighting, all proposals appear to be treated as equal priorities within the JCIDS process. 
By continuing to rely on capability needs defined primarily by the services, DOD may be losing opportunities for improving joint warfighting capabilities and reducing the duplication of capabilities in some areas. The JCIDS process has also proven to be lengthy and cumbersome—taking, on average, up to 10 months to validate a need—thus undermining the department’s efforts to effectively respond to the needs of the warfighter, especially those needs that are near term. Furthermore, the vast majority of capability proposals that enter the JCIDS process are validated or approved without accounting for the resources or technologies that will be needed to acquire the desired capabilities. Ultimately, the process produces more demand for new weapon system programs than available resources can support. The funding of proposed programs takes place through a separate process, the department’s Planning, Programming, Budgeting, and Execution (PPBE) system, which is not synchronized with JCIDS. While JCIDS is a continuous, need-driven process that unfolds in response to capability proposals as they are submitted by sponsors, PPBE is a calendar-driven process comprising phases occurring over a 2-year cycle, which can lead to resource decisions for proposed programs occurring several years later. In addition, because PPBE is structured by military service and defense programs and not by the joint capability areas being used in JCIDS, it is difficult to link resources to capabilities. The PPBE process also largely allocates resources based on historical trends rather than on a strategic basis. Service shares of the overall budget have remained relatively static for decades, even though DOD’s strategic environment and warfighting needs have changed dramatically in recent years. 
Because DOD’s programming and budgeting reviews occur at the back end of the PPBE process—after the services have developed their budgets—it is difficult and disruptive to make changes, such as terminating programs to pay for new, higher-priority programs. We recently reviewed the impact of the PPBE process on major defense acquisition programs and found that the process does not produce an accurate picture of the department’s resource needs for weapon system programs, in large part because it allows too many programs to go forward with unreliable cost estimates and without a commitment to fully fund them. The cost of many of the programs we reviewed exceeded the funding levels planned for and reflected in the Future Years Defense Program (FYDP)—the department’s long-term investment strategy (see fig. 1). DOD’s failure to balance its needs with available resources promotes an unhealthy competition for funding that encourages sponsors of weapon system programs to pursue overly ambitious capabilities and underestimate costs to appear affordable. Rather than limit the number and size of programs or adjust requirements, DOD opts to push the real costs of programs to the future. With too many programs under way for the available resources and high cost growth occurring in many programs, the department must make up for funding shortfalls by shifting funds from one program to pay for another, reducing system capabilities, cutting procurement quantities, or in rare cases terminating programs. Such actions not only create instability in DOD’s weapon system portfolio, they further obscure the true future costs of current commitments, making it difficult to make informed investment decisions. 
Initiating Programs with Inadequate Knowledge of Requirements and Resources Often Results in Poor Outcomes At the program level, the key cause of poor outcomes is the approval of programs with business cases that contain inadequate knowledge about requirements and the resources—funding, time, technologies, and people—needed to execute them. Our work in best practices has found that an executable business case for a program demonstrates evidence that (1) the identified needs are real and necessary and that they can best be met with the chosen concept and (2) the chosen concept can be developed and produced within existing resources. Over the past several years, we have found no evidence of the widespread adoption of such an approach for major acquisition programs in the department. Our annual assessments of major weapon systems have consistently found that the vast majority of programs began system development without mature technologies and moved into system demonstration without design stability. The chief reason for these problems is the encouragement within the acquisition environment of overly ambitious and lengthy product developments—sometimes referred to as revolutionary or big bang acquisition programs—that embody too many technical unknowns and not enough knowledge about the performance and production risks they entail. The knowledge gaps are largely the result of a lack of early and disciplined systems engineering analysis of a weapon system’s requirements prior to beginning system development. Systems engineering translates customer needs into specific product requirements for which requisite technological, software, engineering, and production capabilities can be identified through requirements analysis, design, and testing. 
Early systems engineering provides the knowledge a product developer needs to identify and resolve performance and resource gaps before product development begins by either reducing requirements, deferring them to the future, or increasing the estimated cost for the weapon system’s development. Because the government often does not perform the proper up-front requirements analysis to determine whether the program will meet its needs, significant contract cost increases can and do occur as the scope of the requirements changes or becomes better understood by the government and contractor. Not only does DOD not conduct disciplined systems engineering prior to the beginning of system development, it has allowed new requirements to be added well into the acquisition cycle. We have reported on the negative impact that poor systems engineering practices have had on several programs, such as the Global Hawk Unmanned Aircraft System, F-22A, Expeditionary Fighting Vehicle, and Joint Air-to-Surface Standoff Missile. With high levels of uncertainty about requirements, technologies, and design, program cost estimates and related funding needs are often understated, effectively setting programs up for cost and schedule growth. We recently assessed the service and independent cost estimates for 20 major weapon system programs and found that while the independent estimates were somewhat higher, both estimates were too low in most cases. In some of the programs we reviewed, cost estimates have been off by billions of dollars. For example, the initial service estimate for the development of the Marines’ Expeditionary Fighting Vehicle was about $1.1 billion. The department’s Cost Analysis Improvement Group (CAIG) estimated the development cost of the program to be $1.4 billion, but development costs for the program are now expected to be close to $3.6 billion. 
In the case of the Future Combat System (FCS), the Army’s initial estimate for the development cost was about $20 billion, while CAIG’s estimate was $27 billion. The department began the program using the program office’s estimate of $20 billion, but development costs for the FCS are now estimated to be $28 billion and the program is still dealing with significant technical risk. Estimates this far off the mark do not provide the necessary foundation for sufficient funding commitments and realistic long-term planning. The programs we reviewed frequently lacked the knowledge needed to develop realistic cost estimates. For example, program Cost Analysis Requirements Description documents—used to build the program cost estimate—often lack sufficient detail about planned program content for developing sound cost estimates. Without this knowledge, cost estimators must rely heavily on parametric analysis and assumptions about system requirements, technologies, design maturity, and the time and funding needed. A cost estimate is then usually presented to decision makers as a single, or point, estimate that is expected to represent the most likely cost of the program but provides no information about the range of risk and uncertainty or level of confidence associated with the estimate. Lack of Accountability for Making Weapon System Decisions Hinders Achieving Successful Outcomes DOD’s requirements, resource allocation, and acquisition processes are led by different organizations, thus making it difficult to hold any one person or organization accountable for saying no to a proposed program or for ensuring that the department’s portfolio of programs is balanced. DOD’s 2006 Defense Acquisition Performance Assessment study observed that these processes are not connected organizationally at any level below the Deputy Secretary of Defense and concluded that this weak structure induces instability and inhibits accountability. 
Furthermore, a former Under Secretary of Defense for Acquisition, Technology and Logistics has stated that weapon system investment decisions are a shared responsibility in the department and that, therefore, no one individual is accountable for these decisions. Frequent turnover in leadership positions in the department exacerbates the problem. The average tenure, for example, of the Under Secretary of Defense for Acquisition, Technology and Logistics over the past 22 years has been only about 20 months. When DOD’s strategic processes fail to balance needs with resources and allow unsound, unexecutable programs to move forward, program managers cannot be held accountable when the programs they are handed already have a low probability of success. Program managers are also not empowered to make go or no-go decisions, have little control over funding, cannot veto new requirements, and have little authority over staffing. At the same time, program managers frequently change during a program’s development, making it difficult to hold them accountable for the business cases that they are entrusted to manage and deliver. The government’s lack of control over and accountability for decision making is further complicated by DOD’s growing reliance on technical, business, and procurement expertise supplied by contractors. This reliance may reach the point where the foundation upon which decisions are based is largely crafted by individuals who are not employed by the government, who are not bound by the same rules governing their conduct, and who are not required to disclose any financial or other personal interests they may have that conflict with the responsibilities they perform under contract tasks for DOD. 
For example, while the total planned commitments to major acquisition programs have doubled over recent years, the size of the department’s systems engineering workforce has remained relatively stable, leading program offices to rely more on contractors for systems engineering support. Further, in systems development, DOD typically uses cost-reimbursement contracts, in which it generally pays the reasonable, allocable, and allowable costs incurred for the contractor’s best efforts, to the extent provided by the contract. The use of these contracts may contribute to the perpetuation of an acquisition environment that lacks incentives for contractors to follow best practices and keep costs and schedules in check.

Recent DOD Policy Changes Could Improve Future Performance of Weapon System Programs

The department understands many of the problems that affect acquisition programs and has recently taken steps to remedy them. It has revised its acquisition policy and introduced several initiatives, based in part on direction from Congress and recommendations from GAO, that could provide a foundation for establishing sound, knowledge-based business cases for individual acquisition programs. However, to improve outcomes, DOD must ensure that its policy changes are consistently implemented and reflected in decisions on individual programs—not only new program starts but ongoing programs as well. In the past, inconsistent implementation of existing policy has hindered DOD’s efforts to execute acquisition programs effectively. Moreover, while policy improvements are necessary, they may be insufficient unless the broader strategic issues associated with the department’s fragmented approach to managing its portfolio of weapon system investments are also addressed. 
In December 2008, DOD revised its policy governing major defense acquisition programs in ways intended to provide key department leaders with the knowledge needed to make informed decisions before a program starts and to maintain disciplined development once it begins. The revised policy recommends the completion of key systems engineering activities before the start of development, includes a requirement for early prototyping, establishes review boards to identify and mitigate technical risks and evaluate the impact of potential requirements changes on ongoing programs, and incorporates program manager agreements to increase leadership stability and management accountability. The policy also establishes early milestone reviews for programs going through the pre–systems acquisition phase. In the past, DOD’s acquisition policy may have encouraged programs to rush into systems development without sufficient knowledge, in part, because no formal milestone reviews were required before system development. If implemented, these policy changes could help programs replace risk with knowledge, thereby increasing the chances of developing weapon systems within cost and schedule targets while meeting user needs. Some aspects of the policy were first pilot-tested on selected programs, such as the Joint Light Tactical Vehicle program, and indications are that these programs are in the process of acquiring the requisite knowledge before the start of systems development. 
Some key elements of the department’s new acquisition policy include:
- a new materiel development decision as a starting point for all programs, regardless of where they are intended to enter the acquisition process;
- a more robust Analysis of Alternatives (AOA) to assess potential materiel solutions that address a capability need validated through JCIDS;
- a cost estimate for the proposed solution identified by the AOA;
- early program support reviews by systems engineering teams;
- competitive prototyping of the proposed system or key system elements as part of the technology development phase;
- certifications for entry into the technology development and system development phases (as required by congressional legislation);
- a preliminary design review that may be conducted before the start of system development; and
- configuration steering boards to review all requirements and technical changes that have the potential to affect cost and schedule.

As part of its strategy for enhancing the roles of program managers in major weapon system acquisitions, the department has established a policy that requires formal agreements among program managers, their acquisition executives, and the user community setting forth common program goals. These agreements are intended to be binding and to detail the progress the program is expected to make during the year and the resources the program will be provided to reach these goals. DOD also requires program managers to sign tenure agreements so that their tenure will correspond to the next major milestone review closest to 4 years. The department acknowledges that any actions taken to improve accountability must be based on a foundation whereby program managers can launch and manage programs toward successful performance, rather than focusing on maintaining support and funding for individual programs. 
DOD acquisition leaders have also stated that any improvements to program managers’ performance depend on the department’s ability to promote requirements and resource stability over weapon system investments. Over the past few years, DOD has also been testing portfolio management approaches in selected capability areas—command and control, net-centric operations, battlespace awareness, and logistics—to facilitate more strategic choices for resource allocation across programs. The department recently formalized the concept of capability portfolio management, issuing a directive in 2008 that established policy and assigned responsibilities for portfolio management. The directive established nine joint capability area portfolios, each to be managed by civilian and military co-leads. While the portfolios have no independent decision-making authority over requirements determination and resource allocation, according to some DOD officials, they provided key input and recommendations in this year’s budget process. However, without portfolios in which managers have authority and control over resources, the department is at risk of continuing to develop and acquire systems in a stovepiped manner and of not knowing if its systems are being developed within available resources.

Observations on Proposed Acquisition Reform Legislation

Overall, we believe that the legislative initiatives being proposed by the committee have the potential, if implemented, to lead to significant improvements in DOD’s management of weapon system programs. Several of the initiatives—including the increased emphasis on systems engineering and developmental testing, the requirement for earlier preliminary design reviews, and the strengthening of independent cost estimates and technology readiness assessments—could instill more discipline into the front end of the acquisition process when it is critical for programs to gain knowledge. 
Establishing a termination criterion for Nunn-McCurdy cost breaches could help prevent the acceptance of unrealistic cost estimates as a foundation for starting programs. Having greater involvement by the combatant commands in determining requirements and requiring greater consultation between the requirements, budget, and acquisition processes could help improve the department’s efforts to balance its portfolio of weapon system programs. In addition, several of the proposals as currently drafted would codify what DOD policy already calls for but is not being implemented consistently in weapon programs.

Section 101: Systems Engineering Capabilities

Requires DOD to (1) assess the extent to which the department has in place the systems engineering capabilities needed to ensure that key acquisition decisions are supported by a rigorous systems analysis and systems engineering process and (2) establish organizations and develop skilled employees to fill any gaps in such capabilities.

The lack of disciplined systems engineering analysis conducted prior to starting system development has been a key factor contributing to poor acquisition outcomes. Systems engineering activities—requirements analysis, design, and testing—are needed to ensure that a weapon system program’s requirements are achievable and designable given available resources, such as technologies. In recent years, DOD has taken steps to improve its systems engineering capabilities by establishing a Systems and Software Engineering Center of Excellence and publishing guidance to assist the acquisition workforce in the development of systems engineering plans, education, and training. However, as the National Research Council recently reported, DOD’s systems engineering capabilities have declined over time and shifted increasingly to outside contractors. 
A comprehensive assessment to determine what systems engineering capabilities are in place and what capabilities are needed, as recommended in the proposed legislation, is a critical first step in enhancing the function of systems engineering in DOD acquisitions. At the same time, it will be important for DOD to implement steps to ensure systems engineering is applied in the right way and at the right time.

Section 102: Developmental Testing

Requires DOD to reestablish the position of Director of Developmental Test and Evaluation and requires the services to assess and address any shortcomings in their developmental testing organizations and personnel.

Robust developmental testing efforts are an integral part of the systems development process. They help to identify, evaluate, and reduce technical risks, and indicate whether the design solution is on track to satisfy the desired capabilities. As the Defense Science Board reported in 2008, developmental testing in weapon system programs needs to be improved. We believe that developmental testing would be strengthened by a formal elevation of its role in the acquisition process and the reestablishment of a Director of Developmental Test and Evaluation position. Furthermore, requiring the Director to prepare an annual report for Congress summarizing DOD’s developmental test and evaluation activities would provide more accountability. We also agree that the military services should be required to assess their respective developmental testing entities and address any shortcomings. This action would help ensure that the services have the knowledge and capacity for effective developmental test efforts.

Section 103: Technological Maturity Assessments

Makes it the responsibility of the Director of Defense Research and Engineering (DDR&E) to periodically review and assess the technological maturity of critical technologies used in major defense acquisition programs. 
Ensuring that programs have mature technology before starting systems development is critical to avoiding cost and schedule problems, yet for many years we have reported that a majority of programs go forward with immature technologies and experience significant cost growth. Legislation enacted by Congress in 2006, requiring DOD to certify that the technology in a program has been demonstrated in a relevant environment before it receives approval to start system development, has begun to help address this problem. Since the legislation was enacted, DOD has asked the DDR&E to conduct independent reviews of technology readiness assessments for system development milestone decisions. Although DDR&E reviews are advisory in nature, we have seen reviews that have pushed programs to do more to demonstrate technology maturity. The improvements that this proposed legislation, as currently written, is intended to bring about may already be occurring in DOD. Congress, however, may wish to consider requiring the DDR&E to conduct technology readiness reviews not just periodically but for all major defense acquisition programs, and to examine whether DDR&E has the capacity and resources to effectively conduct technology assessments.

Section 104: Independent Cost Assessment

Establishes a Director of Independent Cost Assessment to ensure that cost estimates for major defense acquisition programs are fair, reliable, and unbiased.

Within DOD, the Cost Analysis Improvement Group (CAIG) is the organization responsible for conducting independent cost estimates for major defense acquisition programs. The CAIG reports to the department’s Director of Program Analysis and Evaluation, but its principal customer is the Under Secretary of Defense for Acquisition, Technology and Logistics. 
We believe that establishing an independent assessment office that reports directly to the Secretary or Deputy Secretary of Defense and to Congress—similar to the Office of the Director of Operational Test and Evaluation—would more fully integrate cost estimating with the acquisition management framework and provide an increased level of accountability. We see no reason why the CAIG should not form the basis of the proposed organization. Congress may also wish to consider appointing the Director for a time-certain term and making the Director responsible for prescribing cost-estimating policy and guidance and for preparing an annual report summarizing cost estimates for major acquisition programs. Ultimately, however, improved cost estimating will only occur if there is a better foundation for planning and acquiring weapon system programs—one that promotes well-defined requirements, is knowledge-based and informed by disciplined systems engineering, requires mature technology, and adheres to shorter development cycle times.

Section 105: Role of Combatant Commanders

Requires the Joint Requirements Oversight Council (JROC) to seek and consider input from the commanders of the combatant commands in identifying joint military requirements.

Requirements determination in DOD, particularly for major weapon system programs, continues to be driven largely by the military services. Studies by the Defense Science Board, the Center for Strategic and International Studies, and others have revealed that although the combatant commands—which are responsible for planning and executing military missions—are the principal joint warfighting customer in DOD, they have played a limited role in determining requirements. Currently, the JROC is doing more to seek out and consider input from the combatant commands through regular trips and meetings to discuss capability needs and resourcing issues. 
However, many of the combatant commands do not believe that their needs, which are reflected through the Integrated Priority List process, are sufficiently addressed through the department’s JCIDS process. For the combatant commands to meet this proposed legislative mandate and have more influence in establishing requirements, DOD should consider providing the combatant commands with additional resources to establish robust analytical capabilities for identifying and assessing their capability needs. Ultimately, the department must better prioritize and balance the needs of the military services, combatant commands, and other defense components, and be more agile in responding to near-term capability needs.

Section 201: Trade-offs of Cost, Schedule, and Performance

Requires consultation between the budget, requirements, and acquisition processes to ensure the consideration of trade-offs between cost, schedule, and performance early in the process of developing major weapon systems.

As currently structured, DOD’s budget, requirements, and acquisition processes do not operate in an integrated manner. The function and timing of the processes are not sufficiently synchronized, and the decision makers for each process are motivated by different incentives. These weaknesses have contributed to the development of a portfolio with more programs than available resources can support and programs that launch into system development without executable business cases. We have recommended that the department establish an enterprisewide portfolio management approach to weapon system investment decisions that integrates the determination of joint warfighting needs with the allocation of resources, and cuts across the services by functional or capability area. To ensure the success of such an approach, we believe that the department should establish a single point of accountability with the authority, responsibility, and tools to implement portfolio management effectively. 
Section 202: Preliminary Design Review

Requires the completion of a Preliminary Design Review (PDR) and a formal post-PDR assessment before a major defense acquisition program receives approval to start system development.

We have found that a key deliverable in a knowledge-based acquisition process is the preliminary design of the proposed solution, based on a robust systems engineering assessment, prior to making a large financial commitment to system development. Early systems engineering provides the knowledge needed by a developer to identify and resolve gaps, such as overly optimistic requirements that cannot be met with current resources, before product development begins. Consequently, DOD would have more confidence that a particular system could successfully proceed into a detailed system development phase and meet stated performance requirements within cost, schedule, risk, and other relevant constraints. The recently revised DOD acquisition policy places an increased emphasis on programs planning for preliminary design review prior to the start of system development but does not go as far as making it a requirement to do so. We support any effort to add controls to the acquisition process to ensure that timely and robust systems engineering is conducted before major investment decisions, such as the approval to start system development, are made.

Section 203: Life-Cycle Competition

Requires DOD to adopt measures recommended by the 2008 Defense Science Board Task Force on Defense Industrial Structure for Transformation—such as competitive prototyping, dual sourcing, open architectures, periodic competitions for subsystem upgrades, and licensing of additional suppliers—to maximize competition throughout the life of a program.

We have reported in the past on the problem of diminishing competition and the potential benefits of more competition. 
In discussing the environment that leads to poor acquisition outcomes, we have noted that changes within the defense supplier base have added pressure to this environment. In 2006, a DOD-commissioned study found that the number of fully competent prime contractors competing for programs had fallen from more than 20 in 1985 to only 6, and that this has limited DOD’s ability to maximize competition in order to reduce costs and encourage innovation. However, avenues exist for reducing costs through competition. For example, we reported that although continuing an alternate engine program for the Joint Strike Fighter would cost significantly more in development costs than a sole-source program, it could, in the long run, reduce overall life cycle costs and bring other benefits.

Section 204: Nunn-McCurdy Breaches

Requires that a major defense acquisition program that experiences a critical Nunn-McCurdy cost breach be terminated unless (1) the Secretary of Defense certifies that continuing the program is essential to national security and that the program can be modified to proceed in a cost-effective manner and (2) the program receives a new milestone approval prior to the award of any new or modified contract extending the scope of the program.

In order for DOD to improve its program outcomes, realistic cost estimates must be required when programs are approved for development initiation. DOD often underestimates costs in large part because of a lack of knowledge and overly optimistic assumptions about requirements and critical technologies. This underestimation is also influenced by DOD’s continuing failure to balance its needs with available resources, which promotes unhealthy competition among programs and encourages programs to overpromise on performance capabilities and underestimate cost. This false optimism is reinforced by an acquisition environment in which there are few ramifications for cost growth and delays. 
Only in very rare instances have programs been terminated for poor performance. When DOD consistently allows unsound, unexecutable programs to begin with few negative ramifications for poor outcomes, accountability suffers. As section 204 proposes, strengthening the Nunn-McCurdy provision—by including the potential termination of programs that experience critical cost growth—could facilitate a change in DOD’s behavior by preventing the acceptance of unrealistic cost estimates as a foundation for program initiation and placing more accountability on senior DOD leadership for justifying program continuation. Programs may thus be forced to be more candid and up front about potential costs, risks, and funding needs, and the likelihood of delivering a successful capability to the warfighter at the cost and in the time promised may grow.

Section 205: Organizational Conflicts of Interest

Prohibits systems engineering contractors from participating in the development or construction of major weapon systems on which they are advising DOD, and requires tightened oversight of organizational conflicts of interest by contractors in the acquisition of major weapon systems.

The defense industry has undergone significant consolidation in recent years, which has resulted in a few large, vertically integrated prime contractors. This consolidation creates the potential for organizational conflicts of interest where, for example, one business unit of a large company may be asked to provide systems engineering work on a system being produced by another unit of the same company. As the Defense Science Board has recognized, these conflicts of interest may lead to impaired objectivity, which may not be mitigated effectively through techniques such as erecting a firewall between the employees of the two units. 
While the Federal Acquisition Regulation currently covers some cases of potential organizational conflicts of interest involving the systems engineering function, there may be a need for additional coverage in this area. In general, we would support efforts to enhance the oversight of potential organizational conflicts of interest, particularly in the current environment of a heavily consolidated defense industry.

Section 206: Acquisition Excellence

Establishes an annual awards program to recognize individuals and teams that make significant contributions to the improved cost, schedule, and performance of defense acquisition programs.

We support the creation of an annual awards program to recognize individuals and teams for improving the cost, schedule, and performance of defense acquisition programs. We have reported that meaningful and lasting reform will not be achieved until the right incentives are established and accountability is bolstered at all levels of the acquisition process. The need for incentives emerged as a significant issue in our recent discussions with acquisition experts examining potential changes to the acquisition processes enumerated in last year’s defense authorization act. The discussions revealed that those changes may not achieve the desired improvement in acquisition outcomes unless they are accompanied by changes in the overall acquisition environment and culture, and the incentives they provide for success.

Concluding Observations on What Remains to Be Done

A broad consensus exists that weapon system problems are serious and that their resolution is overdue. With the federal budget under increasing strain from the nation’s economic crisis, the time for change is now. 
DOD is off to a good start with the recent revisions to its acquisition policy, which, if implemented properly, should provide a foundation for establishing sound, knowledge-based business cases before launching into development and for maintaining discipline after initiation. The new policy will not work effectively, however, without changes to the overall acquisition environment. Resisting the urge to achieve the revolutionary but unachievable capability, allowing technologies to mature in the science and technology base before bringing them onto programs, ensuring that requirements are well-defined and doable, and instituting shorter development cycles would all make it easier to estimate costs accurately, and then predict funding needs and allocate resources effectively. But these measures will succeed only if the department uses an incremental approach. Constraining development cycle times to 5 or 6 years will force more manageable commitments, make costs and schedules more predictable, and facilitate the delivery of capabilities in a timely manner. Acquisition problems are also likely to continue until DOD’s approach to managing its weapon system portfolio (1) prioritizes needs with available resources, thus eliminating unhealthy competition for funding and the incentives for making programs look affordable when they are not; (2) facilitates better decisions about which programs to pursue and which not to pursue given existing and expected funding; and (3) balances the near-term needs of the joint warfighter with the long-term need to modernize the force. Achieving this affordable portfolio will require strong leadership and accountability. Establishing a single point of accountability could help the department align competing needs with available resources. 
The department has tough decisions to make about its weapon systems and portfolio, and stakeholders, including the military services, industry, and Congress, have to play a constructive role in the process toward change. Reform will not be achieved until DOD changes its acquisition environment and the incentives that drive the behavior of its decision makers, the military services, program managers, and the defense industry. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you may have at this time.

Contacts and Acknowledgements

For further information about this statement, please contact Michael J. Sullivan at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include John Oppenheim, Charlie Shivers, Dayna Foster, Matt Lea, Susan Neill, Ron Schwenn, and Bruce Thomas.

Related GAO Products

Defense Acquisitions: Perspectives on Potential Changes to DOD’s Acquisition Management Framework. GAO-09-295R. Washington, D.C.: February 27, 2009.
Defense Management: Actions Needed to Overcome Long-standing Challenges with Weapon Systems Acquisition and Service Contract Management. GAO-09-362T. Washington, D.C.: February 11, 2009.
Defense Acquisitions: Fundamental Changes Are Needed to Improve Weapon Program Outcomes. GAO-08-1159T. Washington, D.C.: September 25, 2008.
Defense Acquisitions: DOD’s Requirements Determination Process Has Not Been Effective in Prioritizing Joint Capabilities. GAO-08-1060. Washington, D.C.: September 25, 2008.
Defense Acquisitions: A Knowledge-Based Funding Approach Could Improve Major Weapon System Program Outcomes. GAO-08-619. Washington, D.C.: July 2, 2008.
Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008. 
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.
Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD’s Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008.
Cost Assessment Guide: Best Practices for Estimating and Managing Program Costs. GAO-07-1134SP. Washington, D.C.: July 2007.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007.
Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD’s Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007.
Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.
DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005.
Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 1, 2005.
Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004.
Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.
Defense Acquisitions: Factors Affecting Outcomes of Advanced Concept Technology Demonstration. GAO-03-52. Washington, D.C.: December 2, 2002.
Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 1990, GAO has consistently designated the Department of Defense's (DOD) management of its major weapon acquisitions as a high-risk area. A broad consensus exists that weapon system problems are serious, but efforts at reform have had limited impact. Last year, GAO reported that the programs in DOD's weapon system portfolio had experienced cost growth of $295 billion over first estimates, were delayed by an average of 21 months, and delivered fewer quantities and capabilities to the warfighter than originally planned. At a time when DOD faces increased fiscal pressures from ongoing operations in Iraq and Afghanistan, and the federal budget is strained by a growing number of priorities, it is critical that the department effectively manage its substantial investment in weapon system programs. Every dollar wasted or used inefficiently on acquiring weapon systems means that less money is available for the government's other important budgetary demands. This testimony describes the systemic problems that contribute to the cost, schedule, and performance problems in weapon system programs, recent actions that DOD has taken to address these problems, proposed reform legislation that the committee recently introduced, and additional steps needed to improve future performance of acquisition programs. The testimony is drawn from GAO's body of work on DOD's acquisition, requirements, and funding processes. For several years, GAO's work has highlighted a number of strategic- and program-level causes for cost, schedule, and performance problems in DOD's weapon system programs. At the strategic level, DOD's processes for identifying warfighter needs, allocating resources, and developing and procuring weapon systems, which together define the department's overall weapon system investment strategy, are fragmented. As a result, DOD fails to balance the competing needs of the services with those of the joint warfighter and commits to more programs than resources can support. 
At the program level, DOD allows programs to begin development without a full understanding of requirements and the resources needed to execute them. The lack of early systems engineering, acceptance of unreliable cost estimates based on overly optimistic assumptions, failure to commit full funding, and the addition of new requirements well into the acquisition cycle all contribute to poor outcomes. Moreover, DOD officials are rarely held accountable for poor decisions or poor program outcomes. Recognizing the need for more discipline in weapon systems acquisition, and seeking to implement congressional direction, DOD recently revised its policy and introduced several initiatives. The revised policy, if implemented properly, could provide a foundation for developing individual acquisition programs with sound, knowledge-based business cases. The policy recommends the completion of key systems engineering activities, establishes early milestone reviews, requires competitive prototyping, and establishes review boards to manage potential requirements changes to ongoing programs. The committee's proposed reform legislation should lead to further improvements in outcomes. Improved systems engineering, early preliminary design reviews, and strengthened independent cost estimates and technology readiness assessments should make the critical front end of the acquisition process more disciplined. Establishing a termination criterion for critical cost breaches could help prevent the acceptance of unrealistic cost estimates at program initiation. Having greater combatant command involvement in determining requirements and greater consultation between the requirements, budget, and acquisition processes could help improve the department's efforts to balance its portfolio of weapon system programs. Legislation and policy revisions may lead to improvements but cannot work effectively without changes to the overall acquisition environment and the incentives that drive it.
Resisting the urge to achieve revolutionary but unachievable capabilities, allowing technologies to mature in the technology base before bringing them onto programs, ensuring requirements are well-defined and doable, and instituting shorter development cycles would all make it easier to estimate costs accurately, and then predict funding needs and allocate resources effectively. These measures will only succeed if the department balances its portfolio and adopts an incremental approach to developing and procuring weapon systems.
Introduction Because public housing represents a $90 billion investment on the part of the federal government since the program’s inception in 1937 and because the Department of Housing and Urban Development (HUD) currently spends $5.4 billion a year on operating subsidies and modernization grants for this housing, interest remains keen in knowing how well local public housing authorities (PHA) are managing their properties. The PHAs, through which HUD provides these subsidies and grants, house 3 million low-income people, many of whom are elderly or disabled. The Congress holds HUD responsible for ensuring that the authorities provide safe and decent housing, operate their developments efficiently, and protect the federal investment in their properties. The National Affordable Housing Act of 1990 required HUD to develop indicators to assess the management performance of PHAs. This law became the framework through which HUD developed one of its primary oversight tools for housing authorities, the Public Housing Management Assessment Program (PHMAP). Primarily, PHMAP establishes objective standards for HUD to evaluate and monitor the management operations of all PHAs to identify those that are troubled. According to HUD, PHMAP also allows the Department to identify ways to reward high-performing PHAs as well as improve the management practices of troubled PHAs. The program also allows PHAs’ governing bodies, management officials, residents, and the local community to better understand and identify specific program areas needing improvement. To help improve public housing management, the National Affordable Housing Act of 1990, as amended (the act), required HUD to develop indicators to assess the performance of PHAs in all the major aspects of their management operations. The act required HUD to use certain indicators as well as provided discretion for the Secretary of HUD to develop up to five additional indicators that the Department deemed appropriate. 
HUD implemented PHMAP by using the 12 indicators listed in table 1.1, the first seven of which are those required by statute. Because some indicators are more important than others in measuring management performance, HUD assigns them added weight in determining the overall score. HUD considers the indicators for vacancies, rents uncollected, annual inspection and condition of units and systems, and resident initiatives most indicative of good property management and delivery of services to residents, so each one has a greater weight than other indicators. After reviewing existing procedures and extensively consulting with a group of PHAs, public housing industry groups, private management firms, resident groups, and HUD staff in field offices, HUD has significantly revised the PHMAP indicators. HUD’s revisions to PHMAP, published December 30, 1996, eliminated three indicators; consolidated four other indicators into two; and added one new indicator, security. These revisions primarily address the performance indicators on which housing authorities report data, not HUD’s use of PHMAP data. Indicator Grades Determine the PHMAP Score, Performance Designation, Required Follow-Up, and Incentives Annually, PHAs receive a grade of “A” through “F” for each of the twelve indicators that apply to their operations. HUD uses a formula that reflects the weights assigned to each indicator, converts indicator grades into points, totals each PHA’s points, and divides that total by the maximum total the PHA could have achieved to arrive at a percentage. That percentage, a number between 0 and 100, is the PHMAP score. HUD draws data on the performance of a housing authority from two sources to determine the authority’s PHMAP score. First, the housing authority submits data to HUD for about half of the PHMAP indicators and certifies that this information is accurate and complete. 
HUD assigns grades to each of these indicators according to a comparison of the authority’s data and HUD’s criteria for grades “A” through “F.” The balance of the information HUD uses comes from its own information system for tracking expenditures from major grants. This system contains the financial and other types of data the field offices need to grade the remaining indicators for which the PHAs do not provide data. The field offices use this data and the PHA-certified data to determine indicator scores, the PHMAP score, and the PHA’s performance designation. The PHMAP score is HUD’s starting point for both the performance designation it assigns to a PHA and, depending on that designation, the extent of follow-up required of the PHA to correct deficiencies identified during the PHMAP assessment. Generally, HUD uses three designations to describe the performance of PHAs: troubled performers are those scoring less than 60 percent; standard performers are those scoring at least 60 but less than 90 percent; and high performers are those scoring 90 percent or more. HUD has the discretion to withhold the troubled designation or award the high performer designation if a PHA’s score is within 10 points of the threshold for either designation and HUD determines that its score results from the physical condition and/or neighborhood environment of that authority’s units rather than from the PHA’s poor management practices. If a housing authority is designated as troubled, it faces several mandatory follow-up activities and/or corrective actions to improve performance and remove the troubled designation. Specifically, the act requires HUD to perform an independent management assessment of the troubled PHA’s overall operations to identify the causes of the deficiencies that led to its poor PHMAP score. HUD uses private contractors to perform these independent assessments.
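In code form, the scoring formula and designation thresholds described above might look like the following Python sketch. The grade-to-point conversion and the sample indicator weights are illustrative assumptions; the report does not specify HUD's actual conversion table:

```python
# Illustrative sketch of the PHMAP scoring formula: weighted indicator
# points divided by the maximum attainable, expressed as a percentage.
# The point scale and any weights passed in are assumptions, not HUD's
# actual tables.

GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}  # assumed scale

def phmap_score(grades, weights):
    """grades: {indicator: letter grade}; weights: {indicator: weight}.
    Returns a score between 0 and 100."""
    earned = sum(GRADE_POINTS[g] * weights[ind] for ind, g in grades.items())
    maximum = sum(max(GRADE_POINTS.values()) * weights[ind] for ind in grades)
    return 100.0 * earned / maximum

def designation(score):
    """The three performance designations, before any HUD discretion
    for scores within 10 points of a threshold."""
    if score < 60:
        return "troubled"
    if score < 90:
        return "standard"
    return "high"
```

Under this sketch, a PHA graded “A” on every applicable indicator scores 100 and is designated a high performer, while failing grades on heavily weighted indicators such as vacancies or rents uncollected pull the score toward the troubled threshold.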
HUD expects the independent assessments to form the basis for the second requirement for troubled PHAs—the memorandum of agreement (MOA). A memorandum of agreement is a binding contract between HUD and a troubled PHA to identify solutions to its management problems and pursue those solutions in a way that is significant, expeditious, and lasting. Among other things, HUD requires that the MOA address the specific responsibilities of HUD and the PHA, the resources each will commit to resolving the authority’s problems, the annual and quarterly performance targets for improving its performance on PHMAP indicators, and the incentives for it to meet its performance targets as well as sanctions for failing to do so. A PHA’s initial MOA generally lasts 18 months so that it can complete a second-year agreement with HUD, if necessary, before the first expires. HUD’s regulations for implementing PHMAP require standard- and high-performing PHAs to develop improvement plans for every PHMAP indicator on which the PHA received an “F,” unless the PHA can correct the deficiency within 90 days; HUD may also choose to require these plans for indicators receiving scores of “D” or “E” when failure to raise the grade might pose significant added risk. An improvement plan documents how and when the PHA plans to correct deficiencies. Although similar in content and scope to a memorandum of agreement, improvement plans differ in that (1) PHAs develop and submit them to HUD for approval rather than negotiate them with HUD officials and (2) they are not a binding contractual commitment between the PHA and HUD. When HUD first implemented PHMAP, it offered high-performers a variety of incentives, primarily regulatory relief from various reporting requirements. These incentives included less frequent reviews of changes to a PHA’s operating budget and, for those performing well on the modernization indicator, no prior HUD review for architects’ or engineers’ contracts. 
In addition to regulatory relief, high-performing PHAs receive a HUD certificate of commendation and public recognition for their performance. In its fiscal year 1997 budget request, HUD proposed an additional PHMAP-based incentive for high-performing PHAs when it sought to create a $500 million capital bonus fund (as part of the $3.2 billion it sought for its public housing capital fund). To be eligible for a bonus, a PHA would have to be a PHMAP high performer and have undertaken substantive efforts to obtain education and job training for its residents. However, the Congress chose not to fund the bonus proposal for public housing or any of HUD’s other major programs, in part because of concerns about HUD’s ability to accurately and reliably track the performance of bonus recipients. HUD’s Field Offices Implement PHMAP With nearly 800 staff devoted to oversight of housing authorities and implementation of the full range of HUD’s public housing programs, its field offices have the bulk of the Department’s responsibility for the day-to-day implementation of PHMAP. Field offices’ PHMAP responsibilities include determining the indicator grades and PHMAP scores, negotiating memorandums of agreement, approving PHAs’ improvement plans, and monitoring their progress in meeting the goals the MOA or improvement plan set forth. To determine a housing authority’s PHMAP score, a field office relies on that PHA to provide about half the data that leads to the overall PHMAP score and certify the data’s accuracy. As a result, the overall PHMAP score and everything it influences—from incentives for high performers to sanctions for troubled PHAs—are very much a joint effort and a shared responsibility. A PHA may also request to exclude or modify the data HUD should consider in computing its PHMAP score. An exclusion means that the indicator (or one or more of its components) is entirely excluded from calculations to determine the PHMAP score. 
For example, PHAs with no ongoing modernization or development programs are automatically excluded from being assessed on those indicators. Modifying the data for an indicator allows HUD to consider unique or unusual circumstances by exempting some of the data HUD usually requires the PHA to consider. The PHA still receives a score for the indicator, but the score would not reflect the data associated with the PHA’s unique or unusual circumstances. For example, a PHA operating under a court order not to collect tenants’ rent at specific developments until it corrects deficiencies the court had identified can seek to exempt those units in its developments from being considered in its indicator score for rents uncollected. A PHA always has the right to appeal a field office’s decision about modifications, exclusions, indicator scores, or the performance designation. However, after those appeals have been exhausted, the field office certifies the PHA’s PHMAP score, assigns a final performance designation, and proceeds with any required improvement plans, MOAs, or other necessary follow-up. When a troubled authority’s new PHMAP score is high enough to cause HUD to remove its troubled designation, HUD’s policy is to require the field office to verify the accuracy and completeness of the new data submitted by the housing authority. HUD also requires the field office to conduct a confirmatory review to verify the data the PHA had certified as well as the accuracy of the data HUD had obtained from its own information system. HUD’s guidance for implementing PHMAP stipulates that a confirmatory review must take place on-site at the PHA and cannot be accomplished through remote monitoring. HUD’s field offices may choose to conduct some confirmatory reviews of standard- and high-performing PHAs’ PHMAP certifications. 
HUD expects its field offices to choose these PHAs according to the risk they pose and focus on those with the highest potential for fraud, waste, mismanagement, or poor performance. Some of the factors HUD field offices may consider in analyzing the risk associated with a PHA’s PHMAP certification include size (number of units), borderline troubled designation (5 percent above or below the percentage for the designation), and negative trends in overall or individual indicator scores over several years. In May 1995, HUD expanded the scope of the annual independent audit each PHA receives in order to improve the Department’s ability to determine whether PHA-certified data are accurate. The annual audit, conducted pursuant to the requirements of the Single Audit Act, examines the housing authority’s financial statements, internal controls, and compliance with HUD’s rules and regulations. Housing authorities are responsible for selecting their own auditors and submitting the results of the audits to their HUD field office. Field offices are responsible for reviewing the audits to ensure they meet all of HUD’s requirements and, when they have approved the audit, reimbursing housing authorities for them. In fiscal year 1995, these independent audits cost HUD about $8 million for all housing authorities. HUD now requires the independent auditors to determine whether a housing authority has adequate documentation for the data it submits to HUD for its PHMAP certification. According to HUD officials, because the Department’s resources are too limited to conduct annual confirmatory reviews of most housing authorities, they expected to use the results of these audits to better focus HUD’s attention, oversight, and technical assistance. In addition to paying for the audits, HUD expects its field offices to use the results as part of a risk assessment to determine which housing authorities should get the most sustained attention and technical assistance. 
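The risk factors named above lend themselves to a simple screening rule for choosing which certifications to review. The sketch below is hypothetical: the 1,000-unit size cutoff and the flag-on-any-factor logic are assumptions for illustration, not HUD's actual procedure:

```python
# Hypothetical screen for targeting confirmatory reviews, using the
# risk factors the text names: PHA size, a borderline designation
# (within 5 points of a designation threshold), and a multiyear
# decline in scores. The size cutoff is an assumed value.

TROUBLED_CUTOFF = 60
HIGH_CUTOFF = 90

def risk_flags(units, score_history, size_cutoff=1000, band=5):
    """Return the risk factors that apply; score_history is oldest first."""
    flags = []
    if units >= size_cutoff:
        flags.append("large PHA")
    latest = score_history[-1]
    if abs(latest - TROUBLED_CUTOFF) <= band or abs(latest - HIGH_CUTOFF) <= band:
        flags.append("borderline designation")
    if len(score_history) >= 2 and all(
            earlier > later
            for earlier, later in zip(score_history, score_history[1:])):
        flags.append("declining trend")
    return flags
```

A field office applying such a screen would concentrate its limited confirmatory-review resources on the certifications carrying the most flags.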
Objectives, Scope, and Methodology Stressing the need for HUD to hold housing authorities accountable while making better use of the data that PHMAP produces, the Chairman of the Subcommittee on Housing and Community Opportunity, House Committee on Banking and Financial Services, asked GAO to review HUD’s use and implementation of PHMAP. As agreed with the Chairman’s office, we reviewed whether HUD’s field offices are using PHMAP and complying with the program’s statutory and regulatory requirements to monitor and provide technical assistance to housing authorities, whether PHMAP scores have increased and how HUD uses the program to inform HUD’s Secretary and the Congress about the performance of housing authorities, and whether PHMAP scores are consistently accurate and can be considered a generally accepted measure of good property management. We developed information from several different sources to address questions concerning the usefulness of PHMAP to HUD and other uses for which PHMAP may not be appropriate. To determine PHMAP’s usefulness to HUD, we interviewed officials and collected information on technical assistance activities at both the Department’s headquarters and field offices. At HUD’s headquarters, we analyzed a variety of documents pertaining to PHMAP and discussed the program’s use as a basis for technical assistance with the Offices of the Deputy Assistant Secretaries under HUD’s Assistant Secretary for Public and Indian Housing. At HUD’s field offices, our approach was twofold. First, we surveyed them via fax questionnaire to obtain data on the use of PHMAP, such as the number of confirmatory reviews each field office performs and how useful such program tools as improvement plans have been. These data reflect responses from all of HUD’s public housing field offices. Second, we visited five HUD field offices to review their use of PHMAP in more depth and to supplement the information we had gathered in our survey.
We judgmentally selected the five field offices because of their geographic distribution, variations in the number of HUD staff in each office as well as the number of PHAs each oversees, and variations in average PHMAP scores for the PHAs reporting to those offices. To provide information on PHAs’ PHMAP scores, we relied on existing data from HUD sources, including HUD’s primary public housing database, the System for Management Information Retrieval-Public Housing (SMIRPH). From this database, we extracted the module containing housing authorities’ PHMAP data, including the PHMAP scores and individual indicator grades. Our analysis covers federal fiscal years 1992 through 1995 because the first fiscal year in which the rules governing PHMAP took effect was 1992 and the most recent year for which all PHMAP scores were complete at the time of our review was 1995. We did not systematically verify the accuracy of HUD’s data or conduct a reliability assessment of HUD’s database. In performing our analysis we found erroneous and incomplete information for a few PHAs, ranging from 1 to 3 percent of the total. We confirmed this with HUD officials, who attributed the errors to mistakes in data input or the field office’s having entered incomplete scores. However, because we used these data in context with additional evidence we obtained directly from HUD’s field offices and we did not focus on the scores of specific PHAs or small groups of PHAs, we believe our conclusions about overall trends in scores are valid. Throughout the course of our work, because the number of PHAs reporting PHMAP scores is too great for us to visit a representative sample, we consulted with several prominent groups representing the public housing industry to discuss HUD’s uses for PHMAP as well as their perspectives on the program’s ability to measure the performance of public housing authorities. 
These groups include the Council of Large Public Housing Authorities, the National Association of Housing and Redevelopment Officials, and the Public Housing Authorities Directors Association. We provided a draft of this report to HUD for review and comment. HUD’s comments appear in appendix V and are addressed at the end of each applicable chapter. We performed our work from January through December 1996 in accordance with generally accepted government auditing standards. Although Field Offices Use PHMAP to Identify Troubled PHAs, Compliance With Statutory and Agency Follow-Up Requirements Has Been Limited HUD’s field offices use PHMAP scores for their primary intended purposes: as a standard, objective means to identify troubled housing authorities; to compare performance among PHAs; and to identify when, where, and how to target HUD’s limited resources for technical assistance. However, beyond identifying troubled authorities and what they need, the amounts and kinds of technical assistance HUD provides vary because its field offices interpret their responsibilities differently—some choose to be actively involved while others adopt a hands-off approach. Furthermore, HUD’s 1995 reorganization of its field offices adversely affected some offices’ ability to provide technical assistance while others adapted to changed expectations and resumed providing as much assistance as they did before the reorganization. HUD Uses PHMAP to Identify Troubled Housing Authorities, but Technical Assistance Varies As part of HUD’s oversight of public housing, the PHMAP score is an important tool for identifying troubled authorities so HUD can focus technical assistance and monitoring on them. The most common types of technical assistance that HUD’s 49 public housing field offices provided all PHAs were telephone consultations, training, and participation in conferences.
However, we found differences in how field offices defined their roles in providing PHAs technical assistance as well as some innovations in how others provided that assistance. For example, some field offices have encouraged high-performing PHAs to provide “peer assistance” to lower performers. Many of the differences in assistance were due to variations in field offices’ interpretations of their roles and the impact of HUD’s 1995 reorganization of its field offices. HUD headquarters officials believe that more training for all field staff and leadership from field office managers would help achieve more quality and consistency among field offices in providing technical assistance. HUD Uses PHMAP to Target PHAs for Technical Assistance Officials in 40 of HUD’s 49 field offices rated PHMAP as being of “utmost” or “major” importance in identifying which housing authorities need the most technical assistance. According to field office staff, PHMAP provides standard indicators to objectively measure an authority’s performance. In addition, some staff said that because PHAs have a strong aversion to failing performance scores and try to avoid failure, they are confident that when PHAs report information that results in low scores or failing grades, the data and the resulting scores are accurate. Because an accumulation of low or failing scores results in a PHA’s being designated troubled, HUD staff are confident that those PHAs PHMAP identifies as the worst-performing housing authorities are accurately designated as troubled performers. Some field office staff also use declining PHMAP scores to provide an early warning of management problems and to identify which PHAs could need additional technical assistance. In addition, the staff use PHMAP’s 12 individual indicator grades to better focus their limited technical assistance resources and thereby maximize the benefits PHAs receive from HUD’s assistance. 
For example, one field office developed a package of technical assistance for the “resident initiatives” indicator because many PHAs failed this indicator. The package of assistance included sample policies and procedures for operating resident programs. Another field office developed assistance specifically for small housing authorities because many of them were having trouble renting their units when they became vacant (thus failing PHMAP’s unit turnaround indicator). Among other things, that field office provided its small PHAs an extensive list of suggestions on how and where to better market their units. Most technical assistance from HUD’s field offices consisted of telephone consultations, training sessions, and industry conferences. HUD also provided assistance—although limited because of time constraints—at the time of a PHMAP confirmatory review. During telephone consultations, several offices we visited answered questions from housing authority staff and helped the executive directors of new housing authorities better understand public housing regulations and operations. Training sessions covered these and other topics and provided more details than telephone discussions. In addition, to increase the amount of personal contact they have with housing authority staff and to provide technical assistance, field office staff said they regularly participate in conferences hosted by public housing industry associations. Field Offices’ Interpretations of Their Role and Their Recent Reorganization Influence the Level and Types of Technical Assistance Field offices’ interpretations of their obligation to improve the performance of housing authorities influence the type of technical assistance they provide. For example, officials in one field office did not believe that it was HUD’s role to manage PHAs’ operations.
Instead, they believed that the role of their field office should be limited to providing information on compliance with federal rules and regulations and to suggesting solutions to management problems. This field office avoids showing PHAs how to manage their developments because the staff believe that they do not have sufficient expertise and that the housing authorities would view this advice as intrusive. In contrast, staff at other field offices that we visited believed they are obligated to tell PHAs what must be done to correct management deficiencies because HUD is responsible for ensuring that PHAs use federal funds efficiently and effectively to provide safe, decent housing. For example, staff from one field office spent several days at a troubled authority to help it set up proper tenant rent records and waiting lists. In addition to differences in how they view their role to directly assist PHAs, we found differences in the extent to which field offices use outside resources to help their housing authorities. Some field offices told us that to compensate for a shortage of resources from HUD, they help PHAs in their jurisdiction by encouraging technical assistance from other PHAs rather than providing it themselves. For example, some of the field offices arranged for high-performing PHAs to provide peer assistance to authorities with management problems. One field office persuaded staff from a high-performing PHA to temporarily manage a small authority that unexpectedly lost its executive director. Another field office recruited a high-performing PHA to help another one develop a system for inspecting its housing units. In 1995, HUD reorganized the field offices and changed the responsibilities of the staff who oversee and assist PHAs. Before the reorganization, most field office staff were generalists and broadly understood federal housing regulations and PHA operations. 
After the reorganization, however, the responsibilities of individual field office staff became more specialized to focus on the rules and regulations of specific public housing operations. This specialization confused some staff in field offices and housing authorities and impaired the ability of some field offices to provide technical assistance. For example, field office staff we visited said that some specialists do not have the skills needed to do their jobs because many of them did not have the work experience or requisite training for the specialists’ positions; the staff also noted that HUD had not provided sufficient training for the staff to understand the reorganization and their new responsibilities. The staff also said that the reorganization was a source of confusion for PHAs. Before the reorganization, a housing authority could call one employee at HUD’s field office to answer all its questions; afterward, a housing authority generally needed to call several different staff at HUD’s field office to answer questions. Adjusting to the reorganization differed across field offices. At one field office, staff resisted the reorganization because they did not want to become specialists and they recognized that technical assistance to the PHAs suffered as a result. For example, the staff now disagree over who is responsible for overseeing certain PHA operations. They also have resisted working together to provide technical assistance and have not been sharing PHMAP information to develop the best plan for correcting management deficiencies. Other field offices we visited adapted to the reorganization. Staff in these field offices worked cooperatively to build on the skills of the experienced staff. For example, one field office continues to assign each housing authority to only one staff member who provides or coordinates all technical assistance to that authority.
The responsible staff member, however, belongs to a team of staff from all operational areas who work together to solve each PHA’s problems. Officials at HUD headquarters, including the Deputy Assistant Secretary for Public and Assisted Housing Operations, acknowledged that some field offices had difficulty adjusting to the reorganization. They stated that although adequate training was crucial to the reorganization’s success, some field offices either did not seek it or did not take the need for it seriously, despite the availability of training funds for field staff. HUD officials continue to emphasize the importance and availability of training and expect field office management to assess the staff’s skills and expertise and request the appropriate training. These officials believe that because of limited staff resources, now and in the future, the reorganization is the best way for field offices to provide effective oversight and technical assistance to PHAs. Furthermore, they believe that managers of the field offices must take a more active leadership role in directing their staff to work together. HUD’s Infrequent Use of Some Oversight Tools May Not Adequately Improve the Performance of PHAs or Target Technical Assistance The act and HUD’s requirements for how field offices use PHMAP provide for several tools to guide improvements in a housing authority’s performance and thereby raise its indicator grades and PHMAP score. These tools include the memorandums of agreement (MOA), improvement plans, confirmatory reviews, and the annual independent audits. While such tools as MOAs and improvement plans generally apply to PHAs designated as troubled or failing specific indicators, a confirmatory review is mandatory for any PHA coming off HUD’s troubled list and an independent audit is mandatory for all PHAs. 
Nonetheless, we found that the compliance of field offices with statutory requirements and HUD’s guidance for using these tools has been inadequate and infrequent. Furthermore, HUD has not determined whether these statutory or agency requirements are effective, adequately improve housing authority performance, or help the field offices better target limited technical assistance resources. As a result, HUD has little information to determine which of these tools best improve a PHA’s performance and which tools its field offices can use most effectively to offset their declining resources. Field Offices Make Limited Use of Oversight Tools Over 90 percent of the field offices we surveyed reported that on-site visits to the housing authorities were one of the most effective means to ensure compliance with PHMAP requirements and provide technical assistance. Officials at one field office responded that PHAs under its jurisdiction believed that on-site visits from HUD staff to provide technical assistance were essential to maintaining effective operations. Yet, most field office staff we visited made fewer personal visits to housing authorities than they felt were necessary because of limited staff resources and travel funds. Field office staff told us, for example, that their workload has increased because their offices have been unable to replace staff who have left the agency. With less time available for on-site visits, direct monitoring of the PHAs’ performance has occurred less frequently. In addition, some field office staff said that they could rarely justify to their management using limited staff and travel resources to visit a PHA that is more than a 1-day trip from the office unless that authority’s PHMAP score was below 60. Memorandums of Agreement Although HUD is required by law to enter into MOAs with troubled housing authorities to improve management performance, few field offices have done so. 
Figure 2.1 shows that the percentage of troubled PHAs operating under an MOA has been decreasing since 1992. Furthermore, in fiscal year 1995, only 3 of HUD's 32 field offices that had troubled PHAs were fully in compliance with the requirement to enter into an MOA with each troubled authority. The primary reason HUD's field offices gave for not entering into these required agreements with troubled housing authorities was that the PHAs had already corrected or were in the process of correcting their management deficiencies. However, HUD headquarters officials told us they did not accept this as a valid reason for not meeting the requirement and questioned how the field offices could be sure the housing authorities were no longer troubled. Improvement Plans When a PHA fails any of PHMAP's 12 performance indicators, HUD requires the responsible HUD field office to obtain a plan from that PHA for improving its performance and to track its progress against the plan. However, we found that nearly a third—31 percent—of HUD's field offices had not ensured that local housing authorities had developed these plans. We also found examples of PHAs' plans lacking specific strategies and time frames for correcting management deficiencies. For example, one PHA's plan for a failing "rents uncollected" indicator simply stated that the housing authority would start collecting rent. Although field office staff acknowledged that the PHA also needed to update its standard tenant lease and develop a rent collection policy to improve this indicator grade, they said that they had not yet had the time to contact the PHA to revise its plan. HUD requires its field offices to monitor the progress of housing authorities in implementing improvement plans to ensure that PHAs meet the quarterly and annual performance targets in their plans. 
However, four of the five field offices we visited told us they do not follow up with the PHAs to determine the status of improvement plans or whether the plans had corrected the management deficiencies. Field office staff said that they did not have time to track the effectiveness of the plans because their workloads have been increasing due to decreasing numbers of staff. HUD headquarters officials confirmed that systematic tracking of the field offices’ success in obtaining improvement plans or executing MOAs has not been done. They emphasized that responsibility for implementing PHMAP rests with the field offices and said that limited efforts were underway to ensure field offices do more to use these tools and measure their effectiveness. However, they could not tell us whether troubled PHAs without MOAs had improved their scores and left the troubled list without such oversight, nor could they tell us whether improvement plans are instrumental in improving indicator scores. Field Offices Confirm Few PHMAP Scores When a troubled housing authority receives a new PHMAP score that is high enough to remove that designation, HUD requires that the field office confirm the score’s accuracy by verifying that the PHA’s improvements have been effective before removing the troubled designation. However, we found most field offices are not meeting this requirement. In 1995, for example, HUD’s field offices confirmed less than 30 percent of the scores that should have been confirmed. HUD officials acknowledged that the infrequency of confirmatory reviews by its field offices hampers the program’s credibility and integrity. Because it has done so few confirmatory reviews, HUD cannot say that most scores are accurate, nor can it say that most troubled PHAs that raised their scores above 60 really are no longer troubled. 
The HUD Inspector General (IG) recently noted that without more confirmatory reviews, the self-reporting nature of PHMAP creates a temptation for PHAs to manipulate data to raise their scores. In fiscal year 1995, 24 of the 49 field offices had housing authorities with PHMAP scores high enough to remove them from HUD's troubled list, but only 11 of the 24 field offices performed all or some of the required confirmatory reviews. The remaining 13 offices performed none of the required confirmatory reviews. Nonetheless, some of these same 13 field offices performed discretionary confirmatory reviews of other housing authorities that had not been classified as troubled. In one case, a field office had just one housing authority whose new PHMAP score was high enough to remove its troubled designation. Although the field office did not perform a confirmatory review for that authority until the next fiscal year, it did complete nine confirmatory reviews of standard- or high-performing housing authorities. HUD headquarters officials told us that although they encourage the field offices to do as many additional, discretionary confirmatory reviews as possible, they expect field offices to complete the mandatory reviews first. They also told us that limited resources kept them from monitoring the performance of field offices on these reviews. In addition to the field offices' lack of compliance with HUD's requirement for performing confirmatory reviews, few offices are performing discretionary confirmatory reviews. Over the life of the program, HUD has confirmed 6.7 percent of all PHMAP scores. Table 2.1 shows that since the program began in 1992, HUD has confirmed no more than 8 percent of all PHMAP scores in any given year. In fiscal year 1995, nine field offices performed no confirmatory reviews, over two-thirds performed five or fewer, and four performed 10 or more (see fig. 2.2). 
Recognizing that PHMAP scores may not be as accurate as they could or should be to give the program integrity and credibility, HUD has added new requirements and begun initiatives to improve the accuracy of the scores and strengthen the program. HUD currently requires its field offices to confirm the PHMAP scores of housing authorities whose scores have risen to 60 or above, thereby removing them from the troubled list. Recently, HUD formed a team of “expert” field office staff to develop review guidelines and to perform confirmatory reviews at selected housing authorities whose new PHMAP scores meet HUD’s criteria for a mandatory confirmatory review. HUD officials expect this team to perform as many as 12 confirmatory reviews in 1 year, during which they will focus primarily on large, high-risk housing authorities. Field Offices Are Not Using Independent Audits to Verify Data Provided by PHAs In May 1995, HUD expanded the scope of the mandatory annual financial audits of PHAs to require that auditors review the records underlying a PHA’s self-reported PHMAP data. HUD expects the financial audits to verify that the PHAs’ data are accurate and complete and that the PHAs have adequate documentation to support their submissions. HUD adopted this requirement because the field offices do not have sufficient resources to confirm each PHA’s score every year. Moreover, HUD officials told us that further departmental downsizing will limit its field offices’ ability to provide meaningful technical assistance, including confirmatory reviews. As a result, HUD expects that the PHMAP review in the annual audit can help ensure the integrity of housing authorities’ PHMAP data and should be a valuable tool for aiding the field offices to identify those housing authorities most needing technical assistance. HUD does not consider the auditors’ analysis to be a confirmatory review because the auditors do not verify the information HUD maintains in its information system. 
Furthermore, even though the auditors certify that a housing authority has documentation to support the data it submitted to HUD, they do not verify that some of the activities reflected in that data were actually performed by that authority. For example, while the auditors verify that a PHA has data indicating it has met the requirements for the indicator on conducting annual inspections of all of its housing units and major systems (e.g., heating, plumbing, and electrical), the auditors do not verify that those inspections actually took place. Although the independent audit requirement has been in place since May 1995, few of the staff in the five field offices we visited were aware of it. HUD headquarters officials said that before field offices authorize payment for an annual audit, they expect the offices to review the audits for quality and completeness and verify that the audits addressed all appropriate areas of the PHAs' operations, including the PHMAP. However, field office staff said that they had not seen an audit of a housing authority that tested the reliability of its PHMAP submission. HUD also expects the field offices to consider significant audit findings in deciding which PHAs need additional oversight or assistance. HUD officials acknowledged, however, that the independent auditors may need training to better understand HUD's expectations of them, HUD's regulations, and the PHMAP system, as well as the operations of PHAs. Similarly, these officials noted that staff in HUD's field offices need training and guidance in how to better use the annual independent audit. Conclusions One of the key challenges HUD faces in the coming years is effectively downsizing the Department while maintaining the needed level of oversight at public housing authorities. 
However, HUD is not currently maintaining a consistent, minimally acceptable level of oversight at all housing authorities because field offices vary in how they interpret their oversight roles and do not systematically comply with follow-up requirements. Furthermore, because field offices are not making enough use of the independent audits' verification of PHMAP data to target their technical assistance, HUD is not using the resources it has to effectively determine which housing authorities' scores are most likely to be inaccurate. As a result, HUD is not ensuring that the housing authorities most in need of oversight and assistance are receiving it and thereby improving their performance. Continued departmental downsizing will likely require HUD to leverage its existing resources to achieve a minimally acceptable level of oversight. This oversight is needed for HUD to be reasonably confident that all housing authorities are using federal funds appropriately, managing and maintaining their developments properly, and accurately reporting their performance information. Recommendation To make better use of the limited resources it has to devote to the oversight of public housing, we recommend that HUD provide guidance to its field offices that clearly (1) articulates their minimally acceptable roles regarding oversight and assistance to housing authorities and (2) emphasizes the importance of using the results of the independent audits to better target HUD's limited technical assistance resources. Agency Comments HUD agreed with our findings regarding oversight of public housing authorities and stated that it has begun taking steps to address this recommendation. 
These steps include a wide variety of training and other activities to (1) explain the revisions HUD is making to PHMAP; (2) reemphasize the need for and importance of statutory and agency follow-up requirements, such as memorandums of agreement, improvement plans, and confirmatory reviews; and (3) update HUD’s guidance to its field offices regarding their PHMAP and other oversight responsibilities. Although PHMAP Scores Have Risen, HUD Recognizes That Flaws in the Program’s Database Limit Its Use According to a HUD database of PHMAP scores, average PHMAP scores have increased over the life of the program from an average of 83 in 1992 to 86 in 1995 (the last year of complete data). The number of high-performing housing authorities increased, with more than half of all authorities designated high performers in 1995, and the number of troubled authorities decreased. However, the smallest housing authorities—those with fewer than 100 units—now make up a greater proportion of those designated troubled than when the program began. During our analysis of this database, we found omissions of key data, such as the number of units under a PHA’s management and its performance designation. We also found inconsistencies between PHMAP scores and the assigned performance designations. Notwithstanding these weaknesses, the database represents the most complete data available on PHA performance over time. Most PHMAP Scores Are Increasing and Fewer Housing Authorities Are Troubled Nationwide, average PHMAP scores generally increased over the 4 years of the program for which we analyzed data. By 1995, over half of all public housing authorities were high performers. Subsequent analysis showed little regional variation in how well they scored on PHMAP. While the overall increases in PHMAP scores held true for all sizes of PHAs, the largest ones had scores consistently lower than the national average. 
As average scores increased, the number of PHAs with scores low enough for HUD to designate them as troubled decreased, falling to 83 in 1995; half of that total consisted of the smallest housing authorities (those managing fewer than 100 units). Average PHMAP Scores Increased The average PHMAP score for all housing authorities rose from about 83 in 1992 to 86 in 1995. This increase held true for PHAs of all sizes, although large PHAs—those with more than 1,250 units—consistently scored lower than the national average (see table 3.1). In fiscal year 1995, 151 large PHAs accounted for approximately 5 percent of all PHAs reporting PHMAP scores, but they operated nearly 60 percent of all public housing units. Consequently, while more PHAs had higher scores, more units were under the control of PHAs with somewhat lower scores. Appendix I provides average PHMAP scores for PHAs for all of HUD's field offices for fiscal years 1992 through 1995. The Majority of PHAs Were High Performers By fiscal year 1995, more than half—about 57 percent—of all public housing authorities were designated as high performers. As shown in table 3.2, the number of high-performing authorities grew each year, rising from 1,033 (33 percent) in 1992 to 1,791 (57 percent) in 1995. Also, by 1995, nearly 50 percent of all public housing units were under the management of high-performing authorities. Little Variation Among Regions Our analysis showed little regional variation in PHMAP scores. The regional differences we found were slightly greater than those associated with the size of housing authorities, but no region was significantly below the national average. Likewise, there was little variation among the regions in the percentage of troubled PHAs under their jurisdiction. 
For example, in fiscal year 1995, 5 percent of all PHAs nationwide were troubled, but within the 10 regions we analyzed, the percentage of troubled housing authorities ranged from 2 to 9 percent. Appendixes I-IV provide detailed information on average PHMAP scores as well as the number of troubled, standard- and high-performing PHAs, respectively, for each HUD field office. PHAs Consistently Failed Some Indicators Despite some improvement in overall scores, some indicators were more problematic for PHAs than others. As shown in table 3.3, with the exception of 1 year, PHAs consistently had the most difficulty with the energy consumption indicator—which had the highest failure rate for 1992, 1994, and 1995. Similarly, the indicators for unit turnaround, tenants accounts receivable, and operating expenses proved troublesome, with 10 percent or more of all PHAs failing them in 1995. A HUD official explained that the high failure rate in 1993 for the indicator measuring resident initiatives occurred because the PHAs were not paying attention to this indicator. In 1992, all PHAs received an automatic “C” for this indicator because HUD had not provided enough information on the requirements for grades “A” through “F” until after the assessment period started. This official said that many PHAs assumed they would receive an automatic “C” the next year as well, even though HUD had stated in 1992 that the automatic grade was a one-time occurrence. This official added that most field offices followed up by providing technical assistance to the PHAs with failing grades and were able to resolve the problems in the following year. This appears to be supported by the decline of the failure rate over the following 2 years to less than 6 percent in 1995. 
Smaller PHAs Were More Likely to Be Troubled While the total number of troubled housing authorities declined—130 were troubled in 1992 compared to 83 in 1995—more of those PHAs were concentrated among the smallest housing authorities than when the program began. The percentage of troubled PHAs that were small—managing fewer than 100 units each—grew from 32 percent of all troubled authorities in 1992 to 49 percent in 1995 (see fig. 3.1). HUD Recognizes Database Flaws and Plans Corrections We found missing, inaccurate, and inconsistent data in HUD's SMIRPH database, the primary database for storing PHMAP scores. A HUD official attributed these problems to data input errors at the field offices. Although HUD headquarters makes regular, periodic use of this database, it must also manually verify much of the information before providing it to HUD's Secretary, Members of Congress, and others. HUD's General Deputy Assistant Secretary for Public and Indian Housing acknowledged that the SMIRPH database, as currently implemented, does not produce a complete, accurate list of troubled PHAs and that HUD is in the process of making it more reliable and useful. The number of troubled authorities for fiscal year 1995 that we derived from the database (150) did not match the number (83) reported as of December 20, 1995, by HUD's Management Assessment Coordinator. We also found performance designations that were inconsistent with PHMAP scores. Of the 150 PHAs we found to be troubled in 1995, HUD had designated 42 as high performers and 7 as standard, and 51 had no designation. Similarly, of the 1,791 PHAs we found with PHMAP scores of 90 or higher in 1995, HUD had designated 1 as troubled and 43 as standard, and 325 had no performance designation. We also found some omissions in the database. Data, such as the number of units and performance designations, had not been entered for all PHAs. 
For example, we found that the database did not have size information on 18 PHAs from fiscal years 1992 through 1995. We also found that no designations had been entered for 132 PHAs with scores less than 60 and 1,037 PHAs with scores 90 or higher. HUD’s Management Assessment Coordinator stated that these problems with missing, inaccurate, and inconsistent data occurred because field offices either (1) did not enter the information at all or (2) entered it incorrectly. These instances of inconsistent or missing data suggest that basic system safeguards do not exist to prevent field offices from making these data entry errors or omitting essential PHMAP data. While HUD officials who oversee PHMAP and the Department’s field offices acknowledged problems with the database, they added that the program’s redesign includes changes that will address the problems with data accuracy and reliability. HUD officials told us they plan to change procedures for entering information on PHAs into the database to allow field offices to update PHA data on a real-time basis and to make immediate corrections when they find errors or omissions. These procedural changes will also enable HUD headquarters staff to access field office data directly and allow ongoing reviews of the information for accuracy and completeness. HUD officials also believe that the changes will increase control over the information from the field offices and help ensure that the information in the SMIRPH database is accurate. Agency Comments HUD expressed concern that our draft report used data from the SMIRPH database that HUD had not verified for accuracy. HUD noted that it is making changes to the database that will improve headquarters’ ability to find and correct data errors that have been entered by staff at its field offices. 
To address HUD's concern that we used inaccurate, unverified data from its database to analyze PHMAP data on housing authorities' scores by size and region, we recalculated the number of troubled housing authorities by size category for 1995 using data HUD verified with its field offices; we also modified this report to reflect a more accurate and lower number of troubled housing authorities in 1995. Recalculating the number of troubled authorities by size did not change our conclusion that a greater proportion of the authorities that HUD verified as being troubled are those with fewer than 100 units. In fact, while HUD's database indicates that 44 percent of troubled authorities in 1995 were small, HUD's verified list of troubled authorities indicates 49 percent were small. Furthermore, although HUD officials told us that a manually verified list of troubled authorities for 1992 was not available, they agreed with our conclusion that the smallest housing authorities made up a greater proportion of troubled housing authorities in 1995 than in 1992. Because our draft report presented no analysis of data on a regional basis (only data as drawn from HUD's database) and because we draw no conclusions in that regard in this report, we have retained appendixes I-IV, which show average PHMAP scores and the number of troubled, standard-, and high-performing PHAs in HUD's regions. Where HUD provided us with manually verified data—particularly in appendix II showing troubled authorities—we have modified the appendixes to reflect the more accurate data. The Questionable Accuracy of PHMAP's Scores and the Program's Validity Limit Its Usefulness Our review and those of others indicate that PHMAP scores are often inaccurate and imprecise and must be changed when HUD verifies the data that public housing authorities have submitted to support their scores. 
Furthermore, professional property managers and others in the public housing industry question whether PHMAP can capture all aspects of management operations. Although HUD has taken some steps to help ensure that future scores are more accurate than they have been over the program’s first 4 years, these steps will be resource-intensive and do not address all of the program’s limitations. In the past, both HUD and the Congress have proposed additional uses for PHMAP, such as deregulating and awarding bonuses to PHAs with high PHMAP scores. However, until greater confidence exists that individual scores are accurate and HUD brings greater validity to PHMAP as a comprehensive measure of management operations, such additional uses for the program may not be appropriate. Accuracy of Scores and Validity as a Management Assessment Tool Limit Uses for PHMAP After performing on-site reviews of selected PHAs to confirm the accuracy of their PHMAP scores, HUD’s field offices changed half of the scores. In commenting on this report, HUD indicated that most confirmatory reviews involved high-risk PHAs, whose PHMAP data have been most susceptible to being found inaccurate. In similar reviews, HUD’s independent assessment contractors as well as HUD’s IG found that many scores or grades for specific indicators were inaccurate. To better identify PHAs that need oversight and technical assistance, HUD staff often supplement their decision-making with other measures of management problems to get a more complete picture of an authority’s performance. Professional property managers and industry representatives agreed that more information is needed than PHMAP provides to give a complete picture of how well a PHA’s management is performing. After Confirmatory Reviews, PHMAP Scores Change Significantly After performing confirmatory reviews of 200 PHAs in fiscal year 1995, HUD’s 49 field offices changed 98 PHMAP scores (see table 4.1). 
In several cases, the changes HUD made to PHMAP scores also meant HUD would have to change the performance designation of those PHAs. For example, HUD lowered the scores of 14 PHAs enough to designate them as troubled, raised the scores of 4 troubled PHAs to 60 points or higher, and raised the scores of 10 standard-performing PHAs to 90 or higher. Both of HUD's independent assessment contractors, as well as HUD's IG, have reviewed PHMAP data to confirm the accuracy of PHAs' scores. For example, in 1993, the IG confirmed the scores of 12 housing authorities. As a result of this review, the IG concluded that the PHMAP scores for 9 of the 12 PHAs should be lowered; the revised scores for 3 of these PHAs fell below 60, which should have warranted the troubled designation. In a second report on PHMAP, the IG reported that six of HUD's field offices reduced over half of the scores they reviewed. Similarly, one of HUD's independent assessment contractors reported that for the 30 assessments it has performed at troubled housing authorities, it found 21 indicator grades or PHMAP scores that were inaccurate. Over 50 percent of the contractor's assessments resulted in lowering the indicator grades to an "F." The contractor most often lowered the indicators used to measure outstanding work orders and annual inspections of housing conditions and systems. Several reasons explain why HUD and others changed so many PHMAP scores after performing a confirmatory review. Some field office staff said these scores changed because the PHAs did not understand all the requirements of PHMAP and therefore misreported their data. They also told us that PHMAP is particularly difficult for smaller housing authorities whose limited staff can find HUD's paperwork requirements overwhelming. HUD staff do not believe many PHAs intentionally try to deceive the Department by reporting false PHMAP information. 
Instead, they, as well as the contractor staff, said that the PHAs often have insufficient documentation to support the data they must submit to the field offices or do not understand how HUD wants them to report the information. For example, while a PHA may report the average number of days its housing units have been vacant, the PHA may not have the tenant files to document when the previous tenants moved out and when the new tenants' leases took effect. Without supporting documentation or evidence of a system to track unit turnaround, HUD assigns an "F" to this indicator. Similarly, a PHA may provide support programs for its residents but fail to understand that its board of commissioners must approve those programs to receive a passing grade on PHMAP's indicator for resident initiatives. Typically, when HUD's field office staff find examples such as these during a confirmatory review, they use the correct data to recalculate the housing authority's grade for each of the affected indicators. HUD and Industry Professionals Supplement PHMAP With Additional Factors to Evaluate Management Performance HUD's field office staff did not use PHMAP alone to assess the management performance of public housing authorities. Although they agreed that PHMAP accurately identifies troubled authorities, several staff said that they consider other factors besides PHMAP indicators to supplement their decision-making for the other authorities they oversee. They said that some PHAs with scores over 90 have management problems that the program's indicators do not measure. 
Other factors that some HUD staff use to identify potential management problems at standard- and high-performing authorities include a PHA's failure to implement consistent and effective operating policies and procedures; frequent changes in executive leadership and continued interference in a PHA's daily operations by its board of commissioners; the number and type of telephone calls received from a PHA's residents and staff; and adverse news stories about a PHA. Staff at the five field offices we visited said that they believed some housing authorities with high PHMAP scores were not operating their housing programs efficiently or effectively. These field offices differed, however, in how they treated those PHAs. Staff at two field offices told us that although they use the scores to determine which PHAs need on-site reviews, they would not let a high score prevent them from visiting an authority they believed had serious management problems. The HUD IG also questioned whether PHMAP scores accurately measure the management performance of public housing authorities. The IG's reviews of high- and standard-performing PHAs found instances of fraud and program abuse. For example, the IG reported that the executive director of a high-performing PHA had charged over $62,000 in ineligible expenses, including excessive compensatory time, unsupported travel costs, and health and insurance benefits for his divorced spouse. Another PHA executive director falsified PHMAP data to obtain a high-performing designation. After reviewing the operations of a standard-performing PHA, the IG also cited numerous program abuses and mismanagement. The IG concluded that although PHMAP could be a useful tool to assess PHAs, the program was too unreliable for HUD to use in making oversight decisions. 
Other public housing professionals—property managers and those representing industry associations—agreed that more information is needed than PHMAP provides to give a complete picture of how well a PHA is managed. For example, they noted that PHMAP does not automatically include an on-site observation and inspection of a PHA's housing developments. One association noted that a PHA could improve its grade on the rents uncollected indicator simply by writing off more past-due rents from former tenants as uncollectible; its PHMAP score would not measure how diligently it had tried to collect the rent. Another industry association official knew of several examples of PHAs making good property management decisions, such as choosing to perform deferred maintenance when a unit became vacant rather than renting it immediately, that ironically led to lower PHMAP scores. Citing a similar situation, HUD has agreed that occasionally the best decision for a PHA is to take an action that yields a lower PHMAP score, and that the score should not be the sole driving force influencing a PHA's decisions. The Congress and HUD Have Proposed to Use PHMAP as a Basis for Deregulation and Funding Bonuses While HUD's primary use of PHMAP has been to identify troubled housing authorities and target technical assistance to them, the Congress and HUD have proposed to use this program for other purposes. In 1994, the Senate Committee on Banking, Housing, and Urban Affairs proposed some deregulation and additional flexibility for those authorities that had achieved PHMAP scores of 90 or above. In addition, in its fiscal year 1997 budget request, HUD proposed to give high-performing PHAs bonuses based in part on their PHMAP scores. Because PHMAP scores do not always measure the true management performance of the PHAs, the benefits of these proposals need to be weighed against the possibility of granting undeserved flexibility and awards. 
To encourage individual PHAs to be more innovative, the Banking Committee proposed limited deregulation and additional flexibility for high-performing PHAs in two ways. First, it proposed permitting a PHA that generates income over a certain level to exclude that income from calculations of its need for a subsidy from HUD to operate and manage its properties. At that time, each dollar of extra income that a PHA generated reduced its subsidy by a dollar, thereby creating a disincentive to generate additional income from sources other than rent. Second, the Committee proposed to waive all but a few key regulations—such as nondiscrimination, equal opportunity, and tenant income eligibility—so high-performing PHAs could have more flexibility to bring innovative solutions to local problems and achieve more efficient operations. In its fiscal year 1997 budget request, HUD proposed to award $500 million to high-performing PHAs as bonuses based, in part, on their PHMAP scores. As we reported in our testimony in June 1996 and as we found in the course of our work on this report, HUD does not confirm the scores of high performers and generally accepts them. In our June 1996 testimony, we recommended that the Congress consider not appropriating the bonus funding until HUD develops adequate performance measures and supporting information systems. The HUD appropriations bill which the Congress approved and the President signed did not contain funding for performance bonuses. The three associations representing the public housing industry and the professional property managers that we interviewed all opposed or had strong reservations about using PHMAP scores for purposes other than identifying troubled housing authorities and targeting technical assistance to them. They also believed that other uses would be inappropriate because of the limited number of confirmatory reviews the field offices perform and the proportion of PHMAP scores that have been changed after a review. 
Two of the associations did not believe that PHMAP scores adequately measured the management performance of housing authorities because they thought some PHAs that received high scores did not provide their residents with decent, safe housing. The professional property management firm that independently verified some scores also agreed that the usefulness of these scores is limited. Because this firm has recommended lowering many scores after an independent assessment, the firm lacks confidence in the scores’ accuracy and does not believe that the program provides enough information about the management performance of PHAs for HUD to make effective funding decisions. Conclusions In recent years, both the Congress and HUD have proposed additional uses for PHMAP, such as bonuses to reward those housing authorities with the highest scores. While PHMAP has provided a quantifiable means to assess the management performance of housing authorities, the scores are not sufficiently accurate for detailed comparisons of performance. Although HUD is currently working to enhance the accuracy of these scores, they do not yet provide a comprehensive, generally accepted way to assess the performance of PHAs. To be useful for other purposes, not only would these scores have to be more accurate, but the program would have to be expanded to provide a more comprehensive measure of public housing authorities’ management operations. Because HUD does not frequently confirm most scores—confirmatory reviews have focused on troubled PHAs—HUD does not know how many authorities are not receiving the proper designation. When HUD does confirm scores, it changes half of them—and more than half of these changes result in HUD’s lowering the score. We found that when HUD lowers a PHMAP score, it does so by an average of 14 points. 
If this average change held true for housing authorities in general, then HUD may not be properly designating as troubled those authorities currently scoring between 60 and the low 70s whose scores should be lower. As a result, those authorities are not receiving the oversight and technical assistance HUD should be providing to improve their performance. Recommendations We recommend that HUD (1) not consider additional uses for PHMAP, including using its scores as criteria for funding bonuses, until it determines that PHMAP meets an acceptable level of accuracy and more comprehensively measures property management performance and (2) require its field offices to confirm the PHMAP scores of housing authorities with scores low enough that they are at risk of being designated troubled. Agency Comments HUD agreed with our findings and recommendations. When we met with HUD officials, including the General Deputy Assistant Secretary for Public and Indian Housing, to discuss a draft of this report, they told us that the Department is no longer considering additional uses for PHMAP, such as using scores as criteria for funding bonuses. Even in the absence of using PHMAP for such purposes, we believe that it is important that HUD work to ensure scores are more consistently accurate and have, therefore, retained this recommendation. HUD has begun taking steps to address our recommendation that it confirm PHMAP scores of those housing authorities that are at risk of being designated troubled but expressed concern that it may not have sufficient resources to fully implement this recommendation. HUD expressed three concerns relating to the information and conclusions presented in this chapter of our report. 
HUD believed that this chapter (1) assumes that PHMAP was intended to be an all-inclusive assessment system for property management, (2) does not place PHMAP in a historical perspective, and (3) reaches incorrect conclusions regarding the overall reliability of PHMAP scores. We do not believe that we characterize PHMAP’s purpose as being an all-inclusive measure of property management. Our discussion of the program does not state that this is the purpose of PHMAP. Rather, the report discusses how the program’s limitations—including its intentional design not to be a complete performance measure—affect its suitability for additional purposes, such as those proposed in recent years by HUD and the Congress. HUD agreed that there is a perception that PHMAP is an all-encompassing system to assess the performance of PHAs and stated it is taking steps to address this misperception. Seeking to clarify the program’s purpose, HUD added language to its recently revised interim PHMAP rule (published in the December 30, 1996, Federal Register) stating that the program’s indicators reflect performance in only specific areas. HUD correctly states that this report does not provide a historical perspective of PHMAP by discussing previous HUD systems for assessing and identifying troubled housing authorities. We believe that such information would not contribute substantially to our report’s three objectives: to evaluate HUD’s use of the current program, to provide trends in PHMAP scores from fiscal years 1992 through 1995, and to discuss limitations in the program’s design and implementation that affect its usefulness for purposes other than identifying troubled housing authorities and targeting assistance to them. Therefore, we have not added the historical information HUD suggested to the report. Finally, HUD is concerned that we have incorrectly reached conclusions about the reliability of all PHMAP scores based on the results of confirmatory reviews of high-risk authorities. 
HUD noted that the accuracy of the scores of these PHAs does not necessarily represent the accuracy of all PHMAP scores because the data provided by these PHAs are most susceptible to being inaccurate. Our report did not reach a conclusion about the reliability of all housing authorities’ scores because of the changes that resulted from confirmatory reviews. This report discusses the reliability of PHMAP scores for housing authorities whose scores are low enough that they may be at risk of being designated troubled. We have added language to the report to clarify this point.
Pursuant to a congressional request, GAO reviewed the Department of Housing and Urban Development's (HUD) use of its Public Housing Management Assessment Program (PHMAP), focusing on: (1) HUD's use and implementation of the program at its field offices; (2) public housing authorities' PHMAP scores over the first 4 years of the program; and (3) limits on any additional uses for the program. GAO found that: (1) most of HUD's field offices are using PHMAP to identify troubled housing authorities and target HUD's limited technical assistance resources; (2) however, the field offices have not been systematically using the assessment program, as required by statutes and regulations, to monitor housing authorities' progress in improving their performance and target technical assistance to them; (3) the impact of a 1995 reorganization of the field offices' functions and current departmental downsizing continue to influence some offices' ability to provide technical assistance; (4) performance scores generally have increased during the first 4 full years of the program; (5) with average scores increasing, the total number of troubled housing authorities has decreased, and the greatest proportion of those that are troubled are the smallest authorities, those managing fewer than 100 units; (6) the proportion of high-performing authorities has increased steadily from about 33 percent in 1992 to over 50 percent in 1995; (7) high-performing authorities manage nearly 50 percent of all public housing units; (8) periodically, HUD officials provide the Secretary of Housing and Urban Development and Congress information on the performance of all housing authorities as well as the number of troubled authorities; (9) HUD's confirmatory reviews of the information underlying assessment scores have shown the scores to be inaccurate in half the instances when such reviews were performed; (10) regardless of the scores' accuracy, HUD and public housing industry officials do not believe that 
the management assessment program comprehensively assesses how well local housing authorities manage their properties; and (11) this is because the assessment program does not include indicators to specifically measure overall housing quality or the quality of maintenance.
Background Excise Tax on High-cost Employer-sponsored Health Insurance PPACA’s excise tax on high-cost employer-sponsored health insurance is imposed when the value of employees’ health coverage exceeds a threshold, referred to as the tax’s applicable dollar limit. The applicable dollar limit was established in statute for 2018, the year the tax was originally to be implemented. PPACA stipulated that for 2019, the applicable dollar limit would increase by the percentage increase in the Consumer Price Index for All Urban Consumers (CPI-U), plus an additional 1 percentage point. For 2020 and each year thereafter, the applicable dollar limit would increase in step with the CPI-U. The Consolidated Appropriations Act, 2016 delayed the tax’s implementation until 2020. Some economists have noted that because health care premiums have historically outpaced the CPI-U, the share of employers affected by the tax can be expected to grow over time. The basis for determining the value of employees’ health coverage that is measured against the applicable dollar limit of the tax—referred to as applicable coverage—is defined in statute. Applicable coverage includes both the employer’s and the employee’s pre-tax contributions to the premium for a group health plan and to a flexible spending arrangement, Archer Medical Savings Account, health savings account, or health reimbursement arrangement. The amount of an employee’s applicable coverage that exceeds their applicable dollar limit—known as the excess benefit—is subject to the tax. Because applicable coverage can vary by employee, for example, depending on whether or not they chose to contribute to a flexible spending arrangement or health savings account, the tax is determined separately for each employee. As a result, the tax could be owed for some employees and not others. 
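The per-employee logic described above can be sketched as follows. This is an illustration of the statutory structure only, not IRS guidance; the dollar limit and the coverage amounts are hypothetical values chosen for the example.

```python
# Hedged sketch of the per-employee excess-benefit calculation described
# above. The limit and the coverage amounts are hypothetical, not statutory.
APPLICABLE_DOLLAR_LIMIT = 10_200  # hypothetical threshold for one coverage tier

def excess_benefit(applicable_coverage, limit=APPLICABLE_DOLLAR_LIMIT):
    """Portion of one employee's applicable coverage that exceeds the limit.

    Applicable coverage combines employer and employee pre-tax premium
    contributions plus FSA/HSA/HRA-type contributions, so it varies by
    employee even within one employer.
    """
    return max(0, applicable_coverage - limit)

# Two employees of the same employer: the tax can be owed for one and not
# the other, because only one elected a flexible spending arrangement.
employee_a = 9_500            # premium contributions only, below the limit
employee_b = 9_500 + 2_500    # premium plus an FSA election, above the limit

print(excess_benefit(employee_a))  # 0: no excess benefit, no tax owed
print(excess_benefit(employee_b))  # 1800: amount subject to the tax
```

Because the calculation runs separately for each employee, an employer cannot simply compare an average plan cost against the limit; each employee's elections matter.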
Age and Gender Adjustment The age and gender adjustment is designed to make the applicable dollar limit—the threshold for the tax—higher for employers with workforce demographics that are typically costlier than average. Specifically, the law stipulates that the age and gender adjustment would increase the applicable dollar limit by an amount equal to the excess of a) the premium cost of the BCBS Standard plan, if priced for the age and gender characteristics of all employees of an employer, over b) the premium cost of the BCBS Standard plan, if priced for the age and gender characteristics of the national workforce. In 2015, the IRS released a notice outlining a draft proposal for how the age and gender adjustment might be implemented. The notice proposed using BCBS Standard plan premium and claims cost data (including claims costs classified into 5-year age and gender groups), as well as CPS national workforce data, to produce published tables that an employer could use to calculate its age and gender adjustment based on its specific workforce data. In its notice, IRS asked for comments on whether the calculation of group costs should rely on actual claims data from the BCBS Standard plan or, as an alternative, on “national claims data reflecting plans with a design similar to that of the [BCBS Standard plan].” FEHBP provides health care coverage to federal employees, retirees, and their dependents through health insurance carriers that contract with OPM. In 2015, FEHBP provided an estimated $47.9 billion in health care benefits to roughly 8.2 million individuals, according to agency officials. Carriers offer plans in which eligible individuals may enroll to receive health care coverage. For the 2015 plan year, FEHBP options included fee-for-service plans that were available nationwide, plans available only to certain types of federal employees (e.g., postal workers), and plans offered by health maintenance organizations that were available only in certain regions. 
Of these plans, some were high-deductible plans and consumer-driven plans. Generally, individuals are able to choose from several plans, but most FEHBP contract holders were in plans offered by the BCBSA. In addition to offering the Standard plan, BCBSA also offers the Basic plan, and combined, these two plans are among the most popular of FEHBP plans. Experts Cited Benefits and Limitations of the FEHBP BCBS Standard Plan Data and Identified Alternative Data Sources, Which Also Have Limitations The BCBS Standard plan has many characteristics that experts cited as important when considered for use as the basis of the age and gender adjustment. However, they also noted that it has limitations because it is not fully representative of the national workforce, has selection bias, and has experienced declining enrollment in recent years. Experts identified alternative cost data sources, but these data sources also have limitations. Some experts also expressed concern with the use of a premium value as the basis for the adjustment and suggested alternative approaches. Experts Noted That the BCBS Standard Plan Is a Large and Convenient Source of Cost Data, but Underlying and Changing Member Demographics Limit Its Strengths According to industry and actuarial experts we interviewed and stakeholders that commented on IRS’s notices for the age and gender adjustment, BCBS Standard plan data have several benefits when considered for use as the basis of an age and gender adjustment, as stipulated in the law. Specifically, it is a large dataset that includes several years of data and is readily available (convenient). Experts we spoke with identified these as important characteristics for cost data that is to be used as the basis of an age and gender adjustment. Specifically, experts noted that the data source should have the following characteristics: Be representative. Several experts noted that the data source should reflect the demographics of the broader U.S. 
population, the national workforce, or the population eligible for employer-sponsored insurance, to the extent possible. Differences in the demographics between the broader population and the data source used for an adjustment could have an impact on health care costs and utilization and, thus, have an impact on the adjustment. Be large. Several experts pointed out that an ideal data source would be large, in terms of the number of individuals covered, in part because sufficient data are needed within each of the age and gender groups. Contain several years of data. Some experts pointed out the benefit of using a data source that has been in existence for some time and that has several years of data so that one would have confidence that the data for a given year are not unusual. Be convenient. For the purposes of the government’s use, several experts also noted that convenience of the data source could be important to consider—such as the ease with which the government can access and use the data and the costs for obtaining them. Notably, the data from the BCBS Standard plan meet several of these characteristics because the plan is large and relatively popular, covering just over 3 million members across the United States in 2015, which made it the FEHBP plan with the highest enrollment. It is also a mature plan that has been in existence since 1959. Finally, it is convenient in that it is already available and familiar to the federal government, and BCBSA already provides summary cost and enrollment data to OPM on an annual basis. However, experts and stakeholders identified two important limitations to using BCBS Standard plan cost data as the basis of an age and gender adjustment: 1) not being representative of the national workforce due to selection bias and 2) declining enrollment. Selection bias. 
Enrollment in the BCBS Standard plan is affected by selection bias among the FEHBP options, which may result in the plan not being representative of the national workforce. Within the FEHBP, federal employees can choose among many different health plan options. The BCBS Standard plan is a relatively expensive plan within the FEHBP and covers older and sicker members compared to other, less expensive plans, such as the other nationwide BCBS FEHBP option, BCBS Basic. Actuarial experts also noted that the BCBS Standard plan may be less attractive to healthier individuals and younger families who may be more attracted to the FEHBP health maintenance organization options, including high-deductible and consumer-driven plans, or the BCBS Basic plan. Officials from OPM noted, and our review of two years of cost data confirms, that members in the BCBS Standard plan generally have higher health care costs than their counterparts in BCBS Basic and that this is particularly true for younger members. While other employers may offer more than one plan, most employers do not provide the number of options that the federal government provides, so selection bias among plans offered by other employers may be less extreme. Experts and stakeholders noted that the selection bias within the FEHBP, whereby the BCBS Standard plan disproportionately covers young members with higher health care costs, may result in an age and gender adjustment that is not adequate. For example, in part because the BCBS Standard plan disproportionately covers young members with higher health care costs, the ratio of the average claims costs of the older age groups to the average claims costs of the younger age groups is smaller than it would be in a plan that did not have that particular selection bias issue. As such, the ratios of costs for older age groups to costs for younger age groups would be understated compared to the ratios calculated based on data of a more representative population. 
If the claims cost data used for the adjustment had ratios that were understated in this way, then the adjustment based on these data might also be too small, for example, for employers with older demographics. Some experts and stakeholders also noted selection bias in the FEHBP more broadly, in that its members, who include employees as well as retired former employees and their dependents, are not representative of the national workforce. For example, they noted that the federal workforce is skewed to a higher proportion of older workers than the national workforce. However, some experts we spoke with asserted that this may not be a limitation that would generally affect the use of FEHBP data for the age and gender adjustment because relative costs between older and younger employees in the federal workforce are likely similar to those of the national workforce. Declining enrollment. In addition, while the BCBS Standard plan is large, it has experienced declining enrollment in recent years. Specifically, from 2010 through 2015, enrollment in the BCBS Standard plan decreased by over 10 percent. In contrast, enrollment in the BCBS Basic plan increased significantly from 2010 through 2015—a 46 percent increase in contract holders. (See table 1.) Notably, the cumulative enrollment for the two BCBS FEHBP plans has been relatively stable over time. OPM officials noted that over time, this shift in enrollment from the Standard to the Basic plan may further exacerbate the demographic differences between Standard plan members and other populations, including the Basic plan and the general employed population. They also noted that it was possible that the BCBS Standard plan could continue to experience an enrollment decline, becoming more disproportionately skewed to older and higher-cost members. Finally, OPM, IRS, and Treasury officials all noted that any one plan offering could be discontinued. 
For example, in 2002, BCBSA merged its High Option plan in FEHBP with the Standard plan and added the Basic Option plan. Experts Identified Potential Alternative Data Sources; However, Those Data Sources Also Have Limitations Experts cited other potential cost data sources, but each of these sources also has limitations. These sources and their limitations include the following: The Agency for Healthcare Research and Quality—a research agency within HHS—maintains Medical Expenditure Panel Survey data collected through its annual survey, which contains cost information based on respondent recollection and provider-reported data. According to agency officials, its 2014 dataset includes information on over 7,500 employer-sponsored insurance contract holders. While these data are grounded in a nationally representative probability sample and include the years 1996 to present, the survey’s relatively small size may prove to be a limitation when classified into the necessary age and gender groups. Agency officials noted that several years of data could be pooled to ameliorate this issue. Blue Health Intelligence—an independent licensee of BCBSA—maintains data from many, but not all, BCBS plans across markets. Its dataset is large; however, because the members covered in the data only include BCBS members, it is not known whether the data are representative of national demographics. In addition, using these data would likely require contracting with Blue Health Intelligence for proprietary data, making this option potentially inconvenient. The Health Care Cost Institute (HCCI)—a research institute—maintains claims data from plans offered by Aetna, Humana, Kaiser Permanente, and UnitedHealthcare. According to HCCI representatives, its most recent year of data covers over 40 million employer-sponsored members. HCCI’s dataset is large and includes the years 2007 to 2015, but it is not known whether the data are representative of the national workforce. 
HCCI representatives told us that the data contain members in all 50 states and the District of Columbia, but some states have lower counts of members. They also noted that the data can be adjusted through weighting to make them more representative. However, as of June 2017, HCCI data did not contain information from BCBS plans, which represent the majority of enrollment in the insurance market in many states. In addition, it is not possible to identify costs by coverage type, such as self-only, which is needed to calculate the age and gender adjustment. Truven Health Analytics, an IBM company (Truven), is a healthcare data and consulting company that maintains the MarketScan claims database. According to Truven representatives, its 2015 dataset covers claims from 28.5 million members across its various clients and includes data mostly from large employers with self-funded health plans. Truven’s MarketScan dataset is large and goes back to 1995, but Truven’s data consist of a convenience sample—data collected from organizations that happen to be clients of Truven—and it is not known whether the data are representative of the national workforce. Truven representatives told us that the data contain members in all 50 states, but some states have lower counts of members. They also noted that the data can be adjusted through weighting to make them more representative. In addition, using Truven data would likely require contracting with Truven for proprietary data, making this option potentially inconvenient. Because these alternative data sources also have limitations, and given the benefits identified for the BCBS Standard plan, some experts stated that, while imperfect, the BCBS Standard plan is a fairly reasonable option for the basis of the age and gender adjustment. 
However, because of its noted limitations, its use could result in adjustments to the tax threshold that are not as effective as they could be for certain employers—in particular, for employers with older employees. Some Experts Cited Concerns about how Premiums Might Be Used in Determining the Adjustment Amount Some experts we interviewed and stakeholders that commented on IRS’s notices for the age and gender adjustment raised concerns about tying an adjustment to a premium value. As stipulated by PPACA, the age and gender adjustment would increase the applicable dollar limit by “…an amount equal to the excess of aa) the premium cost of the [BCBS Standard plan], if priced for the age and gender characteristics of all employees of the individual’s employer, over bb) the premium cost of the [BCBS Standard plan], if priced for the age and gender characteristics of the national workforce.” This could be achieved by establishing a dollar value for the adjustment by taking an employer-specific premium cost and subtracting a national premium cost, both priced using the BCBS Standard plan costs applied to the employer-specific and national workforces, respectively. This would create a specific dollar difference that would represent the adjustment for that employer. It could also be achieved by creating an adjustment factor by taking the percentage difference of these employer-specific and national premium costs. Two actuarial experts and one industry expert we spoke with suggested that a percentage difference approach would be more appropriate than a dollar difference approach. Specifically, one actuarial expert contended that the value of the adjustment could be distorted if the value of the BCBS premium cost in any given year was unusually high or low. In either case, the percentage difference between costs priced for the employer’s workforce and costs priced for the national workforce would be the same (assuming no changes to the workforce makeup), but the dollar difference would not. 
(See table 2.) If a percentage difference approach were used, the adjustment factor created through this approach would need to be converted to a dollar value to determine a specific adjustment amount. All three experts who suggested this approach noted that the adjustment factor could simply be applied to the tax’s applicable dollar limit, which will increase over time in line with the CPI-U. A similar approach could be to apply the adjustment factor to a portion of the tax’s applicable dollar limit, for example, a portion estimated to represent health premium costs, excluding estimated costs associated with other health benefits such as flexible spending arrangements or health savings accounts. Another approach could be to apply the adjustment factor to a value that represents actual health care costs, such as an estimated average employer-sponsored premium, which would increase over time in line with health care inflation. We note that the decision on what to apply the adjustment factor to when using a percentage difference approach would be dependent on the policy goal: Limit the rate of growth of the adjustment value to general inflation. If the adjustment factor were applied to the applicable dollar limit for the tax year or a portion of that limit, then the adjustment dollar amount would be expected to increase somewhat more slowly over time than it would if it were tied to an amount representing actual health care costs, which would rise at the steeper rate of health care inflation. This could be preferable if the policy goal were to limit the rate of growth of the adjustment dollar amount to a rate lower than the typical health care inflation rate. Keep the rate of growth of the adjustment value in line with health care inflation. If the policy goal were to allow the adjustment dollar amount to increase in step with health care inflation, then it would be preferable to tie the adjustment to an amount representing employer-sponsored health plan costs. 
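The contrast between the two approaches can be sketched numerically. All premium values below are hypothetical; the example only illustrates why the percentage factor is stable across unusually high or low premium years while the dollar difference is not.

```python
# Sketch of the dollar-difference vs. percentage-difference approaches the
# experts described. All premium values are invented for illustration.
def dollar_difference(employer_premium, national_premium):
    """Adjustment as a specific dollar amount for one employer."""
    return employer_premium - national_premium

def percentage_factor(employer_premium, national_premium):
    """Adjustment as a percentage difference (an adjustment factor)."""
    return (employer_premium - national_premium) / national_premium

# Same workforce mix priced in a typical year and in a year when all plan
# premium costs are assumed to run 10 percent higher than usual:
typical = (6_600, 6_000)   # (employer-priced premium, nationally priced premium)
high = (7_260, 6_600)

print(dollar_difference(*typical), dollar_difference(*high))   # 600 vs. 660: distorted
print(percentage_factor(*typical), percentage_factor(*high))   # 0.10 in both years

# Converting the factor back to dollars depends on the policy goal; one
# option experts mentioned is applying it to the applicable dollar limit:
applicable_dollar_limit = 10_200  # hypothetical
print(round(percentage_factor(*typical) * applicable_dollar_limit))  # 1020
```

Under this sketch, the choice of what to multiply the factor by (the full limit, a portion of it, or an average premium) is exactly the policy decision described above, since each base grows at a different rate over time.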
The number of employers that receive the age and gender adjustment yet still become subject to the tax would increase more quickly over time if the adjustment were tied to the applicable dollar limit, which increases with the CPI-U, than if it were tied to an amount representing health care costs. Combining Premium Cost Data from Multiple FEHBP Plans Could Mitigate Standard Plan Data Limitations Combining data from multiple FEHBP plans could mitigate some of the limitations of sole reliance on the BCBS Standard plan data as the basis for the age and gender adjustment, including concerns regarding selection bias. Several experts and stakeholders who commented on IRS’s notices suggested this approach. They noted that combining data from multiple FEHBP plans, such as data from the BCBS Standard and Basic plans, could mitigate concerns. They specifically said that an adjustment based on data from the BCBS Standard plan alone may not be adequate due to the plan’s selection bias within the FEHBP, as previously discussed. The BCBS Standard plan is a relatively expensive plan within the FEHBP and covers members with higher health care costs compared to other less expensive plans, including the BCBS Basic plan. We found that combining the data from these two plans could mitigate this selection bias. Specifically, we found that the adjustment may be particularly affected by selection bias among young Standard plan contract holders with higher health care costs. In particular, combining 2015 data from these two plans increased the percentage of young contract holders, and also increased the ratio of the average claims costs of older contract holders to the average claims costs of younger contract holders. (See fig. 1.) In addition to mitigating certain selection bias concerns, combining data from multiple FEHBP plans could address concerns regarding the BCBS Standard plan’s declining enrollment. 
Combining data from multiple FEHBP plans—such as the BCBS Standard and BCBS Basic plans— would result in a more stable underlying enrollment population, based on current enrollment trends. Our analysis of OPM data shows that increases in BCBS Basic plan enrollment exceeded declines in BCBS Standard plan enrollment, resulting in a net increase in combined enrollment. Specifically, the number of contract holders enrolled in BCBS Standard and BCBS Basic plans combined increased 3.2 percent from 2010 through 2015. (See table 3.) In addition, combined contract holders accounted for 65 percent of all FEHBP contract holders. Some experts and stakeholders also suggested that combining data from more FEHBP plans could further improve the data, by capturing individuals who select other types of plans, such as health maintenance organizations or high-deductible health plans. However, we note that combining data from different plans would require appropriate actuarial adjustments to account for cost differences that result from benefit design and other differences among the plans. Using combined FEHBP data as the basis for the age and gender adjustment to include a broader selection of younger members could result in a different adjustment that could increase the adjustment amount for some employers. For example, we calculated a hypothetical, illustrative adjustment amount using the BCBS Standard plan only, as well as using BCBS Standard plan data combined with BCBS Basic plan data. We did this for a hypothetical employer with a workforce that is, on average, older than the national workforce—an employer that would likely receive an age and gender adjustment—without making any actuarial adjustments to the data. 
We found that combining 2015 cost data for active federal government workers enrolled in the BCBS Standard and BCBS Basic self-only coverage plans resulted in a higher adjustment amount for the hypothetical employer than did an adjustment based on BCBS Standard data alone. This also resulted in a higher percentage difference adjustment factor for the hypothetical employer. (See table 4.) According to our analysis, combining the BCBS Standard and BCBS Basic data resulted in an increase in the ratio between the average claims costs of the oldest and youngest groups, yielding higher age and gender adjustment amounts for our hypothetical employer. We found that different adjustment amounts could have an impact on the total amount of taxes owed for an employer’s workforce depending on the number of employees to which the tax was applied. For example, the adjustment amounts could determine whether or not an employee’s coverage is subject to the tax and, if the employee’s coverage is subject to the tax, how much tax is owed. Using the previously presented hypothetical example of an employer with a workforce that is older, on average, than the national workforce can illustrate the potential impact. In this example, we compare the hypothetical tax owed for 100 similar workers employed by that employer to illustrate the difference in the total taxes owed depending on whether the data used as the basis for the age and gender adjustment are the BCBS Standard data alone or the combined BCBS Standard and Basic data. (See table 5.) Standards for internal control suggest that effective information is vital for an entity to achieve its objectives. Although the current law specifies the use of premium cost data from the BCBS Standard plan, relying on BCBS Standard plan data alone does not provide IRS with the comprehensive information it may need to determine an appropriate and adequate age and gender adjustment. 
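The mechanism behind the higher combined adjustment can be illustrated with an enrollment-weighted pooling of average claims costs. The enrollment counts and cost figures below are invented, not actual BCBS Standard or Basic plan data, and the sketch omits the actuarial adjustments that pooling plans with different benefit designs would require.

```python
# Illustration, with invented numbers, of why pooling a second plan with
# more young, lower-cost members raises the old-to-young claims cost ratio
# that drives the age and gender adjustment. Not based on actual plan data.
def combined_average(groups):
    """Enrollment-weighted average claims cost; groups = (enrollment, avg_cost)."""
    total = sum(n for n, _ in groups)
    return sum(n * cost for n, cost in groups) / total

# Hypothetical figures for the youngest age band in each plan:
standard_young = (10_000, 4_000)  # fewer, higher-cost young members (selection bias)
basic_young = (30_000, 2_000)     # more, lower-cost young members
oldest_avg_cost = 9_000           # assumed similar across the two plans

young_combined = combined_average([standard_young, basic_young])  # 2,500

print(oldest_avg_cost / standard_young[1])  # 2.25: ratio using Standard alone
print(oldest_avg_cost / young_combined)     # 3.6: larger ratio after combining
```

A larger old-to-young ratio in the underlying cost data translates into a larger adjustment for an employer whose workforce is older than the national workforce, which is the direction of the effect shown in tables 4 and 5.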
Because the Consolidated Appropriations Act, 2016 delayed the implementation of the age and gender adjustment until 2020, an opportunity exists for IRS to consider options for mitigating the limitations of the BCBS Standard plan premium cost data. IRS and Treasury officials told us they are considering what flexibility they have under the statute to do so. Conclusion The age and gender adjustment was designed to increase the applicable dollar limit of the tax for employers with employees that are expected to be costlier than average so that taxes are owed based on the plan design and not based on member costs. Use of the BCBS Standard plan premium costs as the basis of the age and gender adjustment, as stipulated in the law, has certain limitations, primarily because of selection bias. Further, data limitations may become more pronounced over time because the plan has been experiencing declining enrollment. Other potential data source options exist, but these options also have limitations. Combining data from multiple FEHBP plans could mitigate selection bias concerns, as well as any concerns about the future of the BCBS Standard plan alone. However, if data were pooled from plans with different benefit structures, the data may need to be actuarially adjusted. Nonetheless, because of its limitations, using the BCBS Standard plan data alone as the basis of the age and gender adjustment could result in an adjustment that is not as effective as it could be at increasing the applicable dollar limit for employers with costlier than average employees. Recommendation for Executive Action We recommend that, in implementing the age and gender adjustment, the Commissioner of Internal Revenue consider taking steps to mitigate the limitations of the BCBS Standard plan premium cost data—such as by combining data from multiple FEHBP plans. 
If the costs of plans with different benefit structures are combined, the Commissioner should consider whether an appropriate actuarial adjustment should be used. If the Commissioner determines that the statute does not provide the flexibility to mitigate the limitations of the BCBS Standard plan premium cost data by combining data from multiple sources or by other means, we recommend seeking that authority from Congress. Agency Comments and Our Evaluation We provided a draft of this report to IRS and OPM for review and comment. The draft report was also reviewed by Treasury. Subsequent to reviewing the draft, IRS and Treasury officials contacted us to share some of their concerns with the wording of the recommendation related to combining claims costs of multiple health plans with varying designs. As a result of these discussions, we clarified our recommendation language so that it more explicitly focused on the need to mitigate the limitations of the BCBS Standard plan data. We continue to believe that it is worthwhile to consider using cost data from multiple FEHBP plans, but, as we note in the report, if this is done, an actuarial adjustment should be considered. We then shared the clarified recommendation language with the agencies. We later received written comments from IRS, which are reproduced in appendix I. We also received technical comments on the draft from both IRS and OPM, which we incorporated as appropriate. We did not receive additional comments from Treasury. In its written comments, IRS neither agreed nor disagreed with our recommendation, but stated that it would consider the recommendation as it continues to review comments received in response to an agency notice and work with the Department of the Treasury to issue guidance on the age and gender adjustment. We are sending copies of this report to the Commissioner of Internal Revenue and the Acting Director of the Office of Personnel Management.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Internal Revenue Service Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Gerardine Brennan (Assistant Director), Kate Nast Jones (Analyst-in-Charge), Barbara Hansen, and Laurie Pachter made key contributions to this report. Also contributing were Sandra George, Emei Li, Vikki Porter, Jennifer Rudisill, and Jennifer Stratton.
Some stakeholder groups have questioned the use of the BCBS FEHBP Standard plan premium costs as the basis of the age and gender adjustment, as stipulated by PPACA. The Consolidated Appropriations Act, 2016 includes a provision for GAO to report on the suitability of using these data for this purpose. This report examines: 1) the benefits and limitations of using FEHBP BCBS Standard plan data as the basis of the age and gender adjustment, and what alternatives to these data could be considered; and 2) how any limitations to BCBS Standard plan data could be mitigated. GAO reviewed IRS documentation; interviewed industry experts and officials from IRS, the Office of Personnel Management (OPM), the Department of the Treasury, and other agencies; reviewed comment letters submitted in response to IRS notices; and analyzed 2010 and 2015 cost and enrollment data from OPM. The Patient Protection and Affordable Care Act (PPACA) included a revenue provision for a 40 percent excise tax on high-cost employer-sponsored health coverage to be administered by the Internal Revenue Service (IRS). The tax would be imposed when an employee's annual cost of coverage exceeds an established dollar limit. This limit could be adjusted upward if an employer's workforce—based on its age and gender characteristics—was likely to have higher health costs than the national workforce, on average. This adjustment, known as the age and gender adjustment, is based on the premise that older individuals and younger females tend to have higher health care costs than other individuals. It is designed to lower the tax burden so that taxes are owed based on the plan design and not based on the health care costs of its members. PPACA stated that this adjustment would be made based on the premium costs of the Blue Cross and Blue Shield (BCBS) Standard plan under the Federal Employees Health Benefits Program (FEHBP). The BCBS Standard plan has benefits and limitations for use as the basis of the adjustment.
The benefits include that it is a large, national, decades-old, convenient data source, in that it is already known by, and available to, the federal government. However, there are some specific limitations to its use. The BCBS Standard plan has selection bias within FEHBP because members have a choice among many plans, and, compared to other options available to federal employees, it is a relatively expensive plan that covers members with higher health care costs. GAO's analysis of OPM data found that these higher costs are particularly true for younger members. The plan's enrollment has declined in recent years. Furthermore, officials noted that any one plan offering could be discontinued. The selection bias in the BCBS Standard plan may result in an age and gender adjustment that is not adequate. For example, because the BCBS Standard plan covers young members with higher health care costs, the ratio between the average claims costs of the younger and older members in that plan is smaller than it would be in a plan that did not have that particular selection bias issue. Therefore, the age and gender adjustment could be too small. While experts GAO spoke with identified several potential alternative sources of cost data for use as the basis of the adjustment, those alternatives also had limitations, such as not being convenient sources of data and potentially not being representative of the national workforce. To mitigate limitations of the BCBS Standard plan, these data could be supplemented with data from other FEHBP plans, such as the BCBS Basic plan, which is known to have younger members with lower health care costs and increasing enrollment. GAO found that using combined data from these two sources could result in a different adjustment for some employers—in particular, for those with older employees. Standards for internal control suggest that effective information is vital for an entity to achieve its objectives. 
Relying on BCBS Standard plan data alone does not provide IRS with the comprehensive information it may need to determine an adequate age and gender adjustment.
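The selection-bias effect described above can be made concrete with a small numerical sketch: when a benchmark plan's young enrollees are costlier than typical, the oldest-to-youngest claims-cost ratio is compressed, and an adjustment derived from that ratio comes out smaller than it otherwise would. All figures below are invented for illustration; GAO's actual enrollment and cost comparisons appear in the report's tables.

```python
# Hypothetical sketch of how selection bias compresses the ratio of
# oldest-group to youngest-group average claims costs, and how pooling
# a second plan's data can widen it. Dollar figures are invented.

def cost_ratio(oldest_avg, youngest_avg):
    """Ratio of the oldest group's average claims cost to the youngest
    group's; a larger ratio supports a larger age and gender adjustment
    for employers with older workforces."""
    return oldest_avg / youngest_avg

# Standard plan alone: its young enrollees are assumed sicker (and thus
# costlier) than typical, inflating the youngest-group average and
# compressing the ratio.
standard_ratio = cost_ratio(oldest_avg=12_000.0, youngest_avg=4_000.0)

# Standard + Basic combined: Basic's healthier, cheaper young members
# pull the youngest-group average down, widening the ratio.
combined_ratio = cost_ratio(oldest_avg=11_500.0, youngest_avg=3_200.0)

print(standard_ratio)   # 3.0
print(combined_ratio)   # ~3.59 -- wider ratio, larger adjustment
```

This is the intuition behind the report's finding that combining BCBS Standard and BCBS Basic data increased the oldest-to-youngest cost ratio and yielded a higher adjustment amount for the hypothetical older-workforce employer.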
Background EPA was established in December 1970 to consolidate a variety of federal research, monitoring, standard-setting, and enforcement activities into one agency to ensure the protection, development, and enhancement of the total environment. To help accomplish its mission, EPA in 1971 established a library network that came to comprise 26 libraries located across the country. The library network functions as a collection of independent local libraries, catering primarily to the needs of local EPA staff and walk-in public patrons. The libraries are funded and managed by several different regional and program offices at EPA (see fig. 1). EPA defines network libraries as those of its libraries with an official membership presence in the global Online Computer Library Center system. The combined EPA library network collection contains a wide range of general information on environmental protection and management and on basic and applied sciences, as well as extensive coverage of topics related to the statutory mandates that EPA must meet. Several of the libraries maintain collections focused on special topics to support specific regional or program office projects. The libraries thus differ in function, scope of collections, extent of services, and public access. 
In addition to its physical locations and holdings, the EPA network provides staff and public access to its collections through the following: its online library system, a Web-based database of all of EPA library holdings, also known as EPA’s online “card catalog”; interlibrary loans to another library within the network or to other libraries; a Web site combining two databases—EPA’s National Service Center for Environmental Publications (NSCEP) and its National Environmental Publications Internet Site (NEPIS)—which provide an online gateway for access to available print and digital documents, respectively; and desktop access for staff to online journals, the Federal Register, news, databases of bibliographic information, and article citations. In addition, librarians are available in each library to catalog and maintain collections and to assist EPA staff and the public with research. In 2003, EPA began studying options for operating its library network in the future. In August 2006, the agency issued a reorganization plan, titled EPA FY 2007 Library Plan: National Framework for the Headquarters and Regional Libraries. The focus of this plan was a reorganization of the headquarters library and the 10 regional libraries, all of which received substantial funding from EPA’s Office of Environmental Information. The 2007 library plan identified a new model for library services, which consisted of three components: a coordinated library network, instead of stand-alone operations; more electronic delivery of services; and maintenance of existing essential services. During implementation of this plan, EPA closed the Chicago, Dallas, and Kansas City regional libraries and closed its headquarters library to physical access, although the headquarters library remained as one of three repositories for storing print collections. 
Another library located at EPA headquarters within the Office of Prevention, Pesticides, and Toxic Substances (referred to as the “chemical library”) was not subject to budget reductions and was not discussed in the reorganization plan, but, like the headquarters library, was also closed to physical access. EPA also reduced or eliminated the library staff at the closed libraries. Several other libraries reduced their operating hours, and some libraries disposed of their materials or dispersed them to other EPA libraries or to non-EPA libraries. EPA also began to digitize EPA documents not currently in NEPIS, beginning with documents in the libraries being closed. EPA’s reorganization plan also discussed how the closed libraries were to handle their collections, directing them primarily to disperse the collections to other libraries. EPA’s implementation of its reorganization plan caused widespread concern among its staff, the public, interested parties, and Congress. In response to these concerns, congressional committees directed $1 million in funding to restore the libraries recently closed or consolidated, asked us to review EPA’s reorganization plan and its implementation, and directed EPA to prepare a report regarding actions to restore publicly available libraries. In addition, EPA in January 2007 imposed a moratorium on its reorganization efforts. Until April 2007, EPA’s library network operations had been guided by EPA’s Information Resources Management Policy Manual, which stated that the library network was to provide EPA staff with access to information to carry out the agency’s mission and that the libraries were to provide state agencies and the public with access to the library collection. The Policy Manual also defined the role of a national library program manager, who was to have responsibility for coordinating major activities of the library network, although not budget authority for the libraries. 
EPA replaced this manual in April 2007 with an interim library policy and, in May 2009, with its final library policy. The final May 2009 policy also defined key roles and responsibilities, including those of the national library program manager and those of “federal library managers,” who were to have first-line responsibility for operating the physical libraries and providing services. EPA lifted the moratorium in June 2009, following implementation of its May 2009 policy and 6 of 12 proposed procedures for the library network. After we issued our report in February 2008, Congress held hearings on EPA’s library network reorganization efforts, which were followed by the release of EPA’s March 2008 report addressed to Congress, in which EPA stated that it would reopen the closed libraries by September 30, 2008. In our 2008 report on EPA’s library network reorganization, we assessed the reorganization effort against our past work on key practices and implementation steps to assist mergers and organizational transformations. These key practices include ensuring that top leadership drives the transformation, establishing a coherent mission and integrated strategic goals to guide the transformation, and setting implementation goals and a timeline to show progress from day one. EPA Has Not Completed a Strategic Plan for Its Library Network Identifying an Overall Strategy for the Network Although it has been preparing a strategic plan for its library network for 3 years, EPA has not completed a plan identifying its overall network strategy, with implementation goals and a timeline for what it seeks to accomplish. In our 2008 report, we stated that EPA was developing a library strategic plan for 2008 and beyond, which was to detail library services for staff and the public and lay out a vision for the library network’s future. EPA has had a draft outline of this strategic plan since July 2007. 
We reported that in October 2007, EPA’s Office of Environmental Information asked local unions throughout the agency to comment on a draft of the plan. The draft outline of the strategic plan envisions the library network as “the premier environmental library network that provides timely access to information and library services to its employees and the public” and proposes to realize this vision by increasing emphasis on electronic resources and using new information technologies. The draft outline of the strategic plan also lists several principles as the foundation for present and future directions of the library network: setting overall goals and objectives and a direction for implementation; periodic review of the plan to evaluate progress and update strategies to respond to new opportunities or challenges; soliciting input from internal and external stakeholders; and developing the plan in a transparent manner by reporting progress and soliciting input from interested parties. According to EPA officials, since 2007, EPA has been in the process of assessing library users’ needs, an assessment that officials believe must be completed before they can finish the strategic plan. EPA officials have stated that a working group led by the national library program manager is to resume work on the plan later in 2010. The draft outline of the strategic plan is largely a list of current and planned EPA activities—primarily placeholders to be completed. For example, under the heading, “Digitization,” the text states that the digitization procedures will outline the methods to be used by EPA libraries to prepare and send EPA documents for digitization and inclusion in NEPIS; no goals or timeline for implementing these activities—which we previously reported are among the key components of successful organizational transformations—are given.
We have found that an organization’s transformation is strengthened when it publicizes implementation goals and a timeline to build momentum and show progress. Despite an emphasis on the central role to be played by electronic library resources, the draft outline of the strategic plan briefly describes procedures for packing and shipping documents to be digitized, without describing actions to be taken to digitize holdings or target dates for accomplishing those actions. EPA holds an enormous amount of environmental information, including publications generated by its program offices, as well as research publications generated under contracts, grants, and cooperative agreements. A large portion of this information exists only in print form. As part of its vision for the library network, according to the draft outline, EPA is seeking to convert this information into a digital format to make it more widely available and readily accessible to users. Yet the draft outline of the strategic plan does not describe criteria for deciding what documents should be digitized, for deciding whether or how to digitize copyrighted documents of value, or for scheduling the funding needed and a time frame for completing the digitization. Without such criteria, EPA cannot ensure that it is digitizing the most valuable or important documents or providing users with information most relevant to them. Furthermore, although the draft outline of the strategic plan includes a placeholder for a section describing a funding model for the network, it contains no detail. Under the heading, “Funding Model,” the text states that this section in the plan will address how EPA will ensure that the network libraries have “adequate funding” and will discuss how funding decisions are made, along with the Office of Environmental Information’s role in the funding process. 
But the draft outline of the strategic plan does not define what constitutes adequate funding, although inadequate funding has been a concern for the library network since fiscal year 2007. Until then, library spending had remained relatively stable, ranging from a high of $9.2 million in fiscal year 2002 to a low of $8.2 million in fiscal year 2006. In fiscal year 2007, when EPA’s budget was reduced, library spending was $6.3 million. The draft outline of the strategic plan also does not set out the details of how funding decisions are made. Setting out details for how such decisions are to be made, to ensure that they are informed and transparent, is especially important because of the decentralized nature of the library network. The library network’s funding remains subject to uncertainty in the future because the several different program and regional offices responsible for EPA’s libraries generally decide how much to spend on their libraries out of funding available in larger accounts that support multiple activities. EPA’s Office of Environmental Information, the primary source of funding for the regional libraries, typically provides funding through each region’s support budget and generally gives regional management officials discretion on how to distribute this funding among the library and other support services, such as information technology, utilities, and mail room support. EPA officials told us that, starting in fiscal year 2010, they are increasing the amount of funding allocated to the libraries in the regions. The regions also obtain a much smaller portion of their library funding from other program offices. For example, the Superfund program office funds the storage and maintenance of information on the National Priorities List, EPA’s list of the most seriously contaminated sites in the United States. The extent to which other program offices provide funding to the regional libraries varies. 
Thus, without a detailed strategy for how decisions are made to acquire, deploy, and manage funding resources across the library network, EPA may find it difficult, particularly in an era of declining budgets and competing national priorities, to achieve its vision for the library network and to fully meet the needs of library users. Moreover, although the draft outline of the strategic plan contains a section on communication among network libraries, it makes no mention of a strategy or a formal outreach plan to ensure that EPA communicates with and obtains feedback from users about improvements to its library network in a way that would allow it to measure whether such improvements are meeting users’ needs. The section lists communication methods EPA is using, such as monthly network teleconferences among staff and federal managers. In addition, the section identifies comment cards, questionnaires, and a Web presence for how it solicits users’ feedback, but there is no mention of how EPA plans to assess feedback on what is important to users or what improvements are working well or poorly. EPA has another draft document, titled “EPA Library Network Communication Strategies,” whose purpose is to establish procedures by which libraries in the network are to communicate both internally and externally. Most of this document focuses on communication within the library network itself, explaining how the library network is coordinated and detailing mechanisms for internal communication, including annual meetings for library network staff and federal library managers. One of the final sections in this procedures document lists several means of communicating externally, including Web sites and various local options for libraries to reach out to local patrons, such as tours, signs, comment cards, and online feedback mechanisms. 
Beyond listing such mechanisms, however, this document, like the draft outline of the strategic plan, does not lay out a systematic communication strategy with communication and feedback performance goals that can be measured to determine progress. Without such a strategy, communication with library users is likely to remain piecemeal and reactive. Since 2008, EPA Has Reopened Closed Libraries and Taken Other Actions EPA has reopened all of the libraries it closed in 2007 and has taken other actions to improve library operations. In its 2008 report addressed to Congress, the agency stated its commitment to have libraries in each region and at headquarters open to the public. EPA also committed to reestablishing on-site libraries for its staff and the public in the three regions where the libraries had been closed and at the combined headquarters and chemical library. EPA reopened all five closed libraries by September 2008, although the agency had to find new space for two of the three closed regional libraries and their collections, and all three of these regional libraries are operating on reduced schedules. Each of the reopened libraries was staffed with a professional librarian and required to maintain a collection of core reference materials and additional library resources to meet local needs and to ensure that staff had access to core library services and the public had access to the library and its collections. With the reopening of the closed libraries, one other regional library that had not been closed also began operating on reduced schedules, as compared with its hours before reorganization (see table 1). As of September 2010, about half of the 10 regional libraries were operating 4 days a week, rather than 5 days as they were before EPA’s reorganization efforts—a reduction in hours due largely to funding constraints, according to library officials. 
All of the libraries are staffed with at least one full-time or part-time librarian, with several libraries having more than one librarian or additional library staff. In addition to reopening the closed libraries, according to EPA officials we spoke with, EPA developed standards for the regional and headquarters libraries’ use of space, on-site collections, staffing, and services. These standards specify, for example, that the libraries make adequate space available for in-person interactions between library staff and users, that on-site collections and materials should address local and regional needs, that staff and the public have certain minimum hours of access per week (at least 4 days per week on a walk-in or appointment basis in the regional libraries and at least 5 days per week on a walk-in or appointment basis in the headquarters library), and that the libraries provide interlibrary loans and reference or research assistance. One of the key actions taken by EPA in May 2007 was to hire a national library program manager, a position that had been vacant since 2005. Housed in EPA’s Office of Information Analysis and Access, within the Office of Environmental Information, the national library program manager is charged with carrying out day-to-day activities of the library network and with bringing focus and cohesion to the network. Part of this charge involves agencywide responsibility for public information access, including strategic planning for the library network, and participating in policy formulation regarding access to EPA’s public information. To fulfill this leadership role, EPA officials said, the national library program manager is to work closely with the management of EPA’s Office of Environmental Information to set in motion a number of actions meant to improve library network operation and communication. 
To communicate with and gather feedback internally, the national library program manager initiated monthly teleconferences and annual meetings for all library managers and staff. Seeking to get the most out of the experience and knowledge of these library managers, librarians, and staff, the national library program manager established internal working groups to research improvement activities, address concerns, and recommend improvements. For example, the national library program manager established working groups on digitization, staff information needs, and development of the final May 2009 library policy and related procedures (see table 2). In addition, the national library program manager serves as the EPA- appointed representative in working with outside library professionals, specifically an external board of advisors created by the Federal Library and Information Center Committee, which advises EPA on future directions for the library network. EPA Has Resumed Digitizing Unique EPA Documents but Has Not Inventoried Its Holdings EPA has restarted its process of digitizing some of its libraries’ holdings, but because the agency has not completed an inventory of its holdings, it does not know the total number of documents to be digitized. According to EPA data, which our limited testing found to be insufficiently reliable, EPA had digitized 16,175 documents from its libraries as of January 2010. Creating an electronic copy of a document by means of digitization is relatively simple—essentially the same scanning process for making photocopies—although it can be time-consuming and expensive if the document contains special features such as foldout pages, cannot be taken apart, or needs to be digitized at a high level of resolution or in color. 
After the resulting electronic files are uploaded to EPA’s Web databases, the administrator of EPA’s online library catalog is to ensure that links to the digitized documents are included in the bibliographic records for each document. According to EPA officials, the present digitization effort will expand NEPIS, EPA’s sole electronic archive of published material, which, according to EPA officials, contains 40,000 publicly accessible digital documents as of June 2010, up from 4,000 in 1996. According to EPA documents, the digitization process is to take place in three phases: Phase 1 covered unique EPA documents held by the libraries that were closed under the reorganization plan. EPA data show that 15,260 documents were digitized during this phase, which was completed in January 2007. Phase 2 is to cover all remaining unique EPA documents except those larger than 11 by 17 inches. According to EPA officials, this phase is scheduled for completion in December 2010 and should produce 10,102 additional digitized documents, bringing the total number of digitized library documents available to the public to over 25,000. Phase 3 is to include EPA documents of which more than one copy exists in the library network, plus unique EPA documents larger than 11 by 17 inches. As of July 2010, EPA was beginning to plan this phase. As of September 2010, the total estimated cost for digitizing EPA’s library holdings remained unclear, in large part because EPA has not completed an inventory of its holdings and has therefore not determined the total number of documents that need to be digitized. When it began digitizing documents from the closed libraries in 2006 under phase 1, EPA estimated that the project would cost $80,000—$78,000 for scanning and $2,000 for uploading the digital files to the Web databases—although, according to agency officials, the agency did not track the actual costs. 
For phase 2 digitization, which began in fiscal year 2009, EPA estimated the cost at about $327,000. EPA has not estimated the cost or a completion date for its final, phase 3 digitization effort, in part because the agency is still cataloging all its library holdings in a single database so it can inventory all the documents that need to be digitized. One regional librarian we spoke with, for example, told us that about 2,000 documents in the regional library’s catalog were not in EPA’s online library system, and it is still unknown which or how many of these documents will need to be digitized. Without a complete catalog or inventory of its holdings, EPA cannot determine which documents, or how many, will need to be digitized and, consequently, cannot accurately estimate the total cost of digitization or how long it will take. According to the national library program manager, an EPA workgroup is currently drafting a new cataloging procedure for the libraries and expects the procedure to be approved and implemented before the end of 2010. This procedure requires each network library to inventory its collection on a regular basis, either the entire collection every 3 years or one-third of the collection each year. In addition, EPA library officials observed that a significant number of the documents in EPA libraries are copyrighted, and to date EPA does not plan to digitize them. EPA, like other federal agencies, often contracts with entities in the private sector to do work. In addition, EPA provides financial assistance in the form of grants and cooperative agreements to various recipients, such as state, local, and tribal governments; educational institutions; hospitals; and nonprofit organizations. Such assistance is documented in an assistance agreement. Both contracts and assistance agreements may result in the production of copyrighted documents. 
In the case of contracts, federal regulations provide that when a contractor is permitted to assert a copyright in any document(s) produced, the government has a license to display the copyrighted work publicly, which includes posting it on a Web site accessible to the public. In the case of works produced under assistance agreements, on the other hand, the government has a right to reproduce, publish, or otherwise use a copyrighted work for federal purposes, but EPA’s Office of General Counsel has determined that inclusion in EPA’s online public library would not constitute a federal government purpose. On the advice of EPA’s general counsel, EPA’s digitization workgroup has recommended digitizing documents created under contract but not those created under EPA’s assistance agreements. According to EPA’s grant awards database, these agreements have resulted in more than 21,000 grants valued at over $40 billion in taxpayer dollars. Some of these grants led to publications, resulting in a substantial body of publicly funded written material. According to EPA’s general counsel, EPA may digitize such documents so that staff and others may use them for federal government purposes but may not disseminate them for other purposes. EPA may also seek permission from copyright holders to digitize and disseminate online copyrighted documents produced under assistance agreements, although some costs may be associated with obtaining such permissions—tracking down copyright holders after years, or even decades, have passed, for example—further complicating any estimation of total digitization costs. An alternative practice has been in use by the Federal Library and Information Network, the business arm of the Federal Library and Information Center Committee: permission to use copyrighted material produced under assistance agreements is sought at the time an agreement is established. 
If the prospective copyright holder grants permission, then a statement to that effect is incorporated into the assistance agreement, incurring minimal, if any, additional costs. Without permission from copyright holders, however, documents prepared under EPA assistance agreements, using taxpayer dollars, will remain unavailable online to the public.

EPA Has Taken Steps to Communicate with Staff and Other Stakeholders about Its Network, but Its Staff Survey Was Flawed

EPA has taken steps to communicate with staff and other stakeholders about its library network—including providing information about the libraries as well as soliciting information from library users—but a 2009 survey about its staff’s information needs was flawed. In general, EPA staff and external stakeholders told us the agency is doing a better job of communicating with them and soliciting input on the operations and future direction of the library network, particularly at the local level. Representatives from EPA’s employees’ unions and regional science councils stated that communication about the library and its services—such as new resources, training, and open houses—is primarily done at the regional level, either through e-mail or the region’s intranet page. Although staff have not been directly solicited for feedback, according to the representatives, no outstanding issues regarding the libraries have been raised, except that a few representatives said they would like to see the libraries open 5 days and 40 hours per week. To keep library managers and staff engaged in improving library operations, EPA has adopted a number of techniques to communicate with them and solicit their input. These techniques have allowed EPA to gather staff input for policies and procedures, operational issues, and Web page improvements.
Examples include the following:

- The national library program manager holds monthly network teleconferences with library managers and staff on matters of interest to the entire network or on operational topics, such as the library policy and procedures. The national library program manager also holds ad hoc teleconferences with library managers elsewhere in the network to discuss their libraries’ needs.
- Managers and staff use mailing lists to communicate with one another about daily library operations or requests for assistance.
- For the last 3 years, EPA has held an annual network meeting in different locations for library managers and staff to foster collaboration, provide training, and share information about the network. At the last meeting, in October 2009, participants discussed ways to address results of the 2009 staff survey, prepared for the next round of digitization, and discussed ways to improve library services. The next annual meeting is scheduled for March 2011.
- EPA and the union representing EPA staff agreed to create a union-management advisory board with six members—three union representatives and three from EPA management. The board reviews and makes recommendations on library network policy and procedures and will review the library network strategic plan once it is completed.
- In December 2009, EPA instituted a pilot program, an “ask a librarian” live chat. Ten libraries are participating in the program, which lets users contact a librarian through an electronic link to request services. As of July 2010, the “ask a librarian” pilot was available only to EPA staff.

To begin to realize its vision of effectively implementing new information technologies and making documents readily available electronically, EPA in 2007 engaged a contractor to review the user-friendliness of the combined Web page, or gateway, to the NSCEP and NEPIS online databases. The review identified ways to improve the site’s effectiveness and overall functionality for users.
EPA implemented many of these improvements. For example, the gateway home page now describes the purpose of each database (NSCEP for print materials and NEPIS for electronic documents) and what types of publications they contain, noting that they contain only EPA publications. EPA made several changes to the document display page as well, such as placing a navigation bar at the top and bottom of the document with large icons and providing a button that allows users to obtain a copy in one of three formats. The display page allows users to put a copy of frequently used documents in a holding area for later retrieval. Work is also under way to integrate Google search capabilities into this gateway, as well as the ability to refine the precision of searches with user-friendly “clouds” of related keywords. In addition, EPA has added easy ways for users to offer feedback, which EPA may then incorporate to make improvements. For example, the navigation bar on the document display page now includes a “report an error” button, and every page has a “contact us” link. Furthermore, the left frame of the site contains a link to a customer satisfaction survey, and the site also has a separate page for user feedback. EPA has also made outreach efforts to library professionals outside EPA— primarily by presenting and exhibiting at professional library association trade shows and conferences, attending external training, visiting other federal national libraries, and interacting with its external board of advisors. Ties with the external board have been a particularly important part of EPA’s response to concerns over the agency’s library reorganization. From June 2007 through February 2010, the national library program manager met with the board approximately 20 times, working with it on the full range of key issues, from policy development to funding models to communication with stakeholders. 
The external board also advised EPA on the development of a survey to assess the information needs of EPA staff. One of the principles in the draft outline of the strategic plan is soliciting feedback from internal and external stakeholders about their information needs. To solicit such feedback from staff, EPA in early 2008, under the direction of the national library program manager, engaged a contractor to conduct interviews, hold focus groups, and conduct a Web-based survey. The survey was made available for approximately 1 month in 2009 via a secure Web site only to EPA staff (about 17,000 individuals), not all of whom were library users. After the survey was completed, the contractor conducted a series of focus groups and one-on-one interviews with EPA managers, focusing on significant issues identified in the Web-based survey; according to EPA officials, these in-person discussions were to help ensure that a comprehensive perspective of user needs was captured. EPA officials stated that the Web-based survey results corroborated what the agency learned in an earlier survey, done in 2004 to 2005. On the basis of the survey results, focus group discussions, and management interviews, the contractor developed recommendations for EPA’s consideration. EPA received the results of the survey and discussions, along with the contractor’s recommendations, in August 2009 and has assigned working groups of library staff to review the findings and suggest how EPA could address them. We found, however, that this survey had flaws, similar to those we identified in the 2004 to 2005 survey and discussed in our 2008 report, which greatly reduce its usefulness. First, in both the earlier and the 2009 surveys, the response rate was 14 percent, far lower than the 80 percent response rate that Office of Management and Budget guidance recommends for a survey to increase the likelihood that it adequately represents a universe of respondents. 
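The response-rate comparison involved here is straightforward to illustrate. The sketch below, using hypothetical staffing categories and counts rather than EPA's actual data, computes a survey's overall response rate against the 80 percent benchmark and compares each group's share of respondents with its share of the population, the basic check for nonresponse bias:

```python
# Hypothetical example (not EPA's actual data): a staff population of
# about 17,000 and about 2,380 respondents, matching the 14 percent
# rate reported for EPA's survey. Category names are invented.
population = {"scientist": 8_000, "attorney": 2_000, "admin": 7_000}
respondents = {"scientist": 1_500, "attorney": 500, "admin": 380}

pop_total = sum(population.values())    # 17,000
resp_total = sum(respondents.values())  # 2,380
rate = resp_total / pop_total
print(f"response rate: {rate:.0%}")     # 14%, well below the 80% benchmark

for group in population:
    pop_share = population[group] / pop_total
    resp_share = respondents[group] / resp_total
    print(f"{group}: population {pop_share:.0%}, respondents {resp_share:.0%}")
```

Large gaps between a group's share of the population and its share of respondents suggest that the people who answered may differ systematically from those who did not, which is exactly the bias a nonresponse analysis is meant to detect.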
Neither EPA nor the contractor for the 2009 survey analyzed the results for the nonresponse bias that may occur at response rates lower than 80 percent, particularly if the group of respondents differs significantly in relevant ways from nonrespondents. Thus, EPA cannot be assured that either survey accurately described staff needs for information or their uses of the library. Second, respondents to both the 2009 and the earlier survey were self-selected, a methodology that often leads to biased samples, since the traits that affect a person’s decision to participate in the survey—such as strong opinions or substantial knowledge—may be correlated with traits that affect survey responses; the result is an unrepresentative sample of possible respondents. The risk of potential bias through self-selection is increased by the fact that neither EPA nor the contractor for the 2009 survey instituted any safeguards to prevent respondents from submitting more than one survey each. Thus, there is no assurance that the survey results are unbiased and reflect a broad range of EPA staff perspectives and experiences. Third, in neither survey did EPA gather views from or determine the needs of other significant users of EPA libraries, such as state and local environmental organizations or the public at large. Although EPA officials told us that EPA is planning to assess the needs of public patrons, an assessment that does not correct the methodological weaknesses we found in EPA’s two previous surveys of its staff is unlikely to produce results that accurately reflect the needs of public patrons.

Conclusions

In the 4 years since EPA issued a reorganization plan for its library network, the agency has taken a number of steps to better communicate with, and meet the needs of, library users.
EPA’s lack of a completed strategic plan identifying its overall strategy for the network, however, leaves unclear how the agency will translate into reality its vision of a “premier environmental library” with an “emphasis on electronic resources.” Steps the agency has taken, including hiring a national library program manager and establishing a uniform policy and some procedures for the libraries, have led to some improvements in library services and will undoubtedly enhance network cohesion. But without a completed strategic plan that contains implementation goals and timelines, neither EPA nor users of its libraries can have a clear view of what EPA plans to do, when EPA plans to do it, and whether EPA’s actions will ultimately meet users’ needs. In particular, without a strategy for acquiring, deploying, and efficiently allocating library funding, the library network could have difficulty maintaining high-quality service in the digital age. Moreover, EPA’s approach to digitizing copyrighted works in the future—as well as the fact that the agency has not yet inventoried all library holdings—could, if not revisited, detract significantly from the utility of the library network. Specifically, unless EPA revisits its decision not to digitize documents prepared with taxpayer dollars under assistance agreements, it may be missing opportunities to make these documents more readily available to users, including other federal users, who need them to better carry out their work. Finally, improvements to the library network’s Internet gateway offer new means of seeking feedback from library patrons about their use of and need for library services. Nevertheless, EPA does not have a valid method for assessing those library users’ needs.
If, in future assessments of users’ needs, EPA fails to correct the flawed methods of its previous staff surveys, the agency is unlikely to obtain accurate information that would enable it to make appropriate decisions on the corrective actions that would best address those needs.

Recommendations for Executive Action

To ensure that EPA’s library network continues to meet its users’ needs, we recommend that the Administrator of EPA take the following six actions:

- Complete EPA’s strategic plan for the library network, including implementation goals and timelines. In so doing, EPA should outline details for how funding decisions are to be made, to ensure that they are informed and transparent.
- Complete an inventory of the library network’s holdings to identify what items remain to be digitized.
- For assistance agreements already in place, EPA should digitize documents produced under the agreements and make them available to federal employees and other authorized users for federal government purposes.
- In future assistance agreements, make explicit that EPA can include in the agency’s public online database, without obtaining prior permission from the copyright holder, any documents produced under the agreements.
- For future assistance agreements where EPA cannot make such an arrangement, EPA should digitize documents produced under the agreements and make them available to federal employees and other authorized users for federal government purposes.
- Ensure that the data analysis protocols used for conducting surveys of users’ needs, including sampling procedures and response rates, are sufficiently sound methodologically to provide reliable information on which to base decisions and allocate resources efficiently.

Agency Comments and Our Evaluation

We provided EPA with a draft of this report for review and comment. With clarifications, the agency concurred with our recommendations.
EPA acknowledged that the planning document available on the agency’s Web site—which our report refers to as the draft outline of the strategic plan— has provided more of a working agenda than a strategic plan to guide the rebuilding of the library program. In responding to our recommendations, EPA wrote that it believes it now has enough information to develop an effective strategic plan for the library network and that it is time to complete and publish a formal plan identifying an overall network strategy, with implementation goals and a timeline for future accomplishments. The agency stated that it is moving forward with the strategic plan, which it aims to complete in fiscal year 2011. In addition, EPA said it will address the inventory of library holdings, completing a schedule for cataloging the inventory by November 1, 2010, and striving to complete the cataloging by September 30, 2011. The agency further agreed to take the necessary steps to ensure that any future assessments of users’ needs employ methodologically sound data protocols and provide reliable information. Regarding our recommendations on the digitization of copyrighted documents produced under assistance agreements, EPA said it would address the feasibility and legality of digitizing products resulting from such agreements. For future assistance agreements, the agency said it will develop options for gaining advance permission to digitize products from these agreements and take these options to senior agency managers by mid-2011 for consideration and action. For existing assistance agreements, however, EPA wrote that, because of legal and technical constraints, it does not plan to digitize products produced under existing agreements. In further clarifying the agency’s written comments, EPA officials told us that because the documents produced under existing assistance agreements are copyrighted, the agency cannot include them in its public online database. 
In the agency’s view, EPA would therefore need to develop a forum for disseminating the documents to EPA staff and determine whether other federal employees needed access to the documents for federal government purposes. EPA officials also said that digitizing these documents was constrained by several factors, including agency priorities for which documents are to take precedence and efforts to identify which of the many types of assistance programs are likely to produce documents of most value to EPA staff. We have clarified the wording of our recommendations to eliminate any implied reference that EPA should make the copyrighted documents available to the general public in its online database, and we maintain that making copyrighted documents resulting from assistance agreements available solely for federal government purposes is permissible, feasible, and desirable. EPA’s written comments appear in appendix II. EPA also provided technical corrections, which we incorporated. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Administrator of EPA, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions on this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
Appendix I: Scope and Methodology

To complete our work, we reviewed relevant Environmental Protection Agency (EPA) funding and inventory documents, policies, plans, guidance, and procedures, as well as related regulations and requirements pertinent to the library network and efforts to improve its operations. We limited our review to the 26 libraries belonging to EPA’s library network, that is, libraries that are members of the Online Computer Library Center system. We focused on EPA’s headquarters library, the 10 regional libraries funded in part by EPA’s Office of Environmental Information, and the Office of Administration and Resources Management libraries in Cincinnati, Ohio (which is responsible for EPA’s digitization and Web site maintenance), and Research Triangle Park, North Carolina. We compared library operations before, during, and after the attempted reorganization in fiscal year 2007; obtained and reviewed library network policy and procedures; reviewed the agency’s draft outline of a strategic plan for the library network; obtained and reviewed documents on EPA’s digitization process; and reviewed EPA’s efforts to communicate with and solicit input from users. We interviewed EPA librarians and library managers in selected EPA libraries, as well as Office of Environmental Information officials knowledgeable about EPA’s library network and budget; when possible, we corroborated information provided to us during interviews with relevant documentation. We also interviewed management officials from the federal employees’ union representing EPA staff and spoke with representatives from EPA’s regional science councils, which consist of EPA scientists and technical specialists.
We further sought information from library professionals, including representatives from the Library of Congress; the National Agriculture Library; and, through visits and interviews, from Lockheed Martin and Integrated Solutions and Services, contractors involved in digitizing EPA documents. In addition, we obtained information on library funding from each of the 26 libraries in the network from fiscal year 2002 to fiscal year 2010. Because EPA does not specifically track funding for the libraries, the information provided contained a mix of outlays for some fiscal years and budget authority for other fiscal years. In addition, the information provided by each of the libraries reflected only spending by the library and not funding sources. For example, a large portion of funding for regional office libraries comes from the Office of Environmental Information, but these libraries also receive funding from other EPA program offices, such as Superfund. Also, funding data from the libraries contained a mix of funding for contract support; library staff salaries; and acquisition costs for books, journals, and other materials. We interviewed EPA budget officials to assess data reliability and performed a limited test to verify the accuracy and completeness of the data provided by the libraries. On the basis of this test and discussions with EPA officials, we concluded that the data were not reliable enough to include in our report. We also obtained data on the number of EPA and other documents that have already been digitized and the number still to be digitized. After limited testing and discussions with EPA officials, we determined that EPA’s data on library funding and on the number of digitized documents and those scheduled to be digitized were not sufficiently reliable for our purposes. Because these data were the only data available, however, we used them to some extent, noting their limitations in our report as appropriate. 
We also reviewed documents about EPA’s digitization process, guidance on what documents should or should not be digitized, and digitization contracts, and we discussed the contents of these documents with EPA and digitization-contractor officials. We also discussed EPA’s future digitization plans with Office of Environmental Information officials. In addition, we assessed EPA’s survey of library users, examining the adequacy of the survey methodology, including response rate, sampling methodology, security measures, survey questions, and processes. To determine the adequacy of the response rate to EPA’s survey, we followed an 80 percent response rate as a criterion, as Office of Management and Budget guidance recommends and we apply in our own surveys to increase the likelihood of sufficiently representing a universe of respondents. For surveys with response rates lower than 80 percent, we also perform an analysis to determine the existence of nonresponse bias. To generate its survey sample, however, EPA relied on self-selection, using a Web site to make the survey available to approximately 17,000 EPA staff; the response rate achieved was 14 percent. We performed a limited nonresponse analysis of EPA’s survey data and determined that some staffing categories were represented in proportions different from those found in the population of EPA staff. Given the 14 percent response rate to EPA’s survey, the nonrandom methodology that generated the sample, and the results of our limited analysis for nonresponse bias, we found EPA’s survey results to be inadequate for EPA’s purpose of obtaining a representative view of EPA library users. We also interviewed local union representatives from headquarters and some of EPA’s regional offices. Furthermore, we interviewed regional science council representatives from some of the regional offices. The science councils are located in each regional office and consist of EPA scientists and technical specialists. 
To determine the extent to which EPA communicated with and solicited views from outside stakeholders, we interviewed representatives from several professional library associations and other external stakeholder groups, such as the American Library Association, the Library of Congress, the Federal Library and Information Center Committee, and the Union of Concerned Scientists. We also reviewed information EPA provided to the public via the EPA Web site or, when applicable, Federal Register notices. We conducted this performance audit from October 2009 through September 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Environmental Protection Agency

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, Ed Kratzer, Assistant Director; Ellen W. Chu; Pamela Davidson; Les Mahagan; John C. Martin; Ben Shouse; and Jeannette Soares made key contributions to this report.
The Environmental Protection Agency's (EPA) library network provides agency staff and the public with access to environmental information. A 2006 attempt by EPA to reorganize its network by consolidating libraries and making more materials and services available online caused concern among users, and in 2007, EPA put a moratorium on its reorganization plans. Congress requested that GAO report on the reorganization and has again requested a follow-up on these issues. Accordingly, GAO reviewed (1) the status of EPA's overall strategy for its library network, (2) the status of EPA's plan to reopen the libraries it closed and other actions planned or taken, (3) EPA's efforts to digitize printed documents to make them electronically available, and (4) EPA's efforts to communicate with staff and other stakeholders about its library network. GAO reviewed regulations and agency funding and inventory documents and interviewed EPA staff and contractors, as well as independent library professionals. GAO also assessed the reliability of EPA's data on library holdings and from EPA's staff survey on library use and needs. Although EPA has taken a number of steps to meet the needs of library users, it has not completed a plan identifying an overall strategy for its library network, with implementation goals and a timeline of what it intends to accomplish. Scheduled for completion in 2008, the strategic plan was to provide EPA staff and the public a detailed view of EPA's library operations and future direction. The draft outline of the strategic plan, however, is largely a placeholder list of current and planned EPA activities. For example, while it emphasizes the central role to be played by electronic library resources, the draft outline does not contain goals or a timeline for completing an inventory of holdings or digitizing those holdings. The draft outline also does not set out details of how funding decisions are to be made. 
Given the current economic environment, without a completed strategic plan, including a detailed strategy for acquiring, deploying, and managing funding, EPA may find itself hard-pressed to ensure that the network can meet its users' needs. The agency has reopened libraries closed during reorganization, although about half the network's 10 regional libraries are operating with reduced hours. EPA has also developed standards for the regional and headquarters libraries' use of space, on-site collections, staffing, and services. The agency has also hired a national library program manager to carry out day-to-day activities and bring focus and cohesion to the network. Working closely with EPA management and library staff, the national library program manager, who is responsible for library network strategic planning, has set in motion a number of actions meant to improve library network operation and communication, including working closely with internal and external advisory boards and creating a library policy and related procedures. EPA has resumed digitizing some of its libraries' documents, although it has not inventoried the network's holdings. The agency is digitizing documents in three phases. Phase 1 was completed in January 2007, phase 2 is scheduled for completion in December 2010, and planning has begun for phase 3. Because EPA has not taken a complete inventory of its library holdings, however, it cannot determine which documents, or how many, will need to be digitized and, consequently, cannot accurately estimate the total cost of digitization or how long it will take. Since we reported on the library network reorganization in 2008, EPA has taken steps to communicate with staff and other stakeholders about its library network, including providing information about the libraries and soliciting information from library users. 
EPA has also made improvements to the main Internet gateway to the network, making more documents available electronically and providing better access to electronic documents and services. Nevertheless, because EPA's 2009 survey of the information needs and library use of its staff had methodological flaws (similar to those GAO identified in 2008), the agency is unlikely to obtain accurate information that would enable it to make appropriate decisions on the corrective actions that would best address library users' needs.
Background

Radar-guided missile systems emit radio-frequency energy, that is, radar signals, which reflect or bounce off the surfaces of aircraft in flight. In essence, all radar-guided missile systems use these reflected signals to locate and target aircraft. The Army currently has two types of radar countermeasure systems fielded on its helicopters to defend them from radar-guided missiles. The first type seeks to decoy the missile away from the aircraft by providing alternative reflected radar signals for the missile to follow. This is accomplished by using a missile warning system that detects approaching missiles and signals countermeasure dispensers on the aircraft to launch chaff in an attempt to confuse the missile’s radar. The second type of countermeasure system uses a radar-warning receiver and radar jammer to defeat radar-guided missile systems. A radar-warning receiver detects radar-guided missile systems so the aircraft’s pilot can navigate out of the missile’s range. If the systems cannot be avoided, a radar jammer emits electronic radio-frequency transmissions to confuse and/or blind the radar-guided missile system. The Army’s Suite of Integrated Radio Frequency Countermeasures system will include an advanced-threat radar-warning receiver and an advanced-threat radar jammer. (See figure 1.) These components are expected to provide state-of-the-art radar warning and jamming capabilities and to perform better than the Army’s currently fielded radar warning receivers and radar jammers. The advanced-threat radar-warning receiver will provide enhanced situational awareness by more precisely detecting, identifying, locating, and tracking multiple radio-frequency threat systems. Likewise, the advanced-threat radar jammer is expected to counter multiple and simultaneous modern radio-frequency threats.
In addition, the system can be reprogrammed to defeat different threat systems, and its modular open architecture allows for reconfiguring its components so that applications on multiple aircraft types are possible. For acquiring electronic warfare systems such as the new radar countermeasures system, departmental guidance states that developmental testing provides decisionmakers with knowledge about whether the system is ready to begin low-rate initial production, the next step in the acquisition process after engineering and manufacturing development. Developmental testing begins in a controlled environment by testing individual components of a system in the laboratory. Based on the results of this testing, individual components are modified, improved, and/or replaced until they meet component-level performance requirements. After the performance of each component is tested and validated, the developmental test process is repeated at the subsystem and finally system level. The developmental test process continues until the system’s ability to meet performance requirements when installed on a weapon system platform is tested and validated. According to the Department’s guidance for acquiring systems, low-rate initial production is designed to (1) establish an initial production base for the system and ensure adequate and efficient manufacturing capability, (2) produce the minimum quantity necessary to provide production-configured or representative articles for initial operational testing and evaluation, and (3) permit an orderly increase in the production rate for the system sufficient to lead to full-rate production upon the successful completion of operational testing. Operational testing, which follows developmental testing, is designed to determine whether a production-configured system can meet performance requirements in an operationally realistic environment.
Software Modifications Will Be Tested Before Low-Rate Initial Production Decision, But Hardware Modifications Will Not Be The Army’s contractor for its new radar countermeasures system has substantial software and hardware changes under way to improve the system’s performance, address the obsolescence of parts, reduce cost, and improve producibility. The Army intends to determine that the modified software performs as required in time for the low-rate initial production decision now scheduled for some point from January through March 2002. However, the current schedule does not provide for completion and integration of the hardware changes into the system until June 2002, with testing completed by September 2002. Software Issues Beginning in 1999, laboratory testing of developmental prototypes of the new radar countermeasures system indicated that significant software deficiencies had to be corrected before the system could meet performance requirements. Because of these software deficiencies, the prototype countermeasures system could not properly perform any of its major functions; that is, it could not properly detect, identify, track, or defeat threat radars. In response to these results, the Army’s Program Manager directed the system contractor to undertake the major software maturation effort that is now under way. For the software maturation effort, the Army directed the contractor to follow a disciplined maturation process. This involved breaking down the system’s software into a series of 10 blocks, with each successive block introducing more complex functionality (e.g., detect and identify one radar; detect and identify multiple radars; detect, identify, and jam one radar; and so forth). To ensure that the contractor adheres to this process, the Army does not approve the introduction of succeeding software blocks into the system until the functionality of the prior block has been demonstrated in the Army’s laboratory at Fort Monmouth, New Jersey. 
According to the Defense Contract Management Agency, which the Army has engaged to oversee the program, the ongoing software maturation effort was rated as high risk as of April 2001. Laboratory tests indicate that the software continues to have difficulty in properly detecting, identifying, tracking, and defeating threat-radar systems in complex environments where many radars are operating simultaneously. Moreover, according to the Agency, flight-testing on an Apache helicopter has recently begun, and a new set of software problems is being experienced because the operating environments of the aircraft and open-air test range are very different from the controlled conditions of the laboratory. For instance, interference resulting from the simultaneous operation of the system with the Apache’s fire control radar is resulting in system resets. Resets are instances when the software causes the system to reboot; they are unacceptable for countermeasure systems because, while the system is rebooting, the aircraft and aircrew are completely unprotected. Overall, the software maturation effort is 4 months behind schedule, and the contractor has been submitting increasing numbers of unanticipated software change requests each month for the past 6 months as the software blocks become more complex. Change requests have increased each month from September 2000, when they numbered 699, to March 2001, when they reached 923. The need to make unanticipated changes is expected in a software maturation process, according to the Defense Contract Management Agency; nonetheless, increasing numbers of changes result in additional cost to the program and the extension of test schedules. Of the 10 software blocks, blocks 1 through 8a have now been accepted, and the contractor was scheduled to deliver block 9 for testing in April 2001. 
(Block 8 did not pass acceptance testing at Fort Monmouth, so the contractor had to create block 8a, which was accepted by the Army in March 2001.) Hardware Issues While software maturation continues under the original developmental contract, the contractor is addressing hardware improvements under a separate $13.2 million technology insertion program contract to redesign, develop, and test new system components. The contractor plans to complete and integrate hardware changes into the system by June 30, 2002. The Army then plans to determine whether the modified system performs as required by September 2002. According to the contractor, replacing key hardware components of the current prototype system is necessary to reduce costs, address the obsolescence of electronic parts, enhance producibility, and improve system performance. The contractor is developing replacements for such components as the primary computer processor, the tracker used to locate radar sources, and the frequency synthesizer used to produce the electronic responses to hostile radar signals. The contractor is also replacing the analog wide-band receiver used to detect radar signals with an improved receiver based on digital technology. (See figure 2.) As of April 2001, the Defense Contract Management Agency was rating hardware issues and the system’s readiness for production as moderate risk. According to the Agency, the bases for this assessment include staffing shortages, parts delivery delays, and failures during electromagnetic interference, shock/vibration, and humidity testing, all of which are delaying the contractor’s schedule. Besides physical changes to the system, hardware changes will cause additional changes to be made to the system’s software. This is because the hardware functions of the system are software-controlled. 
In order to exercise this control, the software has to be written to “recognize” the behavior of the new components so that the right software commands are issued and the hardware does what it is supposed to do at the right time. Additionally, while making changes to hardware components and software, the contractor discovered carcinogenic beryllium oxide residue on the system during humidity testing. To address this problem, the contractor is now developing and testing aluminum component casings to replace the beryllium casings that had already been developed. Substituting aluminum for beryllium is troublesome because (1) aluminum is weaker and heavier than beryllium and (2) the weight of the radar countermeasures system was already more than 20 pounds over the Army’s requirement even with the use of the lighter beryllium casings. Department officials told us that the insertion of the hardware modifications is not substantial enough to constitute a significant design change and that little risk is associated with the integration of the new hardware with the software and the aircraft. However, based on test results to date and monthly status reports from the Defense Contract Management Agency, we did not find that integrating the new hardware with the software and the aircraft will be a low-risk undertaking. According to departmental guidance for acquiring systems, one of the purposes of low-rate initial production is to produce production-representative articles for initial operational test and evaluation. In our view, a key to assuring that these articles will be production representative is to first conduct developmental testing of the modified software and hardware together as a system in the aircraft to ensure the design is stable before beginning low-rate initial production. 
We believe, therefore, that the Department would decrease its risks by deferring the low-rate initial production decision until the hardware modifications are completed and integrated and the system is found to perform as required. Only the testing of the actual replacement components can provide assurance that the system’s design is stable. Conclusion The Army has identified software and hardware modifications needed for its new radar countermeasures system. The Army expects that future tests will enable it to determine whether the modified software performs as required before the planned low-rate initial production decision in early 2002. However, the testing of the modified hardware is not scheduled for completion until September 2002. By deferring the low-rate initial production decision, the Army would reduce the risk of incurring unanticipated costs to retrofit articles if the system does not perform as required. Recommendation for Executive Action We recommend that the Secretary of Defense direct that the Army defer the low-rate initial production decision until software and hardware modifications are completed and the Army determines that the integrated system, as modified, performs as required. Agency Comments and Our Evaluation Although the Department of Defense concurred with our finding that the Army’s radar countermeasures program has faced technical challenges both in software and hardware, it did not concur with our recommendation. The Department stated that our draft report was incorrect in finding that hardware modifications were being made to correct performance deficiencies. It maintained that the contractor’s hardware modifications are necessary to address cost, parts obsolescence and producibility issues, and the changes are only more technologically advanced form, fit, and function replacements for existing components. We recognize that the purposes of the changes include addressing cost, parts obsolescence and producibility issues. 
Nevertheless, program documentation provided by the contractor and the Defense Contract Management Agency indicates that these changes are also necessary to meet system performance requirements for several components, including the wide-band receiver and the system processor. We also recognize that any replacement component for a system must be form, fit, and function compatible; otherwise it cannot be successfully installed or expected to work in the system. It cannot be automatically assumed, however, that developing these replacement components is low risk simply because they are planned to be form, fit, and function compatible. After receiving the Department’s comments, we acquired updated data from the Defense Contract Management Agency to provide the most current information on the risks associated with the ongoing software and hardware modification process. After reviewing the additional data, we continue to believe that the Department would decrease its risks by deferring the low-rate initial production decision until the hardware modifications are completed and integrated and the system is found to perform as required. Although the Department may well be confident in the ability of the contractor to successfully develop replacement components, it cannot conclude on the basis of the performance of existing hardware components that different, replacement components will be satisfactory. System development has been ongoing for seven years. In our view, it is prudent to take the extra several months to test the actual replacement components with the software and in the aircraft so that the Army can assure itself that the system design is stable before it proceeds to low-rate initial production. 
Scope and Methodology To determine whether the Army’s decisionmakers will have sufficient knowledge about the readiness of the Suite of Integrated Radio-Frequency Countermeasures system for the low-rate initial production decision planned in the second quarter of fiscal year 2002, we analyzed the Army’s modernization, acquisition, and fielding plans for the system and the contractor’s performance reports and other program documentation produced by the Army and the Defense Contract Management Agency. To ensure that we understood the documentation we utilized, we interviewed officials of the Office of the Secretary of Defense, Washington, D.C.; the Department of the Army, Arlington, Virginia; the Program Executive Office for Army Aviation and the Missile and Space Intelligence Center at Redstone Arsenal, Alabama; the Communications and Electronics Command at Fort Monmouth, New Jersey; and the Army Aviation Directorate of Combat Development at Fort Rucker, Alabama. We also interviewed representatives of the Suite of Integrated Radio-Frequency Countermeasures contractor, International Telephone and Telegraph, Avionics Division, in Clifton, New Jersey. We conducted our work from September 2000 through April 2001 in accordance with generally accepted government auditing standards. This report contains a recommendation to you. The head of a federal agency is required under 31 U.S.C. 720 to submit a written statement of actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this letter and to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this letter. We are sending copies of this report to interested congressional committees; the Honorable Joseph W. 
Westphal, Acting Secretary of the Army; and the Honorable Mitch Daniels, Director, Office of Management and Budget. Copies will also be made available to others upon request. If you have any questions regarding this report, please contact me at (202) 512-4841 or Charles A. Ward at (202) 512-4343. Key contributors to this assignment were Dana Solomon and John Warren. Appendix I: Comments From the Department of Defense
The Army is acquiring a new, state-of-the-art radar countermeasures system--called the Suite of Integrated Radio Frequency Countermeasures--to help helicopters and other aircraft identify, track, and defeat radar-guided missiles in complex electronic environments where many radar systems could be operating simultaneously. The Army has identified software and hardware modifications needed for its new radar countermeasures system. The Army expects that future tests will enable it to determine whether the modified software performs as required before the planned low-rate initial production decision in early 2002. However, the testing of the modified hardware is not scheduled for completion until September 2002. By deferring the low-rate initial production decision, the Army would reduce the risk of incurring unanticipated costs to retrofit articles if the system does not work as expected.
Background There are more than 3.9 million miles of roadway in the United States, of which about 3.1 million miles, or about 77 percent, are considered rural roads. Rural roads are defined as those roads that are located in or near areas where the population is less than 5,000. As figure 1 shows, rural roadways make up more than half of the road miles in 44 states. For purposes of this report, rural road data refers to roads in the 50 states. The District of Columbia has no rural roads, and we do not include Puerto Rico’s 8,000 miles of rural roads in our computations. Rural roads fall into several functional classifications. Arterial roads, including interstates, are designed for higher traffic speeds and often have multiple lanes and a degree of access control. Collector roads are designed for lower speeds and shorter trips and generally link areas to arterial roads and interstates. They are typically two-lane roads that extend into residential neighborhoods. Local roads are any roads below the collector system and may be paved or unpaved roadways that provide access to farms, residences, and other rural property. As shown in figure 2, local roads make up the majority of the nation’s rural roads. Rural roads have more fatalities and a greater rate of fatalities than urban roads, when considering vehicle miles traveled. In 2002, of the 42,815 fatalities on the nation’s roadways, 25,849 (60 percent) were on rural roads. Based on miles traveled, the overall fatality rate from traffic crashes on rural roads was about 2.29 fatalities for every 100 million miles traveled, while the urban fatality rate was about 0.97 fatalities for every 100 million miles traveled. Fatalities occurred at higher rates on rural roads with lower roadway functional classifications. As shown in figure 3, during 2002, rural local roads had the highest fatality rate at 3.63 per 100 million miles traveled, while rural interstates had a fatality rate of 1.18. 
In an urban setting, the lowest rate is for urban interstates—0.60 fatalities per 100 million miles traveled—about one-sixth the level of rural local roads. In the past two decades, the total number of fatalities on the nation’s roadways fell from 43,945 in 1982 to 42,815 in 2002. However, during this period, fatalities on rural roadways rose slightly, from 25,005 in 1982 to 25,849 in 2002. As shown in figure 4, during the period from 1982 to 2002, the fatality rate per 100 million vehicle miles traveled on rural roads declined about 37 percent. During the same period, the fatality rate on urban roads declined about 54 percent. FHWA and NHTSA are two agencies within the U.S. Department of Transportation responsible for road safety. FHWA’s mission is to provide financial and technical support to state, local, and tribal governments for constructing, improving, and preserving the highway system. As part of this mission, FHWA seeks to reduce highway fatalities and injuries through research and by implementing technology innovations. In addition, its Office of Safety develops and implements strategies and programs to reduce the number and severity of highway crashes involving both motorized and nonmotorized travelers on the nation’s highways, streets, bicycle and pedestrian facilities, and at intermodal connections. NHTSA’s mission is to reduce deaths, injuries, and economic losses resulting from motor vehicle crashes. The agency sets and enforces safety performance standards for motor vehicles and motor vehicle equipment and provides grants to state and local governments. NHTSA, among other things, also investigates safety defects in motor vehicles, helps states and local communities reduce the threat of drunk drivers, promotes the use of safety belts and child safety seats, and provides consumer information on motor vehicle safety topics. 
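The fatality rates cited above are simple ratios of fatalities to vehicle miles traveled (VMT), expressed per 100 million miles. The short Python sketch below shows the arithmetic; the VMT totals are hypothetical values back-solved from the reported rates for illustration, not figures taken from this report.

```python
# Illustrative check of the 2002 fatality-rate figures cited in this
# section (fatalities per 100 million vehicle miles traveled).

def rate_per_100m_miles(fatalities: int, vmt_millions: float) -> float:
    """Fatalities per 100 million vehicle miles traveled."""
    return fatalities / (vmt_millions / 100.0)

# Fatality counts cited in this section for 2002.
rural_fatalities = 25_849
urban_fatalities = 42_815 - 25_849        # 16,966 urban fatalities

# Assumed VMT (in millions of miles), back-solved so the computed
# rates match the reported 2.29 (rural) and 0.97 (urban).
rural_vmt_millions = 1_128_777            # ~1.13 trillion rural miles
urban_vmt_millions = 1_749_072            # ~1.75 trillion urban miles

print(round(rate_per_100m_miles(rural_fatalities, rural_vmt_millions), 2))  # 2.29
print(round(rate_per_100m_miles(urban_fatalities, urban_vmt_millions), 2))  # 0.97
```

The same formula yields the per-classification rates in figure 3 when applied to each road class's own fatality count and VMT.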
Under the Transportation Equity Act for the 21st Century (TEA-21), NHTSA provided the states with about $2.7 billion for efforts to improve driver behaviors and safety data from fiscal year 1998 through fiscal year 2003. Other organizations such as the American Association of State Highway and Transportation Officials (AASHTO) and the Governors Highway Safety Association also play important roles in highway safety. As an organization representing state transportation departments, AASHTO provides engineers with guidance on how to design safe and efficient roads through a publication referred to as the Green Book. In addition, AASHTO recently published a special guide on alternative designs for very low-volume roads. Furthermore, in 1997 AASHTO focused attention on improving roadway safety by developing a Strategic Highway Safety Plan that identified 22 key or emerging highway safety emphasis areas. Topics included (1) aggressive and speeding drivers, (2) keeping vehicles on the roadway and minimizing the consequences of leaving the roadway, and (3) supporting better state coordination and planning for behavioral and construction programs. For each of these areas, publications are being developed under the National Cooperative Highway Research Program that address the issues and potential countermeasures. Another organization that plays a major role in highway safety is the Governors Highway Safety Association, which represents the highway safety programs of states and territories on the human behavioral aspects of highway safety. Areas of focus include occupant protection, impaired driving, and speed enforcement, as well as motorcycle, school bus, pedestrian, and bicycle safety, and traffic records. Four Factors Contribute to Rural Road Fatalities One or more of four factors contribute to rural road fatalities—human behavior, roadway environment, vehicles, and the degree of care for victims after a crash. 
Human behavioral factors involve actions taken by or the condition of the driver and passengers of the automobile, including the use or nonuse of safety belts, the effects of alcohol or drugs, speeding and other traffic violations, and being distracted or drowsy when driving. Roadway environment factors that contribute to rural road fatalities include the design of the roadway (e.g., medians, lane width, shoulders, curves, access points, lighting, or intersections); roadside hazards (e.g., utility poles, trees, and animals adjacent to the road); and roadway conditions (e.g., rain, ice, snow, or fog). Vehicle factors include vehicle-related failures and vehicle design issues that contribute to a crash and are important in both rural and urban crashes. Lastly, victim care includes the quality of the emergency response and the hospitals that provide medical treatment for those involved in a crash. Several Human Behaviors Contribute to Rural Road Fatalities Several human behaviors contribute to rural road fatalities, including nonuse of safety belts, alcohol-impaired driving, speeding, and being distracted or drowsy when driving. In general, human factors are considered the most prevalent contributors to crashes. Not using safety belts contributes to fatalities in rural crashes. For example, of the approximately 53,000 unrestrained (unbelted) vehicle occupant fatalities that occurred from 2000 through 2002, about 36,000, or 68 percent, occurred in rural areas. NHTSA research on safety belt use in rural areas shows that rural areas are essentially similar to urban areas in safety belt use rates. In 2002, NHTSA data showed about 73 percent belt use in rural areas and 72 percent in urban areas. Alcohol-impaired driving contributed to 27,775 rural road fatalities from 2000 through 2002—about 63 percent of the 44,403 alcohol-related fatalities nationwide. 
While, according to NHTSA data, there is little difference between the blood alcohol concentrations (BAC) of rural and urban drivers involved in fatal crashes, state officials told us that risks from drinking and driving in rural areas are increased because of longer driving distances and the lack of public transportation options available to intoxicated drivers. From 2000 through 2002, about 62 percent of the nation’s speeding-related fatalities were on rural roads, amounting to about 24,000 of the 39,000 fatalities where speed was a contributing factor, according to NHTSA data. According to Insurance Institute for Highway Safety officials, speed influences crashes by increasing the distance traveled from when a driver detects an emergency until the driver reacts; increasing the distance needed to stop; increasing the severity of an accident (i.e., when speed increases from 40 to 60 miles per hour, the energy released in a crash more than doubles); and reducing the ability of the vehicles, restraint systems, and roadside hardware, such as guardrails and barriers, to protect occupants. Drivers who are distracted or drowsy also contribute to rural crashes. For example, a 2002 NHTSA national survey found that drivers involved in a distraction-related crash attributed their distraction to such items as looking for something outside the car (23 percent of drivers in a distraction-related crash), dealing with children or other passengers (19 percent), looking for something inside the car (14 percent), or another driver (11 percent). A Virginia Commonwealth University pilot study of distracted drivers found that for rural drivers in the study, crashes often involved driver fatigue, insects striking the windshield or entering the vehicle, and animal and unrestrained pet distractions. The study found that in urban areas distracted driving crashes often involved drivers looking at other crashes, traffic, or vehicles, or using cell phones. 
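The energy point above follows from basic physics: kinetic energy grows with the square of speed (KE = ½mv²), so the energy ratio between two speeds is (v2/v1)² regardless of vehicle mass. A minimal sketch of that arithmetic:

```python
# Kinetic energy scales with the square of speed, so the ratio of
# crash energy at two speeds is (v2 / v1)**2; the vehicle's mass
# cancels out of the comparison.

def energy_ratio(v1_mph: float, v2_mph: float) -> float:
    """Ratio of kinetic energy at speed v2 to kinetic energy at v1."""
    return (v2_mph / v1_mph) ** 2

print(energy_ratio(40, 60))   # 2.25 -- energy more than doubles
print(energy_ratio(40, 80))   # 4.0  -- doubling speed quadruples energy
```

The 40-to-60 mph case gives (60/40)² = 2.25, consistent with the Institute's statement that crash energy "more than doubles."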
Roadway Environment Factors Contribute to Rural Road Fatalities Roadway factors also contribute to rural road fatalities. Rural roads can be narrow; have limited sight distance due to hills and curves; have small or nonexistent shoulders; have no medians; and may have hazards or objects near the roadway such as trees, utility poles, or animals. As a result of these features, fatal crashes on two-lane rural roads are a significant problem. For example, FHWA reports that over 70 percent of single-vehicle run-off-the-road fatalities occur on rural roadways and that about 90 percent of these were on two-lane rural roads. Similarly, crashes involving vehicles crossing the centerline and either sideswiping or striking the front end of oncoming vehicles are a major problem in rural areas, accounting for about 20 percent of all fatal crashes on rural two-lane roads. In addition, crashes with animals—specifically larger animals such as deer and elk—are also prevalent in rural areas. For example, according to the Deer-Vehicle Crash Information Clearinghouse, there were more than 130,000 deer-vehicle crashes reported in five states in 2000. In addition, a Highway Safety Information System report examined five states’ experiences with motor vehicle collisions involving animals and found that from 1985 through 1990, 74 percent to 94 percent of reported crashes involving animals occurred on rural roads. The report also found that collisions involving animals ranged from about 12 percent to 35 percent of all reported crashes on two-lane rural roads. Rural roadway conditions can also contribute to rural crashes and resulting fatalities. Surface conditions that can impair a driver’s ability to control the vehicle include snow, ice, standing water, and oil, in addition to such road surface features as potholes, ruts, and pavement edge drop-offs. Lack of lighting also contributes to rural road fatalities. 
For example, a study performed for the Minnesota Department of Transportation found that the installation of street lighting at isolated rural intersections reduced both nighttime crash frequency (25 percent to 40 percent) and crash severity (8 percent to 26 percent). Vehicle Design Contributes to Rural Road Fatalities The design of the vehicle can contribute to rural road fatalities. The wide variances in vehicle sizes and weights, as well as vehicle configurations, sometimes result in greater damage and injury to smaller vehicles and their occupants if a collision occurs. For example, when heavy sport utility vehicles (SUV) or pickup trucks collide with smaller cars, the occupants in the lighter and lower vehicles are more likely to die as a result of the crash, particularly if struck in the side. Vehicle design has been shown to affect vehicle handling in particular types of maneuvers. In rural settings this is important because the roads may be narrow and have sharp curves. The design of the vehicle in these types of crashes can make a difference in whether a run-off-the-road vehicle rolls over, one of the most serious types of crashes. Almost three-fourths of fatal rollover crashes occur in rural areas, according to a 2002 NHTSA study. In 2002, rollover crashes killed 10,666 occupants in passenger cars, pickup trucks, SUVs, and vans. A study by the Insurance Institute for Highway Safety that examined single-vehicle rollover crashes concluded that the combined rollover crash rate for pickup trucks and SUVs was more than twice the rate for passenger cars. In addition, a NHTSA study found that in 2002, nearly two-thirds of the 3,995 SUV occupant fatalities occurred in rollover crashes. Lack of Effective and Available Emergency Medical Services Contributes to Rural Road Fatalities Lack of effective and available emergency medical services (EMS) also contributes to rural road fatalities. 
For example, victims did not reach a hospital within an hour of the crash in about 30 percent of the fatal crashes on rural roads, according to NHTSA data for 2002. This compares with 8 percent of the fatal crashes on urban highways where victims did not reach a hospital within an hour. In addition, the Emergency Medical Services Division Chief at NHTSA told us that providing adequate medical care in rural areas is more challenging due, in part, to the lack of trauma services. A 2001 GAO report found that rural areas are more likely to rely on volunteers than on paid staff, and these volunteers may have fewer opportunities to maintain or upgrade their skills with training. According to an opinion survey of state EMS directors in 2000, rural areas received significantly less coverage by emergency medical technicians, paramedics, enhanced 911 services, and emergency dispatchers. Finally, a 1995 Montana study concluded that the absence of an organized trauma care system contributed to preventable deaths from mechanical trauma, including motor vehicle crashes. Federal and State Efforts to Improve Highway Safety Include Rural Roads Each year FHWA and NHTSA provide billions of dollars to states to improve roadways and eliminate roadway hazards, as well as to improve driver behavior. In addition to funding, FHWA and NHTSA provide technical guidance and support for state safety programs and conduct research on roadway safety. Neither agency has specific rural road safety programs, but efforts to improve rural road safety are generally included within programs that address broader aspects of highway construction or highway safety. The states are ultimately responsible for deciding on the use of the funding provided. The five states we contacted funded projects that improved rural road safety. However, not all the states could identify all funds used for rural road safety because the data were not collected or maintained in that manner. 
Therefore, it is not possible to determine the relative emphasis that states place on rural road safety and whether the emphasis has changed over time. Funding Is Provided to States to Eliminate Roadway Hazards and Improve Driving Behavior but Portion Used for Rural Safety Is Unknown FHWA and NHTSA provide the states funding to support a variety of programs, part of which was used to improve rural road safety. In fiscal year 2003, FHWA provided states and the District of Columbia with about $27.4 billion in federal-aid highway funds. Under TEA-21, from fiscal year 1998 through fiscal year 2003, federal-aid highway funding totaled about $167 billion. States use these funds to, among other things, construct new roadways; maintain the interstate highway system through resurfacing, restoring, rehabilitating, or reconstructing activities; and replace or rehabilitate highway bridges. While many of these highway improvement projects may include safety features that affect rural roads, the safety features are not specifically segregated for reporting purposes. For example, expanding a stretch of roadway to ease congestion could have the added effect of improving safety but could be reported as reconstruction or rehabilitation, depending on the actual project. In addition, construction projects may include items that can improve or upgrade safety features, such as installing new guardrails or impact barriers, but may not be identified or accounted for as a safety improvement. However, the federal-aid highway funds include two specific safety programs—Hazard Elimination and Rail-Highway Crossings—that can be used for rural road safety improvements. In addition, NHTSA also provided states with funds under TEA-21 to address driver behaviors. As shown in figure 5, under TEA-21, from fiscal year 1998 through fiscal year 2003, FHWA and NHTSA provided states about $6.7 billion specifically to improve roadway safety and improve driver behavior. 
From fiscal year 1998 through 2003, under TEA-21, FHWA provided about $4 billion to states specifically for highway safety construction under two programs—Hazard Elimination and Rail-Highway Crossing Programs. Highway safety projects built with these funds include construction projects to eliminate highway design hazards, such as narrow lanes or sharp curves; improve intersections; or improve rail-highway grade crossings. Under these programs, states can spend funds to address safety construction issues on any public state or local roadway. Nationwide, about $1.4 billion, or 49 percent, of the funds spent by states were used for rural purposes. For fiscal year 2003, about $648 million went to the states for hazard elimination and highway-rail crossings programs—about $330 million of which went to improve rural road safety. Under TEA-21, from fiscal year 1998 through fiscal year 2003, NHTSA provided about $2.7 billion to states and the District of Columbia for programs addressing driving behavior through formula grants, incentive grants, and penalty transfer funds. (See fig. 6.) Under the formula grants program, about $859 million was provided to the states to carry out traffic safety programs designed to influence drivers’ behavior in such areas as safety belt use, alcohol-impaired driving, regional traffic safety initiatives, traffic records and safety data collection systems, and pedestrian safety. Incentive grants of about $1.2 billion under TEA-21 were provided to states for achieving improvements in safety belt use, reducing drunk driving, and improving highway safety data. Penalty transfer of funds was required under TEA-21 for states that did not adopt specific laws prohibiting open alcohol containers in passenger compartments or setting minimum penalties for repeat drunk driving offenders. 
Under these requirements, states that are currently subject to either penalty must transfer 3 percent of their federal-aid highway construction funds to the NHTSA programs. The transferred funds can be used to support behavioral programs to limit drunk driving or can be spent on highway hazard elimination projects. In fiscal year 2004, 23 states were subject to one or both penalty transfer programs. From fiscal year 2001, when the penalties began, through fiscal year 2003, about $637 million was transferred under this program. NHTSA does not collect information on the funds used for rural roads because it is difficult to distinguish between the urban and rural benefits of many efforts, such as drunk driving television or radio spots or billboard ads.

FHWA and NHTSA Provide Technical Guidance and Support for State Safety Programs that Include Rural Road Projects

FHWA provides safety training and technical assistance to state and local governments, some of which pertains to rural road safety. For example, FHWA’s National Highway Institute offers training for state transportation department staffs. Some training focuses on rural road safety issues, such as the 3-day course entitled “Safety and Operational Effects of Geometric Design Features on Two-Lane Rural Highways,” which addresses the safety impacts of highway features like lane and shoulder width, curves, and intersection designs. FHWA also offers training and technical assistance to states and others through its Resource Center offices in Baltimore, Chicago (Olympia Fields), Atlanta, and San Francisco. For example, in 2003, the Safety and Design National Technical Service Team from the Chicago center conducted 23 different workshops, some of them multiple times, for state and local officials. One Resource Center activity that pertained to rural roads was a 1-day workshop on low-cost safety improvements, which addressed more than 40 improvement measures and how they might reduce crashes. 
FHWA also offers training to local communities through its Local Technical Assistance Program. Under this program, FHWA established a center in every state to provide technical assistance to local highway program managers. In addition, seven centers have been established to provide technical assistance to tribal governments. The centers provide training courses, outreach visits, newsletters, and technical resources to local highway managers. Program officials said there is constant demand for a number of safety-related courses, with topics including road safety fundamentals, road safety audits, data collection, safety management systems, and construction zone flagger training. In addition, FHWA, along with the Federal Transit Administration, has funded a Safety Conscious Planning training course offered to state DOT officials and others that helps them integrate safety as a key planning factor. Lastly, FHWA provides guidance to states by issuing standards for traffic signs and signals in a publication called the Manual on Uniform Traffic Control Devices. The manual sets minimum standards for such matters as traffic sign size, placement, support, and nighttime visibility. In 2000, FHWA revised the manual to include a new section called “Traffic Control Devices for Low-Volume Roads.” NHTSA provides technical assistance to state traffic safety programs through its 10 regional offices. This assistance does not focus specifically on rural road safety but rather is intended to help states identify their most important traffic safety problems, establish goals and performance measures, and review annual safety plans and reports. NHTSA regional offices provide training programs for state safety officials and encourage them to participate in national programs like the “Click It or Ticket” safety belt campaign. 
NHTSA staff from the regional offices and headquarters also provide technical assistance to rural and other areas of the states by participating in or supporting state assessments and forums on safety topics like safety belt use, impaired driving, or data improvements. For example, NHTSA’s Region III provided local governments in its five states and the District of Columbia with a communication kit for conducting a sobriety checkpoint campaign. The kit included background information on drinking and driving, suggestions for core messages that the localities could share with news organizations, sample news releases for increasing public awareness of drunk driving and the checkpoint campaign, and suggestions for preparing op-ed articles in local newspapers. In addition, NHTSA published the “Partners for Rural Traffic Safety Action Kit” in 2001, in conjunction with the National Rural Health Association. This action kit is based on the experience of 15 rural community demonstration sites that conducted 30-day campaigns to increase safety belt use. The association developed, tested, and revised a step-by-step guide based on a community development process model and created the Action Kit, which is available online and through NHTSA’s resource center. In fiscal years 2003 and 2004, the Congress also provided NHTSA $3 million to support state efforts to increase safety belt use in minority, teen, and rural populations. Two initiatives to address rural populations are under way. One involves a 3-year demonstration program that tests community-based infrastructure development and delivery systems to increase rural safety belt use. Demonstration projects are being conducted in Michigan, Tennessee, Wisconsin, and Wyoming. The second is a 2-year program designed to demonstrate the impact of various strategies to increase safety belt use among pickup truck occupants, with concentrated activities in rural areas. 
This demonstration program includes Arkansas, Louisiana, New Mexico, Oklahoma, Texas, and Indian Nations. NHTSA has also been involved with the “First There, First Care” program to increase bystander care for the injured. NHTSA, the Department of Health and Human Services’ Health Resources and Services Administration, and the American Trauma Society developed this program to give motorists the information, training, and confidence to provide basic lifesaving care at the scene of a crash, increasing the chances of survival for crash victims. Distribution of the program and its materials to states and others has focused on rural implementation.

FHWA and NHTSA Conduct Research That Includes Rural Road Safety Issues

In 2003, FHWA budgeted $10.9 million, or about 12 percent of its research budget, for highway safety research and technology. This research addressed four key safety topics: run-off-the-road crashes, intersection crashes, pedestrian and bicyclist safety, and speed management. From a rural roadway perspective, research on run-off-the-road and speed-related crashes is particularly relevant. Over 70 percent of single-vehicle run-off-the-road fatalities occurred on rural roadways, and, according to an NHTSA official, in 2001 over 80 percent of fatalities at speeds of 55 miles per hour or higher occurred in rural areas. Many safety research efforts apply to both rural and urban roads, but FHWA’s work on the Interactive Highway Safety Design Model specifically addressed two-lane rural roads. This computer model provides a means of measuring the safety and operational impacts of various design decisions that might be used in stretches of two-lane roadway. It is anticipated that state and local highway planners and designers will use the model to help them evaluate various construction and improvement options. FHWA also provides funding for highway research by others. 
For example, under TEA-21, from fiscal year 1998 through fiscal year 2003, FHWA provided states $3.1 billion for Statewide Planning and Research. Under this program, TEA-21 required that the states use at least 25 percent of these funds, or $769 million, for transportation research, which includes conducting research on improving highway safety. Two of the states we visited provided examples of such research. For example, Texas sponsored research into crashes on low-volume rural two-lane highways and potential alternatives to avoid them, and Minnesota sponsored research on driver response to rumble strips and innovative research to address lane departures and intersection collisions, both safety issues on the state’s rural roads. FHWA has also provided funding through the states for the National Cooperative Highway Research Program, conducted by the National Research Council, which has been working on a safety design model for multilane rural roads and a Highway Safety Manual that would provide commonly accepted safety guidance on rural and urban highway design. NHTSA conducts research that addresses both driver behavioral and vehicle safety issues. NHTSA’s behavioral highway safety research program had a 2003 budget of $7.4 million. It focused on areas such as impaired driving, occupant protection, pedestrians, bicyclists, and motorcycle riders. According to NHTSA officials, their research generally addresses safety problem areas rather than rural or urban localities, but the results may be applicable to both rural and urban areas. Furthermore, in 2003, NHTSA’s vehicle safety research program received $69 million to, among other things, collect and analyze crash data. The Fatality Analysis Reporting System (FARS) tracks fatality data at a cost of about $5.7 million per year, and the General Estimates System provides descriptive statistics about traffic crashes of all severities at a cost of up to $3 million per year, according to NHTSA officials. 
States Are Responsible for Identifying and Implementing Improvements to Rural Road Safety

While DOT provides states with funding, research, oversight, and guidance, ultimately states are responsible for identifying and addressing their roadway safety problems. The five states we visited had plans and initiatives that addressed what they determined to be their most important safety problems on all roadways, including rural roads. State efforts to improve rural road safety include eliminating rural roadway hazards through construction projects to widen lanes and shoulders and through lower-cost approaches, such as adding shoulder and centerline rumble strips, expanding clear zones along the roadways, installing intersection beacon lights, and improving signage and road markings. In addition, each state had programs that attempted to alter driver behavior through such efforts as increasing enforcement of traffic laws and conducting community awareness campaigns that include the use of paid advertising on television and radio. Two states also increased enforcement by conducting sobriety checkpoints. All but one of the five states were unable to provide details on all the funds used to address rural road safety because data were not collected and maintained in that way. Most state officials we spoke with supported the flexibility they currently have to use the funds provided in the areas they determine are most important and did not favor having a separate rural road program or initiative. However, one official in Pennsylvania told us that having a separate rural road program would help bring needed attention to rural road safety. The following are examples of rural-related projects supported in the five states we visited. Appendix II has additional information on the funding received by these states and the activities they support. California—The California Highway Patrol is leading a task force that is examining the safety of all state corridors based on fatality and accident data. 
This effort has identified 20 high-risk corridors in the state, of which 16 were two-lane roads, with a majority of the corridors in rural areas. The task force is responsible for making both infrastructure and behavioral improvement recommendations to address the safety issues on these high-risk corridors. In addition, California is supporting a Traffic Collision Reduction on County Roads Project. For this effort, the Highway Patrol received $1.9 million from the California Office of Traffic Safety to reduce crashes on county roads by increasing enforcement against the traffic violations that often lead to collisions: speeding, right-of-way violations, failing to drive on the right half of the road, improper turning, and driving under the influence of alcohol or drugs. California also uses sobriety checkpoints to discourage drinking and driving. Georgia—Using FHWA hazard elimination funding, the state has undertaken several roadway improvement programs that address aspects of rural road safety. For example, Georgia identified four problem areas that it focused on in 2003—run-off-the-road crashes, intersection crashes, car-train crashes, and animal crashes. A Georgia official said that the run-off-the-road and animal crashes were particularly prevalent in rural settings. The official said that the state is adding shoulder rumble strips and centerline reflectors to help reduce run-off-the-road crashes and, to reduce animal crashes, is expanding the recovery zone beyond the clear zone along some roads, culling deer herds, and researching light and sound devices to warn drivers of deer presence. Minnesota—State traffic safety officials have implemented several construction and behavioral initiatives to improve rural road safety. 
The “Towards Zero Deaths” initiative, for example, is an ongoing collaborative program among the Minnesota Department of Transportation, the Department of Public Safety, the State Patrol, and local safe community organizations to provide grants to localities that work with safety officials to develop a plan to reduce traffic fatalities. In addition, the state Department of Transportation completed a statewide audit of intersections and corridors in 2003. The audit identified and ranked the top 200 intersections and 150 corridors with the highest crash costs. Rural areas accounted for 54 of the intersections and 53 of the corridors. The Department of Transportation’s goal is to address 40 of these high crash cost intersections and corridors for safety improvements each year in the State Transportation Improvement Plan. Further, the Department of Transportation has made extensive use of shoulder rumble strips and is beginning to use centerline rumble strips on two-lane roadways. Pennsylvania—Pennsylvania has installed 300 miles of centerline rumble strips on rural roadways in an effort to help warn drivers that they have strayed from their lane. State transportation officials estimated that rumble strips could reduce vehicle run-off-the-road crashes by 25 percent. In addition, Pennsylvania implemented a Tailgating Treatment program in which dots are painted on the state’s rural roadways to help drivers determine a safe following distance. Pennsylvania officials told us they also funded over 100 rural projects that focused on improving occupant protection, reducing impaired driving, and supporting community traffic safety efforts, and they conducted 722 sobriety checkpoints and DUI roving patrols during fiscal year 2002. Texas—For fiscal year 2004, the state identified 235 hazard elimination projects that it plans to undertake, most of which were on rural roads. 
These projects, totaling $43.4 million, include such improvements as adding intersection beacon lights, widening lanes, and adding rumble strips to roadways. In addition, district engineers assessed 30,000 miles of rural two-lane highways in 2003, checking the appropriateness of speed limits and the condition of signs and pavement markings and assessing pavement edge drop-offs and curve warnings. Based on these assessments, changes will be made to address the most important findings.

Many Challenges Hinder Efforts to Improve Rural Road Safety

Many challenges hinder efforts to improve rural road safety. For example, some states have not adopted the most effective safety belt use and impaired driving laws. In addition, the sheer volume of rural roads and the low volume of traffic on some of them, combined with the high cost of major construction improvements, make it difficult to rebuild rural roads with safer designs. Also, while states can use federal highway funds provided for hazard elimination and rail-highway crossing safety improvements on any public roads or public crossings, most of the federal-aid highway funds cannot be used on certain rural roads—the rural minor collector and rural local roads. In addition, most rural roads are not state owned but rather are the responsibility of municipalities, counties, or townships, which may have limited resources. Further, some states lack information upon which to make informed decisions on potential road safety solutions, regardless of whether the road is rural or urban. Lastly, reducing fatalities on rural roads is made more difficult by limitations in emergency medical services in rural areas. Several proposals that the Congress is considering could potentially improve rural road safety. 
Some States Have Not Enacted Laws on Safety Belt Use and Drinking and Driving

While the Congress has provided incentives and penalties to encourage states to pass various laws to increase safety belt use and reduce drinking and driving, many states have not done so. These two factors are particularly important given that, from 2000 through 2002, victims in more than 36,000 rural fatalities resulting from passenger car, light truck, or van crashes were not using safety belts, and more than 27,000 rural fatalities were identified as alcohol related. While these laws are not directed specifically at rural road safety, the issues they address are applicable to all types of roadways. According to a report by the Advocates for Highway and Auto Safety, as of January 1, 2004: Thirty states have not enacted primary safety belt laws, which allow police officers to pull over and cite motorists exclusively for the infraction of not using their safety belts. Twenty-nine of these states have enacted secondary safety belt laws, which allow police to issue a safety belt citation only if the motorist is pulled over for another infraction, such as speeding or an expired license tag. One state allows occupants over 18 to not use safety belts. As noted in our prior report, states with secondary enforcement laws can increase safety belt use, but their success is limited by the difficulty of effectively enforcing the law. Fourteen states have not enacted laws consistent with federal requirements for prohibiting open alcohol containers in motor vehicles. Open container laws prohibit the possession of any open alcoholic beverage container or the consumption of any alcoholic beverage in the passenger area of a motor vehicle. In addition, 14 states have not enacted laws consistent with the federal requirement for penalizing repeat drunk driving offenders. Taken together, 23 different states have not enacted laws that are consistent with at least one of these two program requirements. 
Three states have not established .08 blood alcohol concentration (BAC) as the legal limit for drunk driving. In 2000, the Congress provided that states that did not do so would have 2 percent of their federal-aid highway funds withheld in 2004. The penalty grows to a high of 8 percent in 2007. States adopting the standard by 2007 would be reimbursed for any funds withheld.

Safety Improvements to Rural Roads Limited by the Combination of the Millions of Miles of Rural Roads, Low Volume of Traffic, and High Cost of Construction

Due to the extensive size of the rural highway system, the low volume of traffic on many rural roads, and the high costs that would be incurred to make major safety changes, state and local governments find it difficult to undertake major safety construction programs on some rural roads. As a result, lower-cost alternatives are pursued to improve rural road safety in many situations. Of the 3.9 million miles of the nation’s road system, rural roads account for about 3 million miles (about 77 percent). In addition, most of the rural mileage is in the lowest functional class of rural roads—local rural roads—which account for about 68 percent of rural road mileage (about 2.1 million miles). While making up three-fourths of the nation’s road system, rural roads overall carry only about 40 percent of the traffic, with rural local roads carrying about 5 percent. Although use of rural roads is low, the costs associated with major construction projects on rural roads are high. For example, FHWA’s Highway Economic Requirements System model estimates the cost of widening 11-foot lanes to 12-foot lanes at about $186,000 per mile—over five times the cost of resurfacing the 11-foot lanes. In addition, an official from FHWA’s Kentucky Division Office told us it would cost about $200,000 to $250,000 per mile to widen low-volume rural roads by 1 foot. 
Further, a Transportation Research Board report noted that providing wider cross-sections (wider lanes, wider full-strength shoulders, and 100 percent passing sight distance) on a two-lane roadway could cost from about $1 million to $3 million per mile. As a result, low-cost improvements are an option to be considered for many rural roads. For example, FHWA has identified more than 40 low-cost improvements that states can use on rural roads at high-crash locations. Examples include installing rumble strips on roadways, moving trees or utility poles away from the roadway, adding or improving roadside signs, and adding lighting or flashing beacons to intersections and rail-highway grade crossings. See appendix III for more information on the low-cost alternatives.

States Are Limited in Using Federal-Aid Highway Funds for Certain Rural Roadways

Because of program requirements, states cannot use all categories of federal-aid highway funds for certain rural roads. These limitations specify that funds used for constructing new roadways or conducting major renovations of roadways cannot be used for rural local roads, rural minor collectors, or urban local roads. These program restrictions were made to ensure that the interstate highway system and other roads with higher expected traffic have adequate funds to meet the transportation needs of the public, according to an FHWA official. While some other federal-aid highway funds are available for all rural roads, such as the Hazard Elimination and Rail-Highway Crossing Programs within the Surface Transportation Program, these roadways receive significantly less funding per mile than their urban counterparts. As shown in table 1, of the $30 billion provided to states in fiscal year 2002, about $12.1 billion went to all rural roads, with $541 million going to rural local roads. 
States are also challenged in making improvements in rural road safety because, in most states, large portions of rural roads are not directly under the responsibility of the state but rather fall under the jurisdiction of counties, municipalities, or townships. Nationwide, about 78 percent of all rural roads (2.4 million of the nation’s 3.1 million rural miles) are not owned by the states. About 93 percent (about 2.0 million miles) of rural local roads are not under state jurisdiction. In 45 states, jurisdictions other than the state own 75 percent or more of the rural local roads. (See fig. 7.) Some local officials in states we visited said they were challenged to make costly rural road construction improvements without finding other sources of funds to supplement those provided by states, such as issuing bonds or increasing local taxes. In addition, a study for the National Cooperative Highway Research Program noted that many of the roads most in need of roadside safety improvements are under the control of local governments that have the fewest resources to address the needs.

Information Lacking on Crashes and the Effectiveness of Countermeasures Used

Accurate, timely crash data are important for planning future urban and rural highway safety programs and assessing the impacts of recent projects or programs to improve safety. States rely on crash data from fatal crashes, injury crashes, and property-damage-only crashes to identify safety problems and plan safety improvements. Some states we visited identified problems with their crash data systems and were trying to make their crash data more accurate, complete, and timely. For example, Texas is about 2½ years behind in entering crash data from police accident reports into its data system. State officials pointed out that without timely data, it is difficult to determine if the actions taken on a stretch of road had the intended effect. 
To make the data more timely, Texas plans to have a new system in place by fiscal year 2005, at a cost of $14 million. The new Texas system would encourage local law enforcement agencies to collect, validate, and report crash data electronically. It would also provide centralized analysis, review, and data reporting to agencies that plan and conduct state highway safety programs. Georgia modified its crash data processing in 1998, but the changes were not successful, according to a Georgia State Auditor’s report. In 2001, a new agency took over the crash data system and, after a data recovery effort, eliminated a multiyear backlog of crash data reporting by 2003. In addition, California is testing a system that would allow data recorded by police to be reported directly into a database through handheld electronic systems, thereby speeding the availability of the information. The information would be recorded in the Statewide Integrated Traffic Reporting System database that is used to help traffic safety officials select safety initiatives.

Difficulties in Providing Adequate Emergency Medical Services

Reducing rural road fatalities is also hampered by the difficulty of providing prompt emergency medical services in rural settings. For example, we reported in 2001 that state and local officials told us that rural areas are less likely than urban areas to have 911 emergency dialing, and communications between dispatchers or medical facilities and emergency vehicles are more likely to suffer from “dead spots”—areas where messages cannot be heard. The report also found that rural areas are more likely to rely on EMS volunteers rather than paid staff, and these volunteers may have fewer opportunities to maintain or upgrade their skills with training. 
In addition, the report noted that officials from national associations representing EMS physicians have indicated that long distances and potentially harsh weather conditions in rural areas can accelerate EMS vehicle wear and put these vehicles out of service more often. Survivability after a crash decreases as the time required for an injured person to receive medical treatment increases. Further, according to an Organization for Economic Cooperation and Development report, rapid trauma treatment is critical during the seconds and minutes that immediately follow a crash. The report noted that the risk of dying before medical attention can be provided increases as the crash location is farther removed from trained rescue staff and trauma medical facilities. A study of fatalities in Michigan also highlights the impact of providing emergency care in rural areas. The study found that of 155 fatalities in 24 Michigan rural counties in 1995, 12.9 percent were definitely or possibly preventable if rapid and appropriate emergency treatment had been available.

Proposals Being Considered to Improve Roadway Safety

Congress is considering legislation that includes proposals to improve highway safety, including safety on rural roads. The proposals include two bills for the reauthorization of TEA-21: (1) the Safe, Accountable, Flexible, and Efficient Transportation Equity Act of 2004 (SAFETEA), S. 1072, passed by the Senate in February 2004, and (2) the Transportation Equity Act: A Legacy for Users (TEA-LU), H.R. 3550, passed by the House in April 2004. Each of these proposals has features that could affect highway safety and, in some cases, directly address rural roads. Incentives for Enacting Stronger State Traffic Safety Laws. Safety belt use and impaired driving are important factors in rural road fatalities. S. 
1072 would provide grants to states for enactment of primary safety belt laws and would reward those states that already have such a law. The proposal offers a maximum of $600 million in potential grants to states that enact and retain primary laws. H.R. 3550 requires states that do not comply with federal open-container requirements or federal requirements for penalizing repeat drunk driving offenders to transfer 3 percent of certain federal-aid highway program funds to their Section 402 State and Community Grants Program. H.R. 3550 also requires the transfer of 3 percent of certain federal-aid highway funds to Section 402 programs in states that have not enacted a primary seat belt law or achieved 90 percent belt usage. In addition, H.R. 3550 includes a penalty provision that requires the withholding of 2 percent to 8 percent of certain federal-aid highway funds if a state has not enacted a law establishing .08 blood alcohol content as the legal limit for drunk driving. Finally, H.R. 3550 provides 1 year of additional funding for seat belt and drunk driving incentive grants. S. 1072, for its part, proposes to withhold 2 percent of certain highway construction funds from states that have not enacted open-container laws for fiscal years 2008 to 2011. Direct Funding for High-Risk Rural Roads. Poor roadway design can contribute to rural road fatalities. H.R. 3550 would authorize $675 million over 6 fiscal years for safety projects on high-risk rural roads. States could use this federal funding to improve the safety of rural major collectors, rural minor collectors, or rural local roads that have, or are expected to have, higher than average statewide fatality and incapacitating injury rates. New Highway Safety Improvement Program. Both S. 1072 and H.R. 
3550 contain provisions for a new highway safety improvement program to replace the current statutory requirement that states set aside 10 percent of their Surface Transportation Program funds for carrying out Hazard Elimination and Rail-Highway Crossing Programs. S. 1072 would authorize $8.2 billion over 6 years for the program and H.R. 3550 proposes a level of $3.3 billion over 5 years. S. 1072 requires states to have crash data systems and the ability to perform safety problem identification and countermeasure analysis to use safety improvement funds. Both bills maintain state flexibility to use safety improvement funds for safety projects on any public road or publicly owned bicycle or pedestrian pathway or trail or public surface transportation facility. In both bills, states must identify roadway locations, sections, and elements that constitute a hazard to motorists, bicyclists, pedestrians, and other highway users and develop and implement projects to address the hazards identified. Enhanced Federal Funding for State Safety Data. Some of the states we visited had identified weaknesses in their highway data systems. S. 1072 and H.R. 3550 would each create a new State Traffic Safety Information System Improvement grant. Funding would be authorized at $45 million per year under S. 1072 and $24 million to $39 million per year (for 5 fiscal years—2005 through 2009) under H.R. 3550. Larger states could qualify for larger grants, but the minimum grant amount would be $300,000 per year. By comparison, federal funding for data improvement grants under TEA-21 was never more than $11 million per year and was only available in fiscal years 1999 through 2002. H.R. 3550 also allocates $4 million from NHTSA research authorizations to further develop a transportation safety information management system to provide for the collection, integration, management, and dissemination of safety data for state and local safety agencies. Proposals for New Safety Research. S. 
1072 and H.R. 3550 would fund strategic highway research programs. S. 1072 would provide $450 million for this purpose and H.R. 3550 would provide $329 million. According to the related NCHRP planning study, 40 percent of the funds ($180 million) would support safety research. The goal of this safety research is to prevent or reduce the severity of highway crashes through more accurate knowledge of crash factors and of the cost-effectiveness of selected countermeasures in addressing these factors. The research plan focuses on road departure and intersection collisions, which represent 58 percent of traffic fatalities. Comprehensive Highway Safety Planning. S. 1072 requires states to develop and implement strategic highway safety plans that are comprehensive, data driven, and based on a collaborative process involving state and local safety stakeholders. The plans must address all aspects of highway safety (infrastructure, driver behavior, motor carrier, and emergency medical services) and must be based on improved crash data collection and analysis. While not directed specifically at rural road safety, the collaborative process required by this provision provides an opportunity for local rural officials and leaders to participate in developing the goals and investments included in the plan. H.R. 3550 would encourage comprehensive safety planning for both behavioral and construction safety programs. Flexibility in Moving Funds between FHWA and NHTSA Programs. S. 1072 allows states to use up to a quarter of their Highway Safety Improvement Program funds for behavioral projects, if the projects are included in a state comprehensive highway safety improvement plan. Improving Emergency Medical Systems. The presence of timely, competent medical attention has been shown to reduce rural and other traffic fatalities. S. 
1072 would create an Emergency Medical Services grant program to provide funds to state EMS offices for conducting coordinated EMS and 911 programs. S. 1072 would provide $5 million annually and would create a Federal Interagency Committee on Emergency Medical Services to coordinate federal agencies' involvement with state, local, tribal, or regional emergency medical services and 911 services and to identify the needs of those entities. Agency Comments and Our Evaluation We provided copies of a draft of this report to the Department of Transportation for its review and comment. The department generally agreed with the report's contents and provided some technical comments, which we incorporated where appropriate. In discussing this report, agency officials noted that safety should be part of every project designed and built with federal-aid funds. We are sending copies of this report to the Secretary of Transportation, the Administrator of the National Highway Traffic Safety Administration, the Administrator of the Federal Highway Administration, and to interested congressional committees. We will also provide copies to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about the report, please contact me at (202) 512-2834. Key contributors to this report were Samer Abbas, Rick Calhoon, Colin Fallon, Sara Moessbauer, Stacey Thompson, and Glen Trochelman. Objectives, Scope, and Methodology The Conference Report accompanying the 2003 Consolidated Appropriations Resolution directed us to review aspects of rural road safety and report to the House and Senate Appropriations Committees. To meet this requirement, we identified (1) factors contributing to rural road fatalities, (2) federal and state efforts to improve safety on the nation's rural roads, and (3) challenges that may hinder making improvements in rural road safety. 
To identify the factors contributing to rural road fatalities, we supplemented an earlier GAO report, Highway Safety: Research Continues on a Variety of Factors That Contribute to Motor Vehicle Crashes (GAO-03-436, March 2003), with information from the Federal Highway Administration, the National Highway Traffic Safety Administration, and other organizations with knowledge of this issue, such as the National Association of Counties and the American Association of State Highway and Transportation Officials. We also reviewed studies identifying factors that contribute to rural road fatalities. For each of the selected studies used in this report, we determined whether the study's findings were generally reliable. To do so, we evaluated the methodological soundness of the studies using common social science and statistical practices. For example, we examined each study's methodology, including its limitations, data sources, analyses, and conclusions. In addition, we updated the earlier report by obtaining more current information on traffic deaths from NHTSA's Fatality Analysis Reporting System (FARS). This database provides information on all traffic-related fatalities. Each state provides NHTSA fatality data in a standardized format. To be included in the database, a crash must result in the death of an occupant or nonmotorist within 30 days of the incident. The states obtain this information from such sources as police reports, vehicle registration files, state driver licensing files, death certificates, coroner or medical examiner reports, and hospital records. It should be noted that while fatality data are useful in understanding crashes, factors beyond those that caused the crash might have contributed to the fatality, including whether safety belts or other occupant protection measures were used and functioned properly. 
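FARS's inclusion criterion, described above, turns entirely on whether a death occurred within 30 days of the incident. A minimal sketch of that rule follows; the function name and fields are illustrative, not FARS's actual schema:

```python
from datetime import date
from typing import Optional

def is_fars_reportable(crash_date: date, death_date: Optional[date]) -> bool:
    """A crash enters FARS only if an occupant or nonmotorist
    dies within 30 days of the incident."""
    if death_date is None:
        return False  # no fatality recorded, so not a FARS case
    return 0 <= (death_date - crash_date).days <= 30

# A death 10 days after the crash qualifies; one 45 days later does not.
print(is_fars_reportable(date(2002, 3, 1), date(2002, 3, 11)))  # True
print(is_fars_reportable(date(2002, 3, 1), date(2002, 4, 15)))  # False
```

The 30-day window is the defining cutoff: a death on day 31 would appear in state records but would not make the crash a FARS case.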
Before using these data, we assessed the reliability of the FARS data by reviewing the data for obvious errors in accuracy and completeness, reviewing existing information about the data, and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. Further, in providing information on factors contributing to rural road fatalities, we identified fatalities per 100 million vehicle miles traveled. To do so, we used vehicle miles traveled data maintained by FHWA in its Highway Performance Monitoring System (HPMS). This system is a national-level highway information system that includes data on the extent, condition, performance, use, and operating characteristics of the nation's highways. In general, HPMS contains administrative and extent-of-system information on all public roads. The HPMS obtains vehicle-miles-traveled data from each state, and states have different methods for collecting certain travel information. We assessed the reliability of the HPMS data by reviewing the data for obvious errors in accuracy and completeness, reviewing existing information about the data, and interviewing agency officials knowledgeable about the data. There are certain limitations associated with using these data. For example, the quality of the data in the system relies on state data collection techniques. HPMS guidance is flexible, so each state has its own approach, and some approaches do not require annual revisions. In addition, vehicle-miles-traveled data may not be comparable from state to state. However, we determined that the data were sufficiently reliable for the purposes of this report. To identify federal and state efforts to improve rural road safety, we interviewed and obtained documentation from officials in the Federal Highway Administration and the National Highway Traffic Safety Administration. 
In addition, we reviewed state use of safety funds by meeting with safety officials in five states. We selected Minnesota, which DOT officials recommended as having a good rural road safety program, and the four states with the highest rural vehicle miles traveled: California, Georgia, Pennsylvania, and Texas. In each of these locations we met with state officials responsible for the FHWA and NHTSA programs, as well as some officials at the local level. We also reviewed recently issued guides, models, and training programs intended to help traffic safety officials improve their rural road safety programs, such as the Transportation Research Board's National Cooperative Highway Research Program 500 Report series, which serves as guidance for implementing the American Association of State Highway and Transportation Officials' Strategic Highway Safety Plan. To identify challenges that hinder making improvements in rural road safety, we interviewed the federal and state officials identified above and contacted experts from academia and advocacy groups. In addition, we attended a Rural Road Safety Roundtable in West Virginia at which participants discussed challenges facing rural road safety. We relied on NHTSA and a report by Advocates for Highway and Auto Safety to identify the status of the 50 states' compliance with various federal highway safety statutes. We also reviewed various legislative proposals that may help address these issues. The legislative proposals included bills for the reauthorization of TEA-21: (1) S. 1072, the Safe, Accountable, Flexible and Efficient Transportation Equity Act of 2004 (SAFETEA), which the Senate passed, and (2) H.R. 3550, the Transportation Equity Act: A Legacy for Users (TEA-LU), which the House passed. We also reviewed the administration's proposal, the Safe, Accountable, Flexible, and Efficient Transportation Equity Act of 2003; the Senate Committee on Commerce, Science and Transportation bill S. 
1978, the Surface Transportation Safety Reauthorization Act of 2003; and the House Committee on Science bill H.R. 3551, the Surface Transportation Research and Development Act of 2004. However, because the Senate and House passed S. 1072 and H.R. 3550, respectively, we did not include S. 1978 or H.R. 3551 in the report. We performed our review from July 2003 through April 2004 in accordance with generally accepted government auditing standards. Examples of State Activities to Improve Rural Road Safety We obtained information from five states (California, Georgia, Minnesota, Pennsylvania, and Texas) on the number of fatalities on their roadways, the federal funding they receive for safety purposes, and a description of the types of projects these funds support. California During 2002, 1,713 people were killed on rural roads in California, the second-highest total in the nation. When adjusted for miles traveled, California's fatality rate on rural roads was about 2.67 fatalities per 100 million vehicle miles traveled, greater than the national average of 2.29. Rural fatalities accounted for approximately 42 percent of all state roadway fatalities in 2002. In fiscal year 2003, California was provided over $2.5 billion in federal-aid highway funds. About $60.5 million of these funds were provided for Hazard Elimination and Rail-Highway Crossing programs. These programs provided construction-related safety improvements on public roads, transportation facilities, and bicycle or pedestrian pathways or trails, as well as rail-highway crossing safety programs. California also received about $100.4 million in fiscal year 2003 to improve roadway safety through a variety of activities designed to influence driving behavior. About $47.5 million of the funds California received were transferred from the state's federal-aid highway program because the state's repeat offender law did not meet federal standards. 
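The fatality rates quoted for each state normalize deaths by travel: fatalities per 100 million vehicle miles traveled (VMT). The sketch below reproduces California's 2002 rural rate; note that the roughly 64.2 billion rural VMT figure is back-calculated from the cited rate and fatality count for illustration, not taken from HPMS directly:

```python
def fatality_rate_per_100m_vmt(fatalities: int, vehicle_miles: float) -> float:
    """Fatalities per 100 million vehicle miles traveled (VMT)."""
    return fatalities / vehicle_miles * 1e8

# California, 2002: 1,713 rural fatalities over roughly 64.2 billion rural VMT
# (the VMT value is inferred from the cited 2.67 rate, for illustration only)
rate = fatality_rate_per_100m_vmt(1713, 64.2e9)
print(round(rate, 2))  # 2.67
```

The same normalization underlies the national average of 2.29 and each state comparison in this appendix, so rates are comparable across states with very different travel volumes.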
California officials told us that they estimate they spent about $69.5 million on 58 rural road hazard elimination-related projects in 2003. Examples include: The 2-3 Lane Safety Program. The California Department of Transportation uses past crash analysis to identify cross-centerline crash locations on two- and three-lane roadways for safety investigations. The agency then attempts to use the most cost-effective solutions to make these roadways safer. In 2002, the agency identified 50 areas, 47 of which were located in rural locations. Run-Off-the-Road Task Force. The California Department of Transportation currently has a task force examining locations where a number of run-off-the-road crashes are occurring. The agency then attempts to use cost-effective strategies to reduce the number or severity of these types of collisions. In 2003, about 73 percent of the locations identified were in rural areas. The agency hopes to proceed with the run-off-the-road monitoring program by the end of 2004. California is also using about $48 million of the NHTSA-provided funds to support 732 behavioral programs in fiscal year 2004. Of these funds, California officials identified about $9.9 million being used to support 80 rural road-related programs. These projects include emergency medical initiatives such as the "First There, First Care" program, which will train young drivers in 54 schools in 11 counties on providing basic first aid at the scene of a motor vehicle crash. In addition, California's Office of Traffic Safety has worked with the California Highway Patrol to implement two programs that have rural road safety impacts: Corridor Safety Project. The California Highway Patrol is leading a task force that is examining the safety of all state corridors, based on fatality and accident data. This effort has identified 20 high-risk corridors in the state, of which 16 were two-lane roads, mostly in rural areas. 
The task force is responsible for making both behavioral and infrastructure improvement recommendations to address safety issues on these high-risk corridors. Traffic Collision Reduction on County Roads Project. For the 2004 fiscal year, the Highway Patrol received $1.9 million from the Office of Traffic Safety to reduce crashes on county roads by increasing enforcement of traffic violations that often lead to collisions: speeding, right-of-way violations, failing to drive on the right half of the road, improper turning, and driving under the influence of alcohol or drugs. Georgia During 2002, Georgia had 902 fatalities on its rural roadways. When adjusted for miles traveled, Georgia's fatality rate on rural roads was 1.81 fatalities per 100 million vehicle miles traveled, below the national average of 2.29 fatalities per 100 million vehicle miles traveled. Rural fatalities accounted for approximately 59 percent of all state roadway fatalities in 2002. In fiscal year 2003, Georgia received $975 million in federal-aid highway funds. About $25.3 million of these funds were provided for the Hazard Elimination and Rail-Highway Crossing Programs. Using these funds, the state has undertaken several roadway improvement programs that address aspects of rural road safety. For example, Georgia identified four problem areas that it focused on in 2003: run-off-the-road crashes, intersection crashes, car-train crashes, and animal crashes. A Georgia official said that the run-off-the-road and animal crashes were particularly prevalent in rural settings. He said that the state is adding shoulder rumble strips and centerline reflectors to help reduce run-off-the-road crashes and, to reduce animal crashes, is expanding the recovery area along some roads, culling deer herds, and researching light and sound devices to warn drivers of deer presence. 
In addition, Georgia is developing a Lane Departure Strategic Action Plan with the goal of reducing the lane departure serious injury and death rate from 4.93 per 100 million miles traveled in 2003 to 3.29 in 2008 and preventing 750 serious injuries and deaths annually. A draft of this plan recognizes that roadway departures on rural highways are a predominant concern. To meet this goal, Georgia is developing an approach that will use low-cost construction improvements; corridor enforcement, education, and engineering enhancements; local lane departure safety initiatives; targeted use of medium- to high-cost improvements at high-crash locations; and statewide initiatives to improve safe driver behaviors. According to Georgia officials, the state has also replaced its safety data system. It hopes to upgrade the current system of recording crash locations by using more accurate global positioning technology at the crash scene, which would help it better identify problem areas throughout the state. In addition to these state initiatives, FHWA officials said Georgia is participating in AASHTO research projects that address run-off-the-road crashes and comprehensive state strategic highway safety plans. The state has also participated in two major NHTSA-sponsored behavioral programs: the eight-state evaluation of the "Click It or Ticket" safety belt campaign in 2001 and the current impaired driving strategic evaluation study, according to NHTSA officials. Georgia identified a need to increase use of safety belts, booster seats, and child safety seats among rural and minority populations statewide, so it initiated efforts to involve rural and minority communities in local initiatives to increase safety belt usage rates. Under the impaired driving study, enforcement agencies conduct at least one sobriety checkpoint per month in every county. Minnesota In 2002, 479 people were killed on Minnesota's rural roads. 
When adjusted for miles traveled, Minnesota’s fatality rate on rural roads was about 1.8 fatalities per 100 million miles traveled—less than the national average of 2.29. Rural fatalities accounted for approximately 73 percent of all state roadway fatalities in 2002. In 2003, Minnesota received about $395 million in federal-aid highway funds. About $12.1 million of these funds were provided for hazard elimination projects, for construction-related safety improvements, and for rail-highway crossing improvements. The state also received about $14.7 million for NHTSA programs designed to improve behavioral activities. State officials could not provide a breakdown of how much of these funds was used for rural road safety projects. While the state does not have a specific rural road safety program, state traffic safety officials have implemented several construction and behavioral initiatives to improve rural road safety. The “Towards Zero Deaths” initiative, for example, is an ongoing collaborative program among the state department of transportation, public safety, state patrol, and local “safe community” organizations to reduce highway fatalities. The program provides grants to localities that work with safety officials to coordinate a plan to reduce traffic fatalities. Other behavioral initiatives include the following: NightCAP is a program involving concentrated alcohol patrols scheduled in conjunction with local events that serve alcohol, for example, music festivals that attract big crowds and where alcohol is sold or allowed to be consumed. Local, county, and state law enforcement patrol roads to look particularly for drivers showing signs of impairment. Releases are sent out to local press and broadcast media informing the local population that enforcement will be present during the event. In fiscal year 2003, $615,000 of federal funding was spent on the NightCAP program. About 50 percent of the events were in rural areas of Minnesota. 
Safe & Sober is a project involving municipal and county law enforcement agencies that target impaired driving and occupant protection issues through a combination of enhanced law enforcement and publicity. According to state officials, in fiscal year 2003, $1,335,600 in federal funding was spent on the program. Approximately 50 percent of this program is carried out in rural areas of the state. In addition, in 2003 the state Department of Transportation completed a statewide audit of high crash cost intersections and corridors. The audit ranked the top 200 intersections and 150 corridors with the highest crash costs. Of the top 200 intersections identified, 54 were located in rural areas; of the top 150 corridors identified, 53 were located in rural areas. The Department of Transportation's goal is to address 40 of these high crash cost intersections and corridors for safety improvements each year in the State Transportation Improvement Plan. Further, according to state officials, the Department of Transportation has made extensive use of shoulder rumble strips and is beginning to use centerline rumble strips on two-lane roadways. Approximately $9 million in federal funds was transferred from construction to safety activities in 2003 because Minnesota's laws with regard to repeat drunk drivers did not meet federal requirements. Officials at the state Department of Public Safety said that they plan to use half of those funds for hazard elimination projects such as replacing twisted-end guardrails and researching the visibility effects of installing wider edge lines and reflective wet pavement markings. Officials believe that this will have a major impact on preventing or reducing the severity of run-off-the-road crashes. The Department of Public Safety plans to use the other half to address impaired driving. Specifically, Minnesota plans to upgrade its driver license information system to improve the tracking of problem drivers, focusing on impaired driving. 
The state also plans to implement traffic safety programs promoting safety belt use and discouraging drinking and driving among 21 to 34 year olds. To improve emergency medical services in rural areas, Minnesota plans to reduce the number of "dead spots" (areas where radio messages cannot be heard) so that law enforcement, emergency medical services, and transportation officials can communicate with each other in more remote areas of the state. Pennsylvania In 2002, there were 1,001 fatalities on Pennsylvania's rural roads. When adjusted for miles traveled, Pennsylvania's fatality rate on rural roads was 2.15 fatalities per 100 million vehicle miles traveled, less than the national average of 2.29. Rural fatalities accounted for approximately 62 percent of all state roadway fatalities in 2002. Pennsylvania received about $1.4 billion in federal-aid highway funds in fiscal year 2003. Of these funds, about $21.4 million were provided for hazard elimination projects, construction-related safety improvements, and improvements to safety at rail-highway crossings. During fiscal year 2003, Pennsylvania received about $11.6 million in NHTSA funding for activities designed to influence driving behavior. State officials could not provide a breakdown of how much of these funds was used for rural road safety projects. The Pennsylvania Department of Transportation has a goal of reducing road fatalities by 10 percent between 2002 and 2005. The department has begun several engineering and behavioral improvement initiatives to help reach this goal. For example, to maximize safety in the design and construction of highway projects, the department performs Roadway Safety Audits. These audits are formal examinations of roadways by an independent team of trained specialists that assesses their crash potential and safety performance. The team identifies safety problems so that project officials can evaluate, justify, and select appropriate design changes. 
In 1997, the Pennsylvania Department of Transportation was the first transportation agency in the United States to pilot the program. Since its inception, about 40 audits have been completed. According to the state department of transportation, the audits have prevented undesirable changes during design or construction, maximized opportunities to enhance safety, and minimized missed opportunities to enhance safety. Pennsylvania has introduced two other infrastructure safety modifications aimed at improving rural road safety. First, the state installed 300 miles of centerline rumble strips in an effort to help warn drivers that they have strayed from their lane. State transportation officials estimated that rumble strips could reduce vehicle run-off-the-road crashes by 25 percent. In addition, Pennsylvania implemented a “dot” tailgating treatment program in which dots are painted on the state’s roadways, including rural two-lane roads, to help drivers determine a safe following distance. The spacing of the dots is based on the roadway’s speed limit. Each vehicle is expected to maintain a distance equal to at least two dot lengths from the vehicle ahead of it. The Pennsylvania Department of Transportation also has several initiatives to modify unsafe driving behavior to help reach its 2005 goal. Sobriety checkpoints, roving patrols, and mobile awareness patrols have been implemented to combat drunk driving. In 2002, 129 mobile awareness patrols were conducted. The state also has a program to install ignition interlock devices on the vehicles of those convicted of second or subsequent driving-under-the-influence offenses. The device must remain in the vehicle for 1 year following a 12-month suspension of driving privileges. Since its inception in 2000, the state reports the program has stopped 10,142 attempts to operate a vehicle on the state’s roadways when the operator had a blood-alcohol content equal to or greater than .025 percent. 
The state also has several initiatives to improve safety belt use. Although the state has a secondary safety belt law, it received approval to use the "Click It or Ticket" initiative encouraged by NHTSA. Transportation safety officials are also involved in increasing safety belt use among middle and high school students and in improving the use of child passenger seats through educational and training programs. State traffic safety officials also informed us of programs targeting increased safety belt use among light truck and pickup truck drivers, who state officials believe are more prevalent in rural areas and generally decline to wear safety belts. Texas During 2002, 2,096 people were killed on rural roads in Texas, the highest total in the nation. When adjusted for miles traveled, the fatality rate on rural roads in Texas was about 2.68 fatalities per 100 million vehicle miles traveled, greater than the national average of 2.29. Rural fatalities accounted for approximately 56 percent of all state roadway fatalities in 2002. In fiscal year 2003, FHWA provided Texas with about $2.2 billion in federal-aid highway funds. About $57.6 million of these funds were provided for Hazard Elimination and Rail-Highway Crossing Programs. The state's safety funding under the Surface Transportation Program provided construction-related safety improvements on public roads, transportation facilities, and bicycle or pedestrian pathways or trails, as well as rail-highway crossing safety programs. Texas also received about $26.4 million of federal funds administered by NHTSA in fiscal year 2003, mainly to improve roadway safety through activities designed to influence driving behavior. Texas has appropriated $40 million in state funds to supplement FHWA funding for the Hazard Elimination Program, according to Texas Department of Transportation officials. 
Texas officials identified several initiatives being undertaken to reduce fatalities on the state's rural roads: Texas Department of Transportation officials identified 235 hazard elimination projects that they plan to undertake in fiscal year 2004. These projects, totaling $43.4 million and located mostly on rural roads, include adding intersection beacon lights, widening lanes, adding rumble strips, and removing trees near roads. Due to concerns about high fatality rates on narrow rural two-lane highways, particularly those with limited or no shoulders, district engineers assessed 30,000 miles of rural two-lane highways in 2003, checking the appropriateness of speed limits, the condition of signs and pavement markings, and pavement edge drop-offs and curve warnings. Based on these assessments, changes will be made to address the most important findings. The state is installing shoulder rumble strips on all rural four-lane divided highways and researching the use of edgeline and centerline rumble strips on other roads. Because alcohol-related crashes were the leading cause of motor vehicle fatalities in Texas during 2001, state officials told us they have worked with NHTSA and others to identify the nature of the problem and assess programs that could reduce impaired driving. As part of this effort, the state funded 13 projects aimed at reducing impaired driving in rural areas through increased enforcement and education programs. The state has initiated programs to aid rural crash victims, including new training for emergency medical technicians and first-aid training for police officers and bystanders. Texas is in the process of upgrading its crash data system to make data more timely. Texas is about 2 ½ years behind in entering crash data from police accident reports into its data system. State officials pointed out that without more timely data, it is difficult to determine if the actions taken on a stretch of road had the intended effect. 
Texas plans to have a new system in place by fiscal year 2005, at a cost of $14 million. The new Texas system will encourage local law enforcement agencies to collect, validate, and report crash data electronically. It will also provide centralized analysis, review, and data reporting to agencies that plan and conduct state highway safety programs. Low-Cost Safety Improvements The Federal Highway Administration (FHWA) has identified more than 40 low-cost best practices as alternatives to capital construction at high-crash locations. These improvements are presented to state and local traffic engineers in FHWA's Low-Cost Safety Improvements Workshops. In addition, FHWA has classified the strategies as proven, tried, or experimental. Proven strategies are those that have been used in one or more locations and for which properly designed evaluations have shown them to be effective. Tried countermeasures are those that have been implemented in a number of locations and that may even be accepted as standards or standard approaches, but for which no valid evaluations have been found. Experimental strategies are those that have been suggested and that at least one agency has considered sufficiently promising to try on a small scale in at least one location. Table 2 summarizes the low-cost alternatives, identifies the potential safety impacts noted in the course materials, and indicates whether each countermeasure is proven, tried, or experimental. GAO's Mission The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. 
GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading. Order by Mail or Phone To Report Fraud, Waste, and Abuse in Federal Programs Public Affairs
Traffic crashes are a major cause of death and injury in the United States. In 2002, there were 42,815 fatalities and over 2.9 million injuries on the nation's highways. Crashes on rural roads (roads in areas with populations of less than 5,000) account for over 60 percent of the deaths nationwide, or about 70 deaths each day. Further, the rate of fatalities per vehicle mile traveled on rural roads was over twice the urban fatality rate. GAO identified (1) the factors contributing to rural road fatalities, (2) federal and state efforts to improve safety on the nation's rural roads, and (3) the challenges that may hinder making improvements in rural road safety. GAO obtained information from the Federal Highway Administration (FHWA), the National Highway Traffic Safety Administration (NHTSA), and other organizations with knowledge of these issues. In addition, GAO analyzed fatal crash data on rural roads from Department of Transportation databases and visited five states that account for about 20 percent of the nation's rural road mileage. GAO also contacted academic experts and examined legislative proposals for improving rural road safety. We provided copies of a draft of this report to the Department of Transportation for its review and comment. In discussing this report, agency officials noted that safety should be part of every project designed and built with federal-aid highway funds. Four primary factors contribute to rural road fatalities--human behavior, roadway environment, vehicles, and the care victims receive after a crash. Human behavior involves the actions taken by or the condition of the driver and passengers. Human behaviors are important because almost 70 percent of the unrestrained (unbelted) fatalities between 2000 and 2002 occurred in rural crashes. Additionally, the majority of alcohol- and speeding-related fatalities occurred on rural roads. 
Roadway characteristics that contribute to rural crashes include narrow lanes, sharp curves, trees, and animals. Vehicle factors include problems that arise due to the design of vehicles and are important for both urban and rural roads. Care of crash victims also contributes to rural fatalities because of the additional time needed to provide medical attention and the quality of rural trauma care. In fiscal year 2003, FHWA provided about $27.4 billion in federal-aid highway funds to states. While many projects using these funds have safety features, the amount used for safety is not tracked. However, about $648 million of these funds went to the Hazard Elimination and Rail-Highway Crossings Programs and were specifically provided for safety purposes--about $330 million of which went to improve rural road safety. NHTSA provided about $671 million to states for activities that influence both rural and urban drivers' behavior in such areas as safety belt use, drunk driving, or speeding. States are ultimately responsible for selecting the projects to support with federal funding. The five states we visited used a portion of the funding received for rural road safety. Many challenges hinder efforts to improve rural road safety--for example, not all states have adopted safety belt and drunk driving laws that might curb behavior contributing to rural road fatalities. In addition, states are limited in using federal-aid highway funds for certain rural roads, and most rural roads are the responsibility of local governments that may lack the resources to undertake costly projects to improve road safety. Further, some states lack adequate crash data to support planning and evaluation of safety projects. Lastly, the nature of rural areas makes it difficult to provide adequate emergency medical care.
Background Through CHIP, states provide health insurance coverage for children in families whose household incomes are too high to qualify for Medicaid. CHIP is funded jointly by the federal government and states. States administer CHIP under federal standards, and the state programs may vary, for example, in the services covered, costs to individuals and families, and eligibility standards. Specifically, CHIP income eligibility standards vary across states, with most states’ upper income eligibility levels between 200 and 300 percent of the federal poverty level and the highest eligibility level being 400 percent of the federal poverty level. PPACA requires states to maintain their current eligibility levels for children in CHIP and Medicaid through fiscal year 2019. Thus, under current law, some states could choose to eliminate or scale back coverage for children in their CHIP and Medicaid programs beginning in fiscal year 2020, even if federal funds for these programs are available. PPACA required the establishment of exchanges by January 1, 2014, to allow consumers to compare individual health insurance options available in each state and enroll in coverage. As of June 2015, 17 states had established state-based exchanges and 34 states had FFEs. The exchanges include certified QHPs offered in the states by the participating issuers of coverage. QHPs are required to meet certain benefit design, consumer protection, and other standards. Issuers may offer multiple QHPs and may also offer other health insurance products outside of the exchange, such as a CHIP managed care plan, Medicare Advantage plan, Medicaid managed care plan, or other commercial insurance products. While the CHIP program was created to address the health care needs of children in low-income families, QHPs offered through the exchanges established by PPACA are intended to target a broader population. 
Specifically, PPACA contained a number of provisions that were intended to make coverage more available and affordable for individuals seeking coverage in the private individual and small group health insurance markets. Some of these provisions established new rules that limit how much issuers can vary the premiums they charge certain individuals or groups and that prohibit issuers from denying coverage based on an individual’s health status, among other things. Since the introduction of QHPs in 2014, researchers have found that issuers have increasingly employed cost-containment tools, such as creating narrow networks that include a smaller group of providers and hospitals, and tiering networks—that is, creating several networks with differing levels of coverage that reflect the different out-of-pocket costs an enrollee may incur. The federal government and states each play a role in overseeing CHIP, CHIP plans, exchanges, and QHPs: CHIP. CMS is the federal agency responsible for overseeing states’ implementation and administration of their CHIP programs, including establishing federal standards for these programs and ensuring that states take steps to adequately oversee issuers’ compliance with these standards. At the state level, state agencies such as the Medicaid agencies or departments of health or social services are responsible for administering CHIP programs and overseeing CHIP plans. For CHIP programs operated through the use of managed care, the relevant state agencies contract with managed care organizations to provide services to CHIP enrollees. State departments of insurance may also play a role in overseeing CHIP plans, to the extent these plans are subject to state insurance rules. QHPs. Regardless of the exchange type, CMS has direct oversight responsibilities for the PPACA exchanges, as CMS is responsible for certifying state exchanges for operation and directly operating the FFE. 
In addition, CMS is responsible for establishing minimum QHP certification requirements that all QHPs must meet in order to participate in an exchange. In the FFE states, CMS oversees compliance with these requirements; in the states with state-based exchanges, the states are responsible for ensuring the plans comply. Federal regulations require that all exchanges have procedures to annually certify QHPs to ensure they are in compliance with exchange requirements. While we and others have reported on CHIP enrollees’ experiences with access to health care compared to those with private insurance or without insurance, and on comparisons of other aspects of CHIP and QHPs, little is known about whether the provider networks used by QHPs are adequate to address the health care needs of children or how CHIP networks compare with those of QHPs. Specifically, we reported in November 2013 that survey data indicated that CHIP enrollees reported comparably positive responses regarding their ability to obtain care when compared with responses for enrollees with private insurance, but that approximately 18 percent of CHIP enrollees reported difficulties seeing a specialist. We also reported in February 2015 that coverage of services by selected CHIP plans in five selected states was generally comparable to that of the selected QHPs, with the notable exceptions of pediatric dental and certain enabling services such as translation and transportation services, which were covered more frequently by the CHIP plans. However, as noted by MACPAC in its March 2015 report, little has been reported on the provider network differences among CHIP, Medicaid, and QHPs. HHS’ Assistant Secretary for Planning and Evaluation contracted for studies looking at provider networks in CHIP, Medicaid, and QHPs in six urban areas, but, as of November 2015, HHS had not published the studies. 
Federal Network Adequacy Standards Are Broad for Both CHIP Plans and QHPs, and Selected States Are More Focused on Pediatric Providers for CHIP Plans Than for QHPs At the federal level, broad network adequacy standards apply to CHIP plans and QHPs. At the state level, most of the five states we reviewed required CHIP plans to adhere to network adequacy standards that related specifically to pediatric provider types. The selected states required QHPs to follow fewer pediatric provider-specific standards. Broad Federal Network Adequacy Standards Apply to CHIP Plans and QHPs Broad federal network adequacy standards apply to CHIP plans and QHPs. States that administer their CHIP programs through managed care plans must adhere to federal requirements governing CHIP managed care organizations, while QHPs in both state-based exchanges and FFEs are also subject to federal requirements for provider network adequacy. Specifically: Federal law requires that CHIP managed care plans provide assurances that, within their service areas, they have the capacity to serve their expected enrollment; that they maintain an adequate number, mix and distribution of providers; and that they offer an appropriate range of services and access to preventive and primary care services for the expected population. Because CHIP managed care plans primarily cover children, these plans are thus required to include a sufficient network of pediatric providers. Federal regulations require QHPs to maintain networks that are sufficient in number and types of providers in order to ensure that all services are accessible to enrollees without unreasonable delay. Regulations also require that QHP networks include “essential community providers” to ensure reasonable and timely access to a broad range of providers for low-income and medically underserved individuals. CMS has established more specific network adequacy criteria applicable to QHPs participating in the FFE, which CMS operates. 
For example, in its annual certification guidance to QHP issuers in FFE states for benefit years 2015 and 2016, CMS instructed issuers to submit a list of providers and their geographic locations so that CMS could determine whether an issuer met the “reasonable access” standard—that is, that the issuer maintains networks that are sufficient in number and types of providers in order to ensure that all services are accessible to enrollees without unreasonable delay. CMS also noted that it considers a QHP network to meet the essential community provider requirement when (1) the network includes at least 30 percent of available essential community providers in the QHP’s service area, and (2) the network covers at least one provider in each essential community provider category in each county where an essential community provider in that category is available. For QHPs participating in the FFE, CMS specified that essential community provider categories include, but are not limited to, federally qualified health centers, Indian Health providers, and hospitals. None of these specified categories are specific to pediatric providers. CMS is considering changes to CHIP plan and QHP network adequacy requirements. With regard to the CHIP program, CMS issued a proposed rule in June 2015 that, if finalized as proposed, would amend current Medicaid and CHIP managed care regulations to reduce variation in how states evaluate and define network adequacy. With regard to QHPs in FFE states, CMS issued a proposed rule in December 2015 that, if finalized as proposed, would allow states in which an FFE operates to select a quantifiable network adequacy standard—such as a travel time or distance standard for the proximity of network providers’ locations to enrollees’ residences—applicable to QHPs in that state. 
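The essential community provider requirement reduces to a share test and a category-by-county coverage test. The sketch below is illustrative only: the tuple layout and the function name are assumptions, not CMS's actual certification tooling.

```python
# Illustrative sketch of the two essential community provider (ECP)
# conditions described above. Data layout and names are hypothetical.

def meets_ecp_standard(network_ecps, available_ecps, min_share=0.30):
    """Each ECP is a (provider_id, category, county) tuple.

    Condition 1: the network includes at least `min_share` (30 percent)
    of the available ECPs in the service area.
    Condition 2: the network covers at least one provider in each ECP
    category in each county where such a provider is available.
    """
    share_ok = len(network_ecps) >= min_share * len(available_ecps)

    available_cells = {(cat, county) for _pid, cat, county in available_ecps}
    covered_cells = {(cat, county) for _pid, cat, county in network_ecps}
    coverage_ok = available_cells <= covered_cells  # subset test

    return share_ok and coverage_ok
```

Under these assumptions a network holding well over 30 percent of the area's ECPs would still fail if an entire category went uncovered in some county, mirroring the conjunctive wording of the guidance.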
If the state does not adopt such a standard or does not review QHP network adequacy, a default federal standard imposing specific time and distance requirements would apply to QHPs in the state. CMS indicated that the agency followed proposed changes to the NAIC network adequacy model in considering modifications to the QHP network adequacy standards. Selected States Also Had Network Adequacy Standards, but Held CHIP Plans More Often Than QHPs to Pediatric-Specific Standards Overall Network Adequacy Standards in Selected States The five selected states we examined had one or more specific network adequacy standards for CHIP plans and QHPs. These standards included the following: Provider-to-enrollee ratios or quantitative standards for a minimum number of providers per enrollee or set of enrollees. Two of the selected states—Massachusetts and Washington—required CHIP plan networks to follow specific provider-to-enrollee ratios. For instance, in Massachusetts, managed care plans in which CHIP children are enrolled must include one primary care provider for every 200 enrollees. These same two states also required at least some QHPs to have a minimum provider-to-enrollee ratio for certain provider types, such as primary care providers. These primary care providers may include, but were not exclusive to, pediatric primary care providers. Travel time or distance standards for the proximity of network providers’ locations to all or some proportion of enrollees’ residences; such standards may differ for rural and urban areas. All five states required CHIP plans to adhere to specific and quantitative travel time standards, travel distance standards, or both. For example, Alabama required that, for 90 percent of enrollees, one hospital must be available within 30 miles of enrollees’ homes, and two behavioral health providers must be available within 10 miles of enrollees’ homes in urban areas or 45 miles in rural areas. 
All five selected states also had specific quantitative travel time or distance standards for QHPs for certain provider types, such as primary care providers. These primary care providers may include, but were not necessarily exclusive to, pediatric primary care providers. Capacity or availability standards, which may include requirements that a certain number or proportion of providers are accepting new patients or may require specific limits on appointment wait times. Three of the selected states—Massachusetts, Texas, and Washington—required CHIP plan networks to follow provider capacity or availability standards. Washington, for example, required CHIP plans to ensure that non-emergency, routine primary care be available within 10 days. These same three states also required QHPs to take into account the capacity or availability of network providers. Specific Network Adequacy Standards for Pediatric Provider Types in Selected States While federal network adequacy standards for QHPs do not impose requirements specifically related to pediatric providers, individual states may adopt such requirements. For CHIP plans, most selected states had specific requirements for pediatric provider types, but, for QHPs, only two states had specific requirements for pediatric provider types. (See fig. 1.) Specifically, four of the selected states—Alabama, Massachusetts, Texas, and Washington—required CHIP plans to meet certain pediatric provider standards. This was particularly true for travel time and distance standards, as well as capacity or availability standards. For example, Texas required CHIP plans to include in their networks one age-appropriate primary care provider within 30 miles of enrollees’ homes for 90 percent of enrollees, and Alabama required CHIP plan networks to provide access to two pediatric primary care providers within a 20-mile radius of enrollees’ homes for 90 percent of enrollees. 
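A provider-to-enrollee ratio standard like the Massachusetts example above (one primary care provider for every 200 enrollees) reduces to simple arithmetic. The following is an illustrative sketch; the function name and the 1:200 default are assumptions for illustration, not any state's actual compliance code.

```python
import math

# Hypothetical check of a provider-to-enrollee ratio standard, e.g.,
# one primary care provider per 200 enrollees. Names are illustrative.

def ratio_compliant(num_providers, num_enrollees, enrollees_per_provider=200):
    """True if the network has at least one provider for every
    `enrollees_per_provider` enrollees (partial groups round up)."""
    required = math.ceil(num_enrollees / enrollees_per_provider)
    return num_providers >= required
```

Rounding partial groups up means that 1,001 enrollees require six providers under a 1:200 standard, not five; a state could instead choose to truncate, which would be a looser reading of the same ratio.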
In addition, Texas required CHIP plans to make preventive health service appointments for children available within a timeframe that is in accordance with standards set by a national pediatric provider group. For QHPs, fewer of the selected states had adopted standards related to pediatric provider types than for CHIP plans. Specifically, two of the five selected states—Texas and Washington—had QHP standards containing requirements specific to pediatric provider types. For example, Washington required QHP issuers to demonstrate that 80 percent of the covered children in a given service area have access to a pediatrician within 30 miles of their homes for an urban area or 60 miles for a rural area and to pediatric specialty services within 60 miles of their homes for an urban area and 90 miles for a rural area. In another example, Texas required that QHP issuers provide routine care for children within two months of the request. Nearly All Selected Issuers Included at Least One Children’s Hospital in Their Networks, but Many Expressed Challenges Recruiting Certain Specialists Eighteen of 19 selected issuers that offered CHIP plans, QHPs, or both, in the most populous counties of the five selected states reported including at least one children’s hospital in their provider networks for their CHIP plans and QHPs. Most of them—16 of the 18—reported including more than one children’s hospital. Representatives from one QHP-only issuer told us they did not include a children’s hospital in their QHP network, but they instead provide access to children’s pediatric services—such as neonatal intensive care and general pediatric surgery—through an agreement with four hospitals that treat children but are not limited to children. All of the selected issuers of CHIP plans and QHPs told us they had a policy to allow enrollees to obtain case-by-case exceptions when certain services or providers are unavailable in-network. 
Officials representing some of the children’s hospitals we spoke with, however, raised concerns about not being included in all plan networks and about the potential effect on children’s access to the specialty care they may need. Representatives from all nine selected children’s hospitals we contacted in the selected states told us that their hospitals are currently included in networks of many—but not all—CHIP plans and QHPs that are offered in the selected counties. Representatives from five of the nine children’s hospitals located in four different states noted concerns about some aspects of network inclusion that could affect access for children who need specialty care through their hospitals. Specifically: Representatives from three of these five children’s hospitals told us that, in some QHPs that have tiered networks, their hospitals are included in tiers that are associated with higher enrollee cost-sharing. Representatives from two of these five children’s hospitals told us they were concerned about their future inclusion in CHIP and QHP networks, explaining that their hospital’s inclusion in networks could vary from year to year. A representative from one of these two children’s hospitals also noted that the fundamental network adequacy issue for the pediatric population is the small percentage of children with complex health care needs, who typically account for a large percentage of pediatric medical costs. Representatives from three of these five children’s hospitals noted that when their hospitals are not in a CHIP or QHP network, treating CHIP or QHP enrollees at their facilities increases the administrative burden placed on the hospitals as they have to arrange case-by-case exceptions with plan issuers. The selected issuers that offered both CHIP plans and QHPs told us they had the same or similar provider networks for their CHIP plans and QHPs. 
For example, one issuer told us that in 2014 its Medicaid and CHIP plan networks were different in that some Medicaid providers did not initially join QHP networks. However, the issuer told us there was an increase in the number of Medicaid providers willing to join QHP networks in 2015. Representatives from another issuer told us they had one provider network for all plans, including CHIP plans and QHPs. All of the 19 selected issuers we contacted indicated that pediatric specialists are included in each of their networks. However, many expressed challenges recruiting certain types of pediatric specialists. Many of the challenges related to location or compensation and also reflected provider availability nationwide: Location. Representatives from four issuers—one QHP issuer, two issuers of CHIP plans, and one issuer of both a CHIP plan and QHP—in two states told us that it is difficult to recruit and retain pediatric behavioral health providers. These representatives further noted that this problem is not specific to their county or state, but is related to a nationwide shortage of children’s behavioral health providers. In addition, representatives from four issuers of CHIP plans in three states told us that recruiting specialists in metropolitan counties is generally not as difficult as recruiting specialists in rural counties, and difficulties recruiting specialists in rural counties are a problem affecting all of their insurance plan networks, not just CHIP plans. Compensation. Representatives from one issuer of a CHIP plan told us that some specialists generally require significantly higher compensation than CHIP plans typically pay, making it difficult for the issuer to recruit certain pediatric specialists—such as cardiologists, cardiovascular surgeons, neurologists, neurosurgeons, and urologists—to its network. 
In addition, the issuer noted that these specialists are difficult to contract with due to the limited number of providers practicing in these specialties. CMS Monitors State Oversight of Network Adequacy for CHIP Plans and Directly Monitors Adequacy for QHPs in Federally Facilitated Exchanges; Selected States’ Monitoring Varied CMS Monitored State Oversight of CHIP Network Adequacy through Contract and State Plan Reviews, and Directly Monitored Adequacy for QHPs in Federally Facilitated Exchange States States have primary responsibility for administering CHIP and for overseeing CHIP plan compliance with network adequacy standards, and CMS monitors these state oversight activities. CMS officials reported conducting certain monitoring activities for QHPs to assess the adequacy of provider networks in FFE states. Officials from most of the selected states’ CHIP agencies and departments of insurance reported monitoring issuers’ compliance with state CHIP and QHP standards, but states’ frequency of monitoring varied. Federal Monitoring of CHIP Network Adequacy CMS officials told us that the agency monitors the oversight activities of states, which have primary responsibility for administering CHIP and for overseeing CHIP plan compliance with network adequacy standards, primarily by reviewing state contracts with plan issuers and requiring certain assurances from states and issuers. Federal law requires states to establish standards for access to care under CHIP managed care plans to ensure that covered services are available within reasonable timeframes and in a manner that ensures both continuity of care and adequate primary care and specialized services capacity; states must also provide assurances to CMS that these standards are met. These standards may include, for example, provider-to-enrollee ratios, travel time or distance standards, and capacity or availability standards. 
CMS monitors states’ CHIP oversight activities in the following ways: Reviews state contracts with issuers of CHIP plans. Since July 1, 2009, CMS has required states to submit for CMS review all new, extended, renewed, or amended CHIP managed care contracts that states enter with managed care organizations to ensure these contracts comply with federal requirements, including those relating to access to care. Requires states to develop and implement plans that include access standards. States must operate their CHIP programs in accordance with a CMS-approved state CHIP plan that must include a description of the methods the states use to ensure the quality and appropriateness of care provided under the plan. Each state that contracts with managed care organizations to provide CHIP benefits also must develop and implement a Quality Assessment and Improvement Strategy, which must include access to care standards that ensure covered services are available within reasonable timeframes and in a manner that ensures continuity of care and adequate primary care and specialized services capacity. CMS is required to monitor the development and implementation of this plan. In addition, each contract that a state enters into with a managed care organization to provide CHIP benefits must include a requirement for an annual external independent review to ensure the plan’s quality and timeliness of, and access to, covered items and services under the contract. CMS officials told us that they are not aware of any concerns about children enrolled in CHIP not having access to pediatric specialists, and that they think states make a concerted effort in establishing provider networks for their CHIP plans for children to ensure sufficient pediatricians and pediatric specialists. 
Federal Monitoring of QHP Network Adequacy In contrast with its indirect oversight role over CHIP plans, CMS is responsible for directly monitoring QHPs’ compliance with QHP certification standards in FFE states. CMS officials reported using three types of monitoring activities to assess the adequacy of QHPs’ provider networks in FFE states—through the annual QHP certification process, comprehensive issuer compliance reviews, and post-certification reviews, as follows: Annual QHP certification process. CMS conducts an annual certification process of QHPs in FFE states. CMS officials told us that during this process they assess QHPs’ provider networks using the “reasonable access” standard in order to identify networks that potentially fail to provide access without unreasonable delay, as required by federal regulations. CMS officials told us they do not assess QHP networks for their adequacy of pediatric providers or pediatric specialists because there have not historically been network adequacy concerns with these types of providers. During the certification process, CMS officials told us they analyze issuers’ QHP provider network data on the providers and types of providers in the networks for each service area using a computerized geographic mapping and analytics tool. CMS compares the QHPs’ network data against internal CMS metrics, including time and distance standards for certain provider categories that have historically raised network adequacy concerns—hospitals, mental health, oncology, primary care, and dental. According to CMS officials, 17 QHP issuers were flagged as having potential network adequacy concerns during the certification process for benefit year 2015, resulting in CMS communicating with the issuers through an iterative process to obtain more information. 
CMS officials reported that these issuers either provided what CMS officials deemed to be a reasonable justification for the lack of providers, such as a lack of available providers in a specialty or patterns of care that reasonably justify the lack of providers, or they provided CMS with data to indicate they included additional providers in their networks since their initial data submission. CMS officials said that, for benefit years 2014 and 2015, all issuers ended up providing adequate information about their networks to be able to attain QHP certification. Comprehensive issuer compliance reviews. CMS officials reported monitoring QHPs’ compliance with provider network standards in FFE states through comprehensive issuer compliance reviews, though network adequacy is only one of many elements in these reviews. During a compliance review, CMS reviews an issuer’s policies and procedures related to CMS’s internally established availability and accessibility standards and also reviews issuers’ compliance with other federal standards, such as QHPs’ rates, benefit design, and marketing. CMS officials reported that for benefit year 2014 they conducted compliance reviews of 21 issuers; for benefit year 2015, CMS reported having conducted such reviews of 30 issuers. Post-certification reviews. CMS officials reported that they also conduct post-certification reviews, which focus on a specific topic and may be conducted for a sample of issuers or for all issuers, depending on the focus of the review. For example, prior to the start of benefit year 2015, officials said they reviewed all certified QHP issuers’ websites to make sure the links to their provider directories were compliant with CMS network adequacy standards—that is, that the links worked and were easily accessible. In addition to these monitoring activities, CMS officials told us they also receive and respond to consumer complaints about QHPs. 
According to officials, when a complaint of that nature reaches CMS, the agency will follow up with the consumer on an ad hoc basis. While officials reported that they have heard anecdotally of problems with network adequacy from advocacy groups, they were not aware of any complaints specific to pediatric providers. Most Selected States Monitored CHIP and QHP Network Adequacy, but the Frequency of Monitoring Varied Selected States’ Monitoring of CHIP Network Adequacy Officials from CHIP agencies in three of the five selected states— Massachusetts, Texas, and Washington—told us they regularly monitor CHIP plan issuers’ compliance with the states’ CHIP network adequacy standards, but the frequency with which they reported doing this varied. Specifically: CHIP officials from these three states told us they require CHIP plan issuers to submit certain provider network information at the time the plan and network are established and then quarterly or annually thereafter. For example, in Washington, issuers must demonstrate the ability to service 80 percent of the eligible CHIP population in a given service area. Washington CHIP officials told us that issuers must submit information at least quarterly on all of their providers in each service area; this information is entered into a computerized geographic access program that assesses the locations of providers in relation to all potential CHIP enrollees in a service area and measures the results against the state’s distance standards. The officials said they specifically focus on an issuer’s network inclusion of 17 provider types, 6 of which they deem to be “critical” for CHIP and Medicaid, including hospitals, pharmacies, primary care providers, pediatric primary care providers, obstetricians, and behavioral health providers. Additionally, issuers must annually report information to the state CHIP agency, such as their provider-to-enrollee ratios and provider utilization ratios. 
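A geographic access check of the kind Washington's program performs, measuring the share of enrollees with at least one network provider inside a distance standard, could be sketched as follows. This is a minimal sketch: the function names, sample coordinates, and the use of a great-circle (rather than driving) distance are illustrative assumptions, not the state's actual tool.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical sketch of a geographic access measurement: the percent of
# enrollee locations with any network provider within a distance standard.

EARTH_RADIUS_MILES = 3958.8

def miles_between(a, b):
    """Great-circle (haversine) distance in miles between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(h))

def pct_with_access(enrollees, providers, max_miles):
    """Percent of enrollee locations with a provider within max_miles."""
    covered = sum(
        any(miles_between(e, p) <= max_miles for p in providers)
        for e in enrollees
    )
    return 100.0 * covered / len(enrollees)
```

A result below the standard's threshold (for example, 80 or 90 percent of enrollees, depending on the state) would flag the service area for follow-up with the issuer.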
In contrast, CHIP officials in the other two selected states—Alabama and Pennsylvania—told us they assess CHIP plan issuers’ compliance with state network adequacy standards at the time the network is established and then on an ad hoc basis thereafter. For example, officials from Pennsylvania told us they would request network information if they received a complaint about the network or if a provider group or hospital left the network. CHIP officials in all five states also told us that they track any consumer complaints received about CHIP plan provider networks. Selected States’ Monitoring of QHP Network Adequacy Officials from most of the selected states—Massachusetts, Pennsylvania, Texas, and Washington—told us that they rely primarily on complaints, network changes, and other concerns to prompt their monitoring of QHPs’ network adequacy. For example: Department of insurance officials from Texas—an FFE state—noted that QHP issuers must re-submit provider network information when there is a material change to the network, and, if the updated network is no longer adequate, the issuer must also submit an access plan and a request for a waiver in order to continue to offer QHPs in that service area. Department of insurance officials from Pennsylvania—another FFE state—told us that if they receive an access complaint about a QHP, staff will investigate and alert CMS to the problem. Department of insurance officials from one FFE state—Alabama—told us that they do not assess or monitor QHP provider networks, nor do they track consumer complaints. Officials from the departments of insurance in Massachusetts, Pennsylvania, and Washington told us that, as of mid-2015, they had received very few or no complaints about QHPs’ provider networks in 2014 and in 2015. Agency Comments We provided a draft of this report to the Secretary of Health and Human Services. HHS provided technical comments that we incorporated as appropriate. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Selection Criteria for States, Issuers, and Children’s Hospitals This appendix describes the methodology we used to select states, issuers, and children’s hospitals to address our objectives to examine: (1) federal and state provider network adequacy standards State Children’s Health Insurance Program (CHIP) plans must meet, particularly for pediatric providers, and how these compare to standards for qualified health plans (QHP); (2) the extent to which selected issuers of CHIP plans and QHPs include children’s hospitals in their networks and otherwise help to ensure access to pediatric specialists; and (3) how the federal government and selected states monitor CHIP plan and QHP compliance with provider network adequacy standards. State selection We selected five states that administered CHIP separate from their Medicaid program for the majority of their CHIP enrollees, and covered children ages 0 to 18 in their separate CHIP programs—Alabama, Massachusetts, Pennsylvania, Texas, and Washington. 
The five selected states varied in the type of health insurance exchange where QHPs are sold (i.e., federally facilitated exchange or state-based exchange); the size of their child population; the number of children enrolled in their separate CHIP program as of 2013; the estimated number of children enrolled in a QHP for 2015; and the 2014 CHIP upper income eligibility standard. (See table 1.) Within each selected state, we identified the most populous county (based on 2013 U.S. Census data) from which we selected a set of issuers of CHIP plans and QHPs. To do this, we obtained data on 2014 or 2015 enrollment for all CHIP plans and QHPs offered in each selected county, as well as QHP 2014 or 2015 premium data, from officials at the Centers for Medicare & Medicaid Services (CMS) and agencies in each selected state that administer CHIP, departments of insurance, and exchanges (in the two state-based exchange states). From a total of 37 issuers of CHIP plans and QHPs that offered plans in the five selected counties, we selected 19—4 issuers that only offered a CHIP plan, 8 issuers that only offered QHPs, and 7 issuers that offered both a CHIP plan and QHPs. The 19 issuers we selected included issuers of the largest CHIP plan and largest QHP in each selected county, based on enrollment for benefit year 2014 or 2015, in order to obtain information on issuers who cover a large share of CHIP and QHP enrollees. Because QHP issuers offered more than one QHP in a given county, we selected QHP issuers based on total county enrollment in each issuer’s silver plan with the lowest premium for 2014 or 2015. Where possible, within each state we selected at least one issuer that offered: (1) only a CHIP plan, (2) only QHPs, and (3) both a CHIP plan and QHPs. 
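The QHP issuer-selection rule above reduces to a max-over-min computation: for each issuer, find its lowest-premium silver plan in the county, then pick the issuer whose such plan has the most enrollment. This sketch uses invented issuer names, premiums, and enrollment figures purely for illustration.

```python
# Illustrative plan data (issuers, premiums, and enrollment are hypothetical).
plans = [
    {"issuer": "Issuer A", "metal": "silver", "premium": 310, "enrollment": 12000},
    {"issuer": "Issuer A", "metal": "silver", "premium": 290, "enrollment": 9000},
    {"issuer": "Issuer B", "metal": "silver", "premium": 300, "enrollment": 15000},
]

def lowest_cost_silver(plans, issuer):
    # Each issuer's cheapest silver plan offered in the county.
    silver = [p for p in plans if p["issuer"] == issuer and p["metal"] == "silver"]
    return min(silver, key=lambda p: p["premium"])

# Select the issuer whose lowest-premium silver plan has the most enrollment.
issuers = sorted({p["issuer"] for p in plans})
largest = max(issuers, key=lambda i: lowest_cost_silver(plans, i)["enrollment"])
```

Note that the comparison is by enrollment in the cheapest silver plan, not the issuer's total enrollment, mirroring the methodology described above.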
The CHIP plans and lowest-cost silver plans offered by the 19 selected issuers provided coverage to at least 73 percent of enrollment in CHIP managed care plans and at least 84 percent of enrollment in lowest-cost silver QHPs in each selected county. See table 2 for the selected counties and the number of issuers that offered CHIP plans, QHPs, or both in each county. In four of the five states, we selected a set of hospitals in each selected county whose mission was to primarily serve children—referred to as children’s hospitals. In the fifth selected state, the selected county did not have a children’s hospital, so we contacted children’s hospitals in a neighboring county. We contacted a total of nine children’s hospitals—at least one in each selected state—and interviewed or received written information from all of them. Appendix II: GAO Contact and Staff Acknowledgments In addition to the contact named above, Kim Yamane, Assistant Director; Sandra George; Kate Nast Jones; Laurie Pachter; and Nina Verevkina made key contributions to this report.
Federal funding for CHIP expires at the end of fiscal year 2017. Any state with insufficient federal CHIP funding is required to have procedures to enroll eligible children in QHPs certified by HHS as comparable to CHIP, if any such QHPs are available. Little is known about how provider networks offered in QHPs compare with those in CHIP plans. GAO was asked to review the inclusion of pediatric providers, including children's hospitals—where many children access pediatric specialists—in CHIP and QHP networks. This report examines (1) federal and selected state CHIP and QHP network adequacy standards, (2) the extent to which selected issuers of CHIP plans and QHPs include children's hospitals and otherwise help ensure access to pediatric specialists, and (3) how CMS and selected states monitor CHIP plan and QHP compliance with adequacy standards. GAO selected five states—Alabama, Massachusetts, Pennsylvania, Texas, and Washington—that varied based on whether the state or CMS operated the exchange on which QHPs were offered, as well as in the number of children in CHIP and in the state overall. GAO then selected issuers of the largest CHIP plan and QHP in the states' largest county, based on the most recently available enrollment data, and at least one children's hospital in each state. GAO reviewed federal and state laws and regulations and interviewed officials from CMS, the selected states, issuers, and children's hospitals. GAO findings on selected states and entities are not generalizable. Broad federal provider network adequacy standards apply to health plans in the joint federal-state State Children's Health Insurance Program (CHIP) and to qualified health plans (QHP)—private health plans offered on health insurance exchanges. These standards measure the adequacy of the networks of physicians, hospitals, and other providers participating in each plan. 
The five selected states GAO reviewed had one or more specific network adequacy standards, including: All five states required CHIP plans and QHPs to adhere to specific and quantitative standards for travel time or distance for the proximity of network providers' locations to enrollees' residences; some had both. Three selected states required CHIP plan and QHP networks to follow provider capacity or availability standards, including, for example, specific limits on appointment wait times. Two selected states required CHIP plan and QHP networks to follow specific provider-to-enrollee ratios. More of the five states that GAO reviewed had child-focused network adequacy standards for CHIP plans than for QHPs. For CHIP plans, four of the five states had specific requirements for pediatric provider types, but, for QHPs, two of the five selected states had requirements for pediatric provider types. Nearly all of the 19 selected issuers that GAO interviewed stated that they included at least one children's hospital in their CHIP and QHP networks. Most of the issuers noted they included more than one. One of the selected issuers—a QHP-only issuer—informed GAO that it did not include any children's hospitals, but noted having an arrangement with another hospital to provide certain pediatric services. Officials from most of the nine selected children's hospitals GAO interviewed raised concerns around not being included in all plan networks and the potential effect of this on children's access to specialty care they may need. Officials from the selected issuers also noted challenges recruiting certain types of pediatric specialists related to geographic location and compensation. The Centers for Medicare & Medicaid Services (CMS)—the federal agency that oversees CHIP and QHPs—monitors state oversight of network adequacy for CHIP plans and is responsible for directly monitoring QHPs' network adequacy in states with federally facilitated exchanges. 
For CHIP, CMS officials told GAO they review state contracts and plans to assure compliance with access requirements, and, for QHPs, they monitor network adequacy through an annual certification process as well as other types of review. Officials from most of the five selected states told GAO they also monitored issuers' network adequacy compliance, but the frequency of monitoring varied. For example, three of the five selected states told GAO they require CHIP plan issuers to submit certain provider network information when the plan and network are established, then quarterly or annually thereafter. Officials from most of the selected states told GAO that they rely primarily on complaints, network changes, and other concerns to prompt their monitoring of QHPs' network adequacy. The Department of Health and Human Services (HHS) provided technical comments on a draft of this report that GAO incorporated, as appropriate.
Background VA serves veterans of the U.S. armed forces and provides health, pension, burial, and other benefits. In fiscal year 2015, VA spent about $20 billion on goods and services via contracts—more than a quarter of its discretionary budget. As shown in the organizational chart below, these contracts were awarded by VA’s eight heads of contracting activity (HCAs). The department’s three operational administrations—VHA, the Veterans Benefits Administration, and the National Cemetery Administration—operate largely independently from one another. In addition to the operating administrations, several VA procurement organizations have department-wide roles: The Office of Acquisition, Logistics, and Construction (OALC) is a VA headquarters organization responsible for directing the acquisition, logistics, construction, and leasing functions within VA. The Office of Acquisition Operations (OAO), which falls under OALC’s purview, conducts procurement activities for customers across the department and has two primary operating divisions—the Technology Acquisition Center (TAC), which focuses on IT purchasing, and the Strategic Acquisition Center (SAC), which is responsible for procurement of certain types of goods and services for the operating administrations, such as VHA. The Office of Acquisition and Logistics (OAL) is responsible for oversight of contracting across VA, including setting policy and issuing warrants to contracting officers. The National Acquisition Center (NAC) is an OAL contracting organization which serves VHA by providing contracting for certain health care-related goods and services. VHA provides medical care to veterans and is by far the largest administration in VA, with a budget of $61.1 billion for fiscal year 2016, representing the majority of VA’s $75 billion discretionary budget. Its 167 medical centers are currently organized into 19 Veterans Integrated Service Networks (VISN), regional networks that manage some aspects of operations. 
VHA has 19 Network Contracting Offices, each of which serves one of the 19 VISNs. VA has some organizational and programmatic changes in progress that affect procurement. In July 2015, the Secretary of Veterans Affairs announced an organizational transformation for the department called MyVA. In a related effort, responsibility for the medical-surgical prime vendor (MSPV) program—a logistics provider that facilitates ordering and delivery of supplies to medical centers from many different contractors— was recently transferred from NAC to SAC. VA’s Complex Procurement Structure Creates Challenges for Users Given VA procurement’s highly decentralized structure, a given customer—such as a department in a medical center or a program office—may need to work with multiple contracting entities to meet its procurement needs. Figure 2 illustrates the complex working relationship between contracting offices and their customers across VA. This can contribute to confusion. Several of the contracting officials we spoke with stated that they were, at times, uncertain about which contracting office handled what requirements. VA issued a memorandum in 2013 to clarify areas of responsibility for the national contracting organizations, but confusion remains. VA’s Acting Chief Acquisition Officer stated that he is aware of overlap in the functions of some contracting organizations, especially the NAC and the SAC. At one VISN we visited, an official reported procuring one type of high-tech medical equipment through the SAC even though this area is specifically designated as NAC’s responsibility because she expected that the SAC could execute the purchase more quickly. Without clearly delineated organizational roles and customer relationships—beyond what was provided in the 2013 memorandum—the possibility of duplication in these roles and relationships is increased, and customers lack clear guidance on which organization to approach for certain types of procurements. 
In our September 2016 report, we recommended that OALC assess whether additional policy or guidance is needed to clarify the roles of VA’s national contracting organizations. The Acting Chief Acquisition Officer, OALC, said that the department agreed with this recommendation. VA Procurement Policies Are Outdated and Not Always Cohesive and Effectively Communicated Key VA procurement policies are outdated and difficult for contracting officers to use. Standards for Internal Control in the Federal Government state that it is important for an organization’s management to update its policies over time to reflect changing statutes or conditions, and that those policies should be communicated to those who need to implement them. However, many of VA’s regulations and policies are outdated, most notably the VA Acquisition Regulation (VAAR), which has not been updated since 2008. The department has issued a patchwork of policy documents in the interim to fill this gap. VA asks contracting officers to refer to two different versions of the VAAR, one from 1997 and the other from 2008. This causes confusion among contracting officers. In addition, VA communicates interim procurement policies in a number of different forms, some of which can be duplicative. Figure 3 illustrates the numerous sources that contracting officers must turn to for guidance. The sheer volume and number of different forms of communications—many of which are outdated—are confusing and present challenges for contracting officials seeking appropriate guidance. While VA recently fully rescinded the 1997 VAAR after our inquiries, the 2008 version remains out of date. A new revision of the VAAR is also in development, but has faced delays. VA began the process in 2011 but does not plan to finalize the new VAAR, including completing the required rulemaking process, until December 2018. 
The lengthy delay in updating this fundamental source of policy impedes contracting officers’ abilities to effectively carry out their duties. In our September 2016 report, we recommended that VA identify measures to expedite the revision of the VAAR, and take interim steps to clarify its policy framework; the Acting Chief Acquisition Officer, OALC, stated that the department agreed with both of these recommendations. VA Can Improve Its Processes for Medical Supply Purchasing and Identify Other Cost Savings Opportunities VA medical centers use contractors called medical-surgical prime vendors to obtain many of the supplies they use on a daily basis, such as bandages and surgical sutures. Officials known as ordering officers, who work at the medical centers, regularly place orders. In turn, the prime vendor delivers those orders via a local warehouse. The prices for these medical supplies are established by VA national contracts, which typically provide significant discounts over the Federal Supply Schedule prices—an estimated 30 percent on average, according to a senior NAC official. Use of these national contracts is also required by VA policy and regulation. Figure 4 provides an overview of the MSPV process. However, the current MSPV process is confusing and cumbersome. Most orders are placed through the Integrated Funds Distribution Control Point Activity, Accounting and Procurement (IFCAP) system, a decades-old IT system with a text-based interface, which does not include a tool to look up items that are available on the national contracts. For instance, ordering officers must know the exact item number—which is different for each vendor—to enter into IFCAP. The existing tools to look up available national contracts are also cumbersome. 
Along with discounted items on national contracts, the MSPV system also allows ordering officers to buy thousands of items directly from VA’s Federal Supply Schedule contracts, which lack the degree of discounted pricing of the national contracts. Because of the challenges posed by the system, ordering officers in some cases purchase items directly from the Federal Supply Schedules, and might miss opportunities to obtain discounts on the national contracts. Administration of the MSPV program is being transferred from NAC to SAC, and, along with this transfer, VHA and SAC are making changes to the MSPV program in an effort to address the issues discussed above and streamline the process. To support the next generation MSPV, SAC has already awarded new prime vendor contracts and is in the process of awarding the supporting national contracts for individual types of supplies. VHA and SAC also plan to implement a new online ordering interface, developed by a contractor for VHA, which will give ordering officers a more intuitive alternative to the outdated and difficult-to-use IFCAP system. Further, unlike the current system, this new interface will only permit ordering officers to purchase items from a specific catalog of items, not the wider range of Federal Supply Schedule items. VA estimates that this catalog will eventually contain 8,000 to 10,000 items to meet the needs of its medical centers. However, there have been some delays in VHA’s development of supply requirements and SAC’s award of new supply contracts, with only about 1,800 items on national contracts as of July 2016. VA does not anticipate that SAC will be able to award contracts for the full catalog by the time the new MSPV contracts become operational in December 2016. In the interim, SAC and VHA officials stated that they will allow ordering of Federal Supply Schedule items (approximately 4,500) that are not on national contracts, to ease the transition. 
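One way to picture the catalog distinction we recommend the new interface make is a lookup that labels each item by its contract source, so an ordering officer can see at a glance whether an item carries the national-contract discount. The item numbers, prices, and labels below are invented for illustration and are not drawn from any VA system.

```python
# Hypothetical catalog: each item is flagged by its contract source.
CATALOG = {
    "GAUZE-4X4": {"source": "national", "price": 1.10},
    "SUTURE-3-0": {"source": "national", "price": 4.25},
    "GLOVES-NITRILE-M": {"source": "fss", "price": 7.80},
}

LABELS = {
    "national": "national contract (discounted)",
    "fss": "Federal Supply Schedule",
}

def lookup(item_id):
    # Return the item with a human-readable source label, or None if absent.
    item = CATALOG.get(item_id)
    if item is None:
        return None
    return {"item": item_id, "price": item["price"], "source": LABELS[item["source"]]}
```

Surfacing the source label at order time is the design point: it lets the interface steer purchases toward discounted national-contract items while Federal Supply Schedule items remain available during the transition.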
Work remains to ensure that the transition to this new approach will be successful. Updating the MSPV process affects how essential supplies are ordered and delivered at 167 medical centers on a daily basis, and facility logistics staff, including ordering officers, must be able to implement the new approach. VHA has an outreach plan in place, but chief logistics officers at medical centers we visited expressed some concerns about the transition—for instance, one reported that his office’s analysis found 14 items deemed critical to the function of the medical center were not on a preliminary list of supplies available through the new MSPV, nor were acceptable substitutes. If medical centers instead purchase items through their local contracting offices because the new MSPV does not meet their needs, it will undermine the program’s potential to increase efficiency and cost savings. In our September 2016 report, we recommended that VA take steps to facilitate the transition to the new MSPV process, including ensuring that SAC collects data to monitor the use of national contracts in the new system, that SAC and VHA establish achievable time frames for eliminating Federal Supply Schedule items from the MSPV catalog once national contracts are in place, and that the new ordering interface clearly distinguish between items on national contracts and the 4,500 items on the Federal Supply Schedules. The Acting Chief Acquisition Officer, OALC, said that the department agreed with this recommendation. VA’s substantial buying power presents many opportunities for procurement cost savings, but the department has not consistently taken advantage of them. A key aspect of strategic sourcing is consolidating similar requirements to manage them collectively, reaping cost savings and efficiency gains. 
VA has done this successfully in some areas, such as pharmaceuticals, and the planned changes to the MSPV program could result in greater use of discounted national contracts for medical supplies if they are successfully implemented. There are opportunities to better apply strategic sourcing principles at the regional level, as well. Within VHA, each of the 19 VISNs is responsible for a regional network of multiple medical centers and clinics. Individual medical centers within each VISN procure many goods and services separately, despite the fact that their requirements are similar. Consolidating these requirements—such as security services, elevator maintenance, and eyeglasses for patients—can realize both cost savings and greater efficiency in awarding and administering contracts. We found efforts underway to consolidate requirements at the regional level, but local autonomy and limited planning capacity pose obstacles. For instance, one VISN we visited recently began an initiative to consolidate requirements for purchases made by all of its medical centers, especially services. VISN managers explained that they began with the easiest requirements, such as landscaping services and parking administration. They issued a draft memorandum with plans to broaden this approach to most purchases, but medical center staff provided feedback that they preferred their own local contracts and did not want VISN-wide contracts to become the default approach. In our review of 37 selected contracts, we did find several instances of VISN and contracting officials consolidating requirements for greater efficiency and to obtain better pricing. This indicates that consolidating procurement is possible with leadership buy-in, and that there are opportunities to share lessons learned across VISNs. 
Within VHA, in VISNs where there is not a consistent push by local leadership to pursue consolidation, it is challenging for efforts driven by individual departments or contracting personnel to overcome cultural obstacles. To provide the necessary leadership commitment to take advantage of these opportunities, we recommended in our September 2016 report that VHA Procurement and Logistics conduct a review of VISN-level strategic sourcing efforts, identify best practices, and, if needed, issue guidance. The Acting Chief Acquisition Officer, OALC, said that the department agreed with this recommendation. Chairman Coffman, Ranking Member Kuster, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contacts and Staff Acknowledgements If you or your staff have any questions about this statement, please contact Michele Mackin at (202) 512-4841 or MackinM@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to the report on which this testimony is based are Lisa Gardner, Assistant Director; Emily Bond; George Bustamante; Margaret Hettinger; Julia Kennon; Katherine Lenane; Ethan Levy; Teague Lyons; Jean McSween; Sylvia Schatz; Erin Stockdale; and Roxanna Sun. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's September 2016 report, entitled Veterans Affairs Contracting: Improvements in Policies and Processes Could Yield Cost Savings and Efficiency (GAO-16-810). GAO found opportunities for the Department of Veterans Affairs (VA) to improve the efficiency and effectiveness of its multi-billion dollar annual procurement spending in several areas including data systems, procurement policies and oversight, acquisition workforce, and contract management. Shortcomings in VA's recording of procurement data limit its visibility into the full extent of its spending. A recent policy directing that medical-surgical supply orders be captured in VA's procurement system is a step in the right direction, but proper implementation is at risk because procedures are not in place to ensure all obligations are recorded. VA's procurement policy framework is outdated and fragmented. As a result, contracting officers are unclear where to turn for current guidance. VA has been revising its overarching procurement regulation since 2011 but completion is not expected until 2018. Meanwhile, contracting officers must consult two versions of this regulation, as well as other policy-related documents. Clear policies are key to ensuring VA conducts procurements effectively on behalf of veterans. The figure below depicts the various sources of regulations, policy, and guidance. Sources of Veterans Affairs (VA) Procurement Policy as of June 2016 Managing workload is a challenge for VA's contracting officers and their representatives in customer offices. A 2014 directive created contract liaisons at medical centers in part to address this issue, but medical centers have not consistently implemented this initiative, and VA officials have not identified the reasons for uneven implementation. 
VA can improve its procurement processes and achieve cost savings by complying with applicable policy and regulation to obtain available discounts when procuring medical supplies; leveraging its buying power through strategic sourcing; ensuring key documents are included in the contract file, as GAO found that more than a third of the 37 contract files lacked key documents; and ensuring that compliance reviews identify all contract file shortcomings.
Background The concept behind the EZ/EC program originated in Great Britain in 1978 with the inception of the Enterprise Zone program. The main objective of the Enterprise Zone program was to foster an attractive business environment in specific areas where economic growth was lacking. In the United States, some states began to administer similar state Enterprise Zones in the 1980s. In 1993, the federal government established the federal EZ/EC program to help reduce unemployment and revitalize economically distressed areas. The authorizing legislation established the eligibility requirements and the package of grants and tax benefits for the EZ/EC program (table 1). Multiagency teams from HHS, HUD, USDA, and other federal agencies reviewed the applications in Round I, and HUD and USDA issued designations based on the effectiveness of communities’ strategic plans, assurances that the plans would be implemented, and geographic diversity. In Round I, HUD designated a total of 8 urban EZs and 65 urban ECs, and USDA designated 3 rural EZs and 30 rural ECs. HHS provided Round I EZs and ECs with a total of $1 billion in EZ/EC grant funds. EZs and ECs were allowed to use the EZ/EC grants for a broader range of activities than was generally allowed with those types of HHS funds. For instance, EZs and ECs could use funding for “traditional” activities, such as skills training programs for disadvantaged youth or drug and alcohol treatment programs, as well as for additional activities, such as the purchase of land or facilities related to an eligible program or the capitalization of a revolving loan fund. EZs and ECs were also permitted to use grant funds to cover some administrative costs and to change their goals and activities over time, with approval from HUD or USDA. In addition, HUD and USDA expected EZs and ECs to use the EZ/EC grant to leverage additional investment. Businesses operating in EZs and ECs were eligible for a substantial amount of program tax benefits. 
In 1993, the Joint Committee on Taxation estimated that the tax benefits available to businesses in Round I communities would result in a $2.5 billion reduction in tax revenues between 1994 and 1998. In 2000, the committee estimated that the combination of EZ/EC program tax benefits and the Renewal Community tax benefits would reduce tax revenues by a total of $10.9 billion between 2001 and 2010. The tax benefits for ECs expired in 2004, and the tax benefits for all EZs and Renewal Communities are currently set to expire at the end of 2009. Four federal agencies are responsible for administering the program in Round I. Oversight responsibilities for Round I were divided among three agencies, with HHS providing fiscal oversight and HUD and USDA providing program oversight (fig. 1). HHS issued grants to the states, which served as pass-through entities—that is, they distributed funds to individual EZs and ECs. According to their regulations, HUD and USDA are required to evaluate the progress each EZ and EC made on its strategic plan based on information gathered on site visits and on information reported to them by the designated communities. In addition, IRS is responsible for administering the program tax benefits. In assessing the extent of EZ/EC program improvements, it is useful to understand the overall national trends in poverty, unemployment, and economic growth. National trends in these indicators have varied since Round I of the program was established. As shown in table 2, the national poverty and unemployment rates showed improvements (i.e., declines) in 2000 compared with 1990, but both were somewhat higher in 2004. In 1990, Round I EZs and ECs had poverty and unemployment rates that exceeded these national averages, as was required for program eligibility. 
In terms of economic growth, the table shows that the number of businesses increased gradually between 1990 and 2003, and the number of jobs increased from 1990 to 2000 but fell slightly between 2000 and 2003. Round I EZs and ECs Have Used Their Grant Funds to Implement a Wide Range of Program Activities EZs and ECs used most of the program grant funds to implement a wide range of activities to carry out their respective revitalization strategies. In total, as of March 31, 2006, EZs and ECs had used all but 15 percent of the available grants. EZs and ECs implemented a variety of activities, but, in general, focused more on community development than economic opportunity. In addition, all designated communities reported leveraging additional resources, though a lack of reliable data prevented us from determining how much. Several designees also noted other accomplishments, such as increasing local coordination and capacity. The governance structures that Round I EZs and ECs established to implement these activities varied and included organizations to manage the day-to-day operations of the EZs, boards, and advisory committees. Most EZ/EC Grant Funds Have Been Expended, but Many EZs and Some ECs Received Grant Extensions As of March 31, 2006, Round I EZs and ECs had spent all but 15 percent of the program grant funds they received. HHS data show that 20 percent of the program grant funds provided to EZs and 2 percent of the funds provided to ECs were unspent (table 3). In addition, HUD data show that the Cleveland and Los Angeles EZs, which originally received Supplemental EZ designations, had used significant portions of the Economic Development Initiative grants and Section 108 Loan Guarantees that came with their designations. Specifically, each of them had spent slightly more than 70 percent of their grants; Cleveland had used 72 percent of its loan guarantees, but Los Angeles had used less—about 33 percent. 
Most of the remaining $151 million in EZ/EC grants consists of the funds of four urban EZs: Atlanta, New York, Philadelphia-Camden, and Chicago, with Atlanta and New York accounting for the majority of the unspent funds (fig. 2). When the Atlanta EZ received a Renewal Community designation from HUD in 2002, the EZ designation was terminated, but HHS allowed the city of Atlanta to continue spending its remaining EZ grant funds through December 2009. The city of Atlanta elected to administer its remaining EZ grants in conjunction with its Renewal Community initiative, and prepared a strategic plan to address administration of both the remaining HHS funds and the HUD-designated Renewal Community. Atlanta Renewal Community officials told us that they did not use the EZ funds for about 4 years after receiving the designation because of the time required for start-up but added that they planned to begin using the funds soon. The New York EZ received matching funds from both the state and city governments, for a total of $300 million. New York EZ officials stated that they used equal parts of funding from these three sources for each activity, potentially explaining why they had drawn down funds at a slower rate than other EZs. Although the grant period for Round I EZs and ECs was originally scheduled to end December 21, 2004, several EZs and some ECs received extensions from HHS to continue drawing down their remaining funds. The recipients had to demonstrate a legitimate need to complete project activities outlined in their strategic plans. Eight of the 11 EZs (6 urban and 2 rural) and 17 of the 95 ECs (11 urban and 6 rural) received extensions of their grants until December 31, 2009. In addition, 1 urban EZ and 9 ECs (6 urban and 3 rural) received extensions for a shorter time frame, such as 2005, 2006, or 2007. 
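The spending figures above can be cross-checked with simple arithmetic. The sketch below is illustrative only: it assumes the $1 billion total in Round I EZ/EC grants noted earlier, the unspent shares of 20 percent (EZs) and 2 percent (ECs) reported by HHS, and the roughly $151 million in remaining funds, and solves for the split of grant dollars between EZs and ECs that those figures jointly imply.

```python
# Back-of-envelope consistency check of the reported spending figures.
# All dollar amounts are in millions; the $1 billion total, 20%/2% unspent
# shares, and $151 million remaining come from the report text above.

TOTAL = 1000.0                  # total Round I EZ/EC grant funds
UNSPENT_TOTAL = 151.0           # remaining (unspent) funds
EZ_RATE, EC_RATE = 0.20, 0.02   # unspent shares for EZs and ECs per HHS data

# Solve EZ_RATE*ez + EC_RATE*(TOTAL - ez) = UNSPENT_TOTAL for the implied split.
ez = (UNSPENT_TOTAL - EC_RATE * TOTAL) / (EZ_RATE - EC_RATE)
ec = TOTAL - ez

print(f"implied EZ grants: ${ez:.0f}M, EC grants: ${ec:.0f}M")
# -> implied EZ grants: $728M, EC grants: $272M
print(f"overall unspent share: {UNSPENT_TOTAL / TOTAL:.0%}")
# -> overall unspent share: 15%
```

The implied split (roughly $728 million to EZs and $272 million to ECs) is internally consistent with the "all but 15 percent" figure, suggesting the three reported percentages fit together.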
EZs and ECs Implemented a Wide Variety of Activities, Most Related to Community Development The designated communities were encouraged to implement both community and economic development activities as part of their revitalization strategies. The EZ/EC program was designed to be tailored to address local needs, and the type of grant funds most EZs and ECs received from HHS allowed them to implement a wide range of activities. Overall, both EZs and ECs used the program grants to implement a larger number of community development activities—such as education, health care, and infrastructure—than economic opportunity activities—such as workforce development and providing assistance to businesses (fig. 3). The activities most often implemented by urban EZs and ECs were workforce development, human services, education, and assistance to businesses, which accounted for more than 50 percent of the activities in urban EZs and 60 percent of the activities in urban ECs (fig. 4). For example, the Baltimore EZ implemented a customized training program that provided EZ residents with individualized training and a stipend during the training period. In the Bronx portion of the New York EZ, stakeholders explained that they had funded an organization that trained women to become child care providers, a program that not only provided job skills and employment opportunities but also improved the availability of child care in the area. In addition, the Atlanta EZ and the Camden portion of the Philadelphia-Camden EZ implemented educational programs for EZ youth, such as after-school or summer programs. Also, stakeholders from the Upper Manhattan portion of the New York EZ mentioned contributing financial assistance to the business development of the Harlem USA project, a 275,000-square-foot retail development located in the EZ. 
Moreover, stakeholders from the Providence EC said they provided grants to a nonprofit that offered job training to youth and business development programs, such as “business incubators” that offered office space and technical assistance to new small businesses. Rural EZs and ECs implemented many of the same types of activities as urban designees, such as business development and job training, but often included activities related to health care and public infrastructure. For example, stakeholders from the Kentucky Highlands and Mid-Delta Mississippi EZs said that they had attracted businesses to the areas using EZ loans, grants, or tax benefits, and stakeholders from the Rio Grande Valley EZ reported funding job training for EZ residents. In addition, stakeholders from Kentucky Highlands said the EZ purchased ambulances for an area that previously did not have those services. All three rural EZs reported using the EZ/EC grant to improve the water or sewerage infrastructure in their EZs, which some said was needed to foster additional economic development. Finally, stakeholders from the Fayette-Haywood EC reported having implemented several activities related to health care, such as recruiting doctors and providing funding to reopen a clinic that had been closed for several years. For more information on the types of activities implemented by the individual communities we visited, see appendix IV. EZs and ECs Used Program Grants to Leverage Additional Funds, but Reliable Data on the Extent of Leveraging Are Not Available HUD and USDA also expected designees to use their grants to leverage additional investment. Stakeholders from all EZs and ECs we visited and all EC survey respondents reported having used their EZ/EC grants to leverage other resources, including both monetary and in-kind donations. EZs and ECs developed different policies that may have affected the extent to which they leveraged funds. 
For example, the Mid-Delta EZ required that direct grant recipients obtain at least 65 percent of their funding from other sources. Some other communities, such as the Atlanta EZ, did not have similar requirements for subgrantees, although in some cases subgrantees did leverage funds on their own initiative. EC survey respondents reported using the EZ/EC grants to leverage additional resources for capital improvements, social services, and funding for businesses, among other things. Some EC survey respondents also mentioned that the designation had helped them to leverage funds to implement additional programs or to expand EC programs. All EZs and ECs that provided us with a definition of leveraging said that they included all non-EZ/EC grant funds that were used in EZ/EC-funded programs. But only two of the four EZs that used the program tax-exempt bond included the amount of the bonds in their total leveraged funds. In addition, some EZs reported as leveraged funds other investments made in the EZ area, aside from those directly funded with the EZ/EC grant funds, although other designated communities did not. For example, the Baltimore EZ included all business investments made subsequent to infrastructure improvements the EZ made to an industrial park. USDA encouraged rural EZs and ECs to report all investment in the EZ as leveraged funds, not only those projects that received EZ/EC funds. For example, at USDA’s instruction, the Fayette-Haywood EC included funding from other USDA programs operating in the EC, even when EC funds were not involved. However, not all rural sites used this broad definition of leveraging. Similarly, at one HUD official’s instruction, the Cleveland EZ included as leveraged funds other investments made within the EZ, such as city Community Development Block Grant funds invested in the area. 
However, there was no written guidance telling the Cleveland EZ to include other investments, and it no longer includes these other investments as leveraged funds in performance reports. Although communities reported using the EZ/EC grants to leverage additional resources, we could not verify the actual amounts. HUD’s and USDA’s performance reporting systems include information on the amount of funds leveraged for each activity, but for the sample of activities we reviewed, either supporting documentation showed an amount conflicting with the reported amount or documentation could not be found. In addition, the definition of “leveraged” varied across sites, as the federal agencies did not provide EZs and ECs with a consistent definition of what leveraged funds should include. As a result, designated communities included different types of funds in the amounts they reported as leveraged. Designees Reported Other Accomplishments In addition to the activities that were implemented, EZ and EC stakeholders with whom we spoke mentioned other accomplishments that were not as easy to quantify and report in the performance systems. For example, one of the aims of the EZ/EC program was to increase collaboration among local governments, nonprofits, community members, and the business community. Stakeholders from several sites we visited commented on how the designation facilitated increased collaboration among different groups of people and organizations. For instance, several stakeholders from the Rio Grande Valley EZ noted the value of having different communities and people work together, something that had not happened prior to the EZ/EC program. Several EC survey respondents also mentioned the importance of collaboration and partnerships in carrying out the EC program. Stakeholders from some sites we visited mentioned that the EZ/EC program had helped to empower local residents by giving them a better understanding of how government worked. 
In addition, stakeholders from some EZs said that the EZ/EC program had helped to build the capacity of local organizations. In Cleveland, local stakeholders said that the funding provided by the EZ had helped increase the organizational capacity of four local community development corporations and that participation in the governance of the EZ helped to foster communication between the groups. Designees Reported Implementation Challenges EZ stakeholders also mentioned some issues that had made implementing the EZ/EC program more challenging. Stakeholders from some EZs noted that an initial lack of experience or expertise on the part of EZ officials had made it difficult to implement the program. In addition, stakeholders from the Camden portion of the Philadelphia-Camden EZ and the Rio Grande Valley EZ said that local subgrantee organizations generally had a low level of organizational capacity, which sometimes made it difficult to choose qualified applicants to implement EZ programs. Stakeholders from several sites also said that it was difficult to manage the expectations both of the EZ community and of residents and businesses that were not located in the zones and were not eligible for EZ/EC program benefits, especially when the individuals and businesses were located just across the street from the designated area. EZs and ECs Established a Variety of Governance Structures and Encouraged Community Participation In addition to choosing the activities that their EZs or ECs implemented, designated communities were permitted to determine the structure they would use to govern and operate the program. Generally, these structures included an EZ/EC management entity—either a nonprofit organization or an entity that was part of the local government. Two urban EZs—New York and Philadelphia-Camden—were each divided into two separate entities, managed by different types of organizations, that split the $100 million EZ grant. 
In the Philadelphia-Camden EZ, for example, the Philadelphia portion was run by the city of Philadelphia and the Camden portion by a nonprofit organization. All designees had at least one board, and, in some cases, EZs included community advisory groups or separate “subzone” boards, which represented specific areas of the EZ in their governance structures. All three rural EZ boards made decisions about EZ activities without the direct involvement of local government entities. However, the extent of government involvement in urban EZ boards varied, regardless of whether the EZ was managed by a nonprofit or local government organization (fig. 5). For example, in two EZs, Cleveland and Chicago, local government had extensive control of the program, but in other EZs, such as Detroit, the board of the nonprofit organization that managed the EZ shared partial decision-making authority with the mayor and city council. Other EZs were operated with minimal local government involvement, with the boards determining which activities to implement, allocating resources, and deciding which entities would implement the programs. Appendix IV provides more details on the governance structures of the EZs we visited. Another program expectation was to encourage community participation within the designated communities. Regardless of the type of governance structure they used, EZs and ECs involved community participants in the planning and carrying out of program activities. According to stakeholders from all the EZs and the ECs we visited, residents were involved in meetings such as “visioning sessions” and town hall gatherings during the strategic planning process. Community groups, such as local colleges and universities, development corporations, and businesses, were also involved prior to designation. In addition, 56 out of 58 ECs responding to our survey reported that EC residents attended listening sessions, generated ideas for activities, or helped to establish priorities. 
Respondents also indicated that a variety of other groups participated in the strategic planning process for the ECs, including local government officials and representatives from community-based organizations. After designation, stakeholders from the EZs and ECs we visited said that residents often served on boards, and some stakeholders noted they relied on the boards to capture a wide range of viewpoints. Most EZs and ECs we visited also included as participants business representatives, officials from nonprofits, and clergy, among others. Some EZs and ECs also included residents from specific neighborhoods within the designated area or individuals with special expertise, such as in the areas of health care and housing. Oversight Was Hindered by Limited Program Data and Variation in Monitoring According to our federal standards, federal agencies should oversee the use of public resources and ensure that ongoing monitoring occurs. However, HHS, HUD, and USDA did not collect data on how program funds were spent. In addition, HHS did not provide the states, EZs, and ECs with clear guidance on how to monitor the program grant funds, and the types and extent of monitoring performed by state and local participants varied. The lack of reporting requirements may be related to the program’s design, which was intended to give communities flexibility in using program funds and relied on multiple agencies for oversight. However, these limitations have hindered the agencies’ efforts to determine whether the public resources are being used effectively and program goals are met. Federal Agencies Are Required to Oversee the Use of Public Funds and Provide Ongoing Monitoring According to federal standards established in the Standards for Internal Control in the Federal Government, program managers need both program and fiscal data to determine whether public resources are being used effectively and program goals are being met. 
In the case of the EZ/EC program, fiscal data would include not only the aggregate amount of program grant funding designated communities spent, but also data on the amount of funds spent on specific types of activities. Program data would include descriptions of the activities implemented and program outputs, such as the number of individuals trained in a job training program. The standards also state that federal agencies should ensure that ongoing monitoring occurs in the course of normal operations. For instance, the federal agencies should provide guidelines on what monitoring should occur, including whether on-site reviews or reporting are required. For the EZ/EC program, HHS regulations require states, EZs, and ECs to maintain fiscal control of program funds and accounting procedures sufficient to enable them to prepare reports and ensure the funds were not used in violation of the applicable statute. The Federal Agencies’ Oversight Efforts Had Shortcomings in Data Collection None of the federal agencies collected data showing how program funds had been spent. As we have noted, the EZ/EC grants were special Social Services Block Grants that gave recipients expanded flexibility in using the funds. The regulations for most grants of this type require states to report on, among other things, the amount of funding spent on each type of activity. However, because HHS did not require this level of reporting for the EZ/EC program, the agency’s data show how much of each grant was used but not how much was spent on specific activities or types of activities. Further, HHS’s data sometimes do not show how much of the grant a specific EC used, since states could aggregate drawdowns for multiple communities. For example, there are five urban ECs in Texas, but the data reported to HHS show only the aggregate amount of funds these ECs used, not the amount used by each. 
Similarly, although HUD’s and USDA’s reporting systems contained some information on the amount of EZ/EC grants budgeted for specific activities, the systems did not account for the amounts actually spent on those activities. Moreover, we found that the data on the amount of EZ/EC grant funding were often not reliable, as some EZs and ECs reported budgeted amounts and others reported actual amounts spent. Further, in our assessments of the reliability of these data, we found documentation showing that the designated communities had undertaken certain activities with program funding, but we were often unable to find documentation of the actual amounts allocated or expended. Program Monitoring by State and Local Participants Varied Although HHS regulations require states, EZs, and ECs to maintain fiscal control of program grant funds, the agency also did not provide guidance detailing the steps state and local authorities should take to monitor the program. In the absence of clear guidance, the type and level of monitoring conducted at the state and local levels varied. For example, some state and EZ/EC officials applied guidelines from other programs, such as the Community Development Block Grant program, or developed their own policies. Officials from almost all states we interviewed said they reviewed audits of the EZs and ECs and were required to submit aggregate data to HHS, and most had performed site visits at least once during the program. State officials also said they reviewed requests to draw down grant funds, approving expenditures if the requests met the goals outlined in the strategic plans. However, most states did not maintain records showing the types of activities designated communities undertook. Some states said that they had taken corrective actions, such as withholding payments when designated communities had not properly reported how funds were used. 
However, only a few states also completed program monitoring activities, such as reviewing whether a project took place or benefited EZ or EC residents, in conjunction with their fiscal reviews. Most of the EZs and ECs we visited conducted on-site monitoring of subgrantees and reviewed their financial and performance data, and some communities required annual audits of their subgrantees. For example, the Rio Grande Valley EZ assigned a program staff member to monitor each subgrantee activity and required annual audits. In contrast, the Fayette-Haywood EC did not perform any site visits and relied on other funding organizations to monitor subgrantees. Some instances of misuse of program funds did occur during the EZ/EC program. For example, officials at the Mid-Delta EZ reported two cases of embezzlement by EZ personnel. According to an EZ official, in one case that was discovered through an independent audit, an individual was prosecuted for embezzling $28,000 in 1996 (only $1,800 was recouped). The second case, involving $31,000 embezzled by two EZ staff and discovered when the staff turned themselves in, is currently part of a broader joint State of Mississippi and FBI investigation into misuse of EZ funds dating as far back as 1996. In addition, three audits by the state of Georgia found that almost all the administrative funds designated for the Atlanta EZ ($4 million) had been used in the first 3½ years of the program, including approximately $44,000 used for questionable costs related to personnel and travel expenditures. To address this issue, the Atlanta EZ repaid some of the costs in question, provided additional documentation, and instituted better recordkeeping procedures. The city of Atlanta also initiated a restructuring of the EZ and fired the majority of EZ staff. 
Limitations in EZ/EC Oversight May Have Resulted from the Program Design As discussed earlier, the EZ/EC program was designed to give the designated communities increased flexibility in deciding how to use program funds and used states as pass-through entities for providing funds. Part of the philosophy behind the program was to relieve states and localities of the burden of excessive reporting requirements. Furthermore, no single federal agency had sole responsibility for oversight of Round I of the EZ/EC program, although federal standards require that agencies provide adequate oversight over public resources. In the beginning, the agencies made some efforts to share information, but these efforts were not maintained. For example, HUD officials said that they had received fiscal data from HHS and reconciled that information with their program data on the activities implemented in the early years of the program. According to HUD, the agency made additional attempts to obtain data from HHS but only recently received a report. An HHS official said the agency no longer regularly shared detailed data with HUD and USDA, which the official said was likely due to a lack of program staff. These limitations do not necessarily apply to Rounds II and III of the EZ/EC program. For example, both fiscal and program oversight of the urban and rural EZs and ECs were provided directly through HUD and USDA in Round II because the program funding came directly through HUD and USDA appropriations. Officials from both agencies explained that information on the activity for which funds were used was linked to each drawdown of program funds. In addition, a HUD official said they had issued improved monitoring guidance in Round II, since designees receive funds directly from HUD. However, a USDA official said that they provided similar monitoring guidance to designees in Rounds I, II, and III. 
Because this report focuses on Round I of the program, we did not determine the effectiveness of the oversight of future rounds of the program. Lack of Detailed Tax Data Made It Difficult to Assess the Use of Program Tax Benefits A lack of detailed tax data limited our ability to assess the extent to which businesses in the EZs and ECs used program tax benefits. We have previously reported that information on tax expenditures should be collected to ensure that these expenditures are achieving their intended purpose. IRS collects data on the use of some of the program tax benefits, but not all of them, and none of the data can be linked to the individual communities where the benefits were claimed. We also recommended that HUD, USDA, and IRS work together to identify the data needed to measure the use of EZ/EC tax benefits and the cost-effectiveness of collecting the information, but the three agencies did not reach agreement on a cost-effective approach. Officials from some EZs and ECs reported that some of the tax benefits were being used, but this information was not sufficient to allow us to determine the actual extent of usage. IRS Data on the Use of Program Tax Benefits Are Limited Previously, we have noted that information on tax expenditures should be collected in order to evaluate their effectiveness as a means of accomplishing federal objectives and to ensure that they are achieving their intended purpose. Inadequate or missing data can impede such studies, especially given the difficulties in quantifying the benefits of tax expenditures. Nevertheless, we have stated that the nation’s current and projected fiscal imbalance serves to reinforce the importance of engaging in such evaluations. However, as described in our 2004 report, the IRS collects limited data on the EZ/EC tax benefits. It does not collect data on benefits used in individual designated sites and for some benefits it does not have any data. 
For example, IRS collects some information on EZ businesses’ use of tax credits for employing EZ residents. However, the data cannot be separated to show how much was claimed in individual EZs. In addition, IRS does not have data on the use of the increased expensing deduction for depreciable property, because taxpayers do not report this benefit as a separate line item on their returns. The lack of data on the use of program tax benefits is consistent with data challenges we have cited in prior reports on similar community and economic development programs, such as the Liberty Zone program. Our 2004 report recommended that HUD, IRS, and USDA collaborate to identify a cost-effective means of collecting the data needed to assess the use of the tax benefits. In response, HUD, IRS, and USDA identified two methods for collecting the information—through a national survey or by modifying the tax forms. However, the three agencies did not reach agreement on a cost-effective method for collecting additional data. Given the lack of information at the federal level, we, some EZs, and other researchers have tried to assess the use of EZ/EC tax benefits by surveying businesses. However, these surveys have had low response rates and a high number of undeliverable surveys, suggesting that the results might not be representative. Previous reports cited several reasons for the low response rates, including the difficulty of locating someone at a business who knew whether the tax benefit had been claimed and complications arising from multiple business locations. In addition, some EZ officials said that businesses were not willing to share their tax information. Further, a high rate of small business closures contributed to the high number of undeliverable surveys. We initiated a survey of businesses as part of the audit work for this engagement, but discontinued the survey due to a low response rate. 
In the absence of other data, we relied on testimonial information to assess how often the EZ tax benefits were used and who used them. Although stakeholders from all EZs told us that they did not have any data on the extent to which EZ businesses had used program tax benefits, they provided us with some information that was consistent with the findings of past studies. For example, during our site visits, EZ stakeholders told us that they believed large businesses, which tend to use tax professionals who know and understand the benefits, were more likely to use the tax benefits than small businesses. They also noted that small businesses were less likely to make enough in profits to take advantage of the tax benefits. The stakeholders stated further that the credit for employing EZ residents was the most frequently used of the three original tax benefits. A few EZ officials commented that retail businesses were more likely to use the employment credit and manufacturing businesses were more likely to use the increased expensing deduction. Stakeholders from only 4 of the 11 EZs and 2 of the 58 ECs that responded to our EC survey told us that the tax-exempt bond benefit had been used in their communities. EZ stakeholders and EC survey respondents cited a variety of reasons that the tax-exempt bond financing had not been more widely used. For instance, some said that the bonds were not used because of the availability of the Industrial Development Revenue Bond, which EZ stakeholders explained had fewer restrictions and could be issued for larger amounts. In addition, some EZ stakeholders and one EC survey respondent said that it was difficult to find a large pool of qualified EZ residents to satisfy the employment requirement for the bond, which required that at least 35 percent of the workforce be EZ residents. 
Some EZ stakeholders also told us that the legal fees for an EZ bond were higher than for other types of bonds because the restrictions made the EZ bond more complex. For this reason, stakeholders explained, the cost of issuing the EZ bond was high relative to the bond cap, particularly early in the program. Finally, some EC survey respondents noted other reasons for not using the bond, such as the complicated nature of the bond or a lack of interested businesses or viable projects. IRS Officials Reported that They Have Data Sufficient to Enforce the Tax Code, but This Information Is Insufficient for Assessing the Extent of Usage IRS officials said that the limited data the agency collected did not affect its ability to enforce compliance with the tax code. They told us that IRS’s role is to administer tax laws and said that collecting more comprehensive data on the use of program tax benefits would not help the agency to achieve this objective. Further, they said that they allocate their resources based on the potential effect of abuse on federal revenue and noted that these tax benefits are not considered high risk, since the amount claimed is small compared with revenues collected from other tax provisions or the amount of potential losses from abusive tax schemes. Furthermore, both IRS officials and our previous reports have suggested that IRS generally does not collect information on the frequency of use or types of businesses claiming tax benefits unless legislatively mandated to do so. Although the total program tax benefits were estimated to be much larger than the federal grant funding—over $2.5 billion compared with the $1 billion in EZ/EC grants—we do not, as we have noted, know the actual amount of tax benefits claimed by Round I EZs and ECs nationwide or the amounts used in individual communities. As a result, we could not assess differences in the rates of usage among the designated communities. 
Although we understand IRS’s concerns, the lack of data is likely to become increasingly problematic because future rounds of the EZ/EC program and the Renewal Community program rely heavily on tax benefits to achieve revitalization goals. It may also be a concern with the Gulf Opportunity Zone Act, which provides tax benefits in counties and parishes affected by the 2005 Gulf Coast hurricanes.

In Aggregate, EZs and ECs Showed Some Improvements, but Our Analysis Did Not Definitively Link These Changes to the Program

Although EZs and ECs showed some improvements in poverty, unemployment, and economic growth, we did not find a definitive connection between these changes and the EZ/EC program. As mentioned in our previous report, measuring the effect of initiatives such as the EZ/EC program is difficult for a number of reasons, such as data limitations and the difficulty of determining what would have happened in the absence of the program. In some cases, communities saw decreases in poverty and unemployment and increases in economic growth, but we could not conclusively determine whether these changes were a response to the EZ/EC program or to other economic conditions. EZ stakeholders and EC survey respondents said that program-related factors had influenced changes in their communities but that other unrelated factors also had an effect. Although the overall effects of the EZ/EC program remain unclear, having data on the use of program grants and tax benefits would have allowed for a richer assessment of the program.

A Number of Challenges Affected Our Efforts to Measure the Effects of the EZ/EC Program

We attempted to assess the effects of the program on four indicators: poverty, unemployment, and two measures of economic growth—the number of businesses and the number of jobs.
Although we used several quantitative and qualitative methods, including an econometric analysis to try to isolate the EZ/EC program’s effect, we could not differentiate between the effects of the program and other factors. Among the challenges we encountered were the following:

- A lack of adequate data on the use of program benefits. As mentioned earlier, data on the use of EZ/EC grant funds and tax benefits were very limited.

- Limited demographic data. We used poverty and unemployment data from the 1990 and 2000 censuses, but these dates do not correspond well to the program dates, as communities were designated in 1994 and in some cases are still operating.

- Demonstrating what would have happened in the absence of the program. For example, we attempted to identify comparison areas that did not receive EZ or EC designations and that reflected similar community characteristics of EZs and ECs. However, the designated communities sometimes had the highest poverty levels in the area, making it difficult to find exact matches among nearby census tracts.

- Accounting for the spillover effects of the program to other areas, the effects of similar public and private programs, and the effects of regional and local economic trends.

- Accounting for bias in the choice of program areas. For example, if program officials tended to pick census tracts that were already experiencing gentrification prior to 1994, we may be overstating the effect of the EZ designation. Conversely, if officials tended to choose census tracts that were experiencing economic declines prior to 1994, such as areas affected by the loss of major employers, we may be understating the program’s impact.

Several program-specific factors also limited our ability to assess the effects of the program. First, the program was designed to be tailored to the local sites, and each community was given broad latitude to determine its own needs and the program activities it thought would address those needs.
Thus, each designee may or may not have selected program activities that directly related to the three factors—poverty, unemployment, and economic growth—mandated for our evaluation. Second, the time frame of actual program implementation may have varied among the designees. For instance, some EZ stakeholders mentioned that their programs took 2 or 3 years to get started, while others were able to begin drawing down funds in the first year. Third, the nature of the EZ/EC program, which focuses on changes in geographic areas rather than on individuals, makes it difficult to determine how the program affected residents who lived in an EZ/EC in 1994 but later moved. Stakeholders from most of the EZs and ECs we visited said that residents were moving out of the designated areas, often after finding a job. If true, this phenomenon may have masked some of the program’s effects on poverty and unemployment, since these individuals would not be captured in the 2000 data.

In Some Cases, EZs and ECs Showed Improvements in Poverty, Unemployment, and Economic Growth

Some EZs and ECs saw improvements in poverty, unemployment, and economic growth. Four of the 11 EZs—Cleveland, Detroit, Philadelphia-Camden, and Kentucky Highlands—showed improvements in both poverty and unemployment between 1990 and 2000 and at least one measure of economic growth between 1995 and 2004 (fig. 6). Some ECs also experienced similar improvements. For example, 25 out of 95 ECs saw positive changes in poverty and unemployment and at least one measure of economic growth. None of the EZs and ECs experienced negative changes in all three indicators, but many experienced negative changes in at least one. For instance, the Atlanta EZ experienced negative changes in unemployment and both measures of economic growth. However, the extent of these changes varied, particularly in our two measures of economic growth.
For those EZs that saw improvements in the number of jobs, the increases ranged from a low of 2.6 percent in the Philadelphia-Camden EZ to a high of 67.8 percent in the Kentucky Highlands EZ. Of those EZs that saw decreases in the number of businesses, the decreases ranged from 2.7 percent in the Detroit EZ to 20.8 percent in the Atlanta EZ.

Most EZs and ECs Saw Some Decrease in the Poverty Rate, but These Changes Could Not Be Tied Definitively to the EZ/EC Program

In most of the 11 EZs and 95 ECs, both urban and rural, poverty rates fell between 1990 and 2000 (fig. 7). Most communities experienced statistically significant decreases in the poverty rate that ranged from 2.6 to 14.6 percentage points. Specifically, our analysis showed the following:

- Almost all urban EZs experienced significant decreases, ranging from a low of 4.1 percentage points in the New York EZ to 10.9 percentage points in the Detroit EZ.

- All three rural EZs showed significant decreases—7.3 percentage points in the Rio Grande Valley EZ, 10.1 percentage points in the Kentucky Highlands EZ, and 10.7 percentage points in the Mid-Delta EZ.

- Forty-four of the 65 urban ECs also saw significant decreases in poverty, with declines ranging from 2.6 percentage points in the Boston, Massachusetts EC to 14.6 percentage points in the Minneapolis, Minnesota EC.

- Most rural ECs saw significant decreases, ranging from 3.4 percentage points in the Imperial County, California EC to 12.2 percentage points in the Eastern Arkansas EC.

We also compared changes in poverty in designated areas and comparison areas and across urban and rural communities for both EZs and ECs. Our analysis showed the following:

- When combining urban and rural areas, the poverty rate in the designated areas fell more than in the comparison areas—5.4 percentage points overall, compared with 3.9 percentage points in the comparison areas (fig. 8).
- Rural designees experienced a larger significant decrease in poverty than urban designees—7.2 and 5 percentage points, respectively.

- Urban and rural EZs experienced greater decreases in poverty than both their comparison areas and the ECs.

Because we could not separate the program’s effects from other factors in these analyses, we developed an econometric model for the eight urban EZs and their comparison areas that considered a variety of factors related to the poverty rate. Among the nonprogram factors we considered were the high school dropout rate, the share of households headed by females, and the share of vacant housing units, as reported in the 1990 Census. Our models indicated that the poverty rate in the comparison areas fell slightly more than in the EZs themselves (app. II). This result did not demonstrate that the declines in poverty in the EZs were directly associated with the EZ program. Finally, we interviewed EZ stakeholders and surveyed EC officials to determine their views of the effects of the EZ/EC program on their communities. Their responses were consistent with the inconclusive results of our other analyses: in general, they believed that both the EZ/EC program and additional factors had affected the prevalence of poverty in their communities. Some EZ and EC stakeholders said that the EZ/EC designation and program activities had addressed poverty by bringing in jobs and helping to stabilize the area. For instance, stakeholders from several EZs, including the Chicago, Mid-Delta, and Kentucky Highlands EZs, mentioned the role of the EZ in job creation. In addition, stakeholders from other EZs, such as Detroit and Rio Grande Valley, mentioned the role of EZ programs that were related to housing. EC survey respondents commented that the EC designation gave them the opportunity to focus on initiatives that could reduce poverty in the area, such as job creation, infrastructure and physical improvements, and housing.
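The designated-versus-comparison analysis described above amounts to a difference-in-differences calculation: the change in the poverty rate in designated tracts is compared with the change in matched comparison tracts, and only the excess decline is attributed to the program. The sketch below illustrates the arithmetic with synthetic tract values; the tract names and all numbers are hypothetical, not GAO data.

```python
# Illustrative difference-in-differences sketch of the comparison described
# above: change in poverty rate in designated tracts vs. matched comparison
# tracts between the 1990 and 2000 censuses. All values are synthetic.

# (tract, is_designated, poverty_1990, poverty_2000) -- hypothetical tracts
tracts = [
    ("ez_a",   True,  48.0, 42.1),
    ("ez_b",   True,  51.5, 46.8),
    ("comp_a", False, 44.0, 40.2),
    ("comp_b", False, 46.5, 42.4),
]

def mean_change(rows, designated):
    """Average 1990-to-2000 change in the poverty rate for one group."""
    changes = [p2000 - p1990
               for _, d, p1990, p2000 in rows if d == designated]
    return sum(changes) / len(changes)

ez_change = mean_change(tracts, True)     # average change in designated tracts
comp_change = mean_change(tracts, False)  # average change in comparison tracts

# The difference-in-differences estimate attributes to the program only the
# part of the designated-area decline that exceeds the comparison-area decline.
did = ez_change - comp_change
print(round(ez_change, 2), round(comp_change, 2), round(did, 2))
```

With these synthetic values the designated tracts fall more than the comparison tracts, loosely mirroring the report's aggregate finding, but as the report notes, this descriptive contrast alone cannot separate the program's effect from other factors.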
However, EZ and EC stakeholders also mentioned external factors that may have affected the changes in poverty, such as gentrification and changes in the local population as original residents moved away. In addition, stakeholders from three EZs mentioned the positive effects of changes to welfare policy during the EZ/EC program. In ECs where our data showed that the poverty rate fell, some EC survey respondents also mentioned an increase in the availability of social services as a contributing factor. At EZs where stakeholders had mixed opinions on the changes in poverty, some cited a loss of industry or shifts in the national economy. Of the three EC survey respondents in areas where poverty either remained the same or increased, respondents mentioned the decrease in the number of jobs, the increase in housing and utility costs, and the out-migration of residents with middle or high incomes.

Decreases in the Unemployment Rate in Some Communities Also Could Not Be Definitively Tied to the EZ/EC Program

As we did for the poverty rate, we analyzed changes in the unemployment rate in EZs and ECs, using the same quantitative and qualitative methods. We found an overall decline in unemployment across communities, but once again we could not tie the decrease definitively to the program’s presence. Further, fewer than half of the individual EZs and ECs experienced a decrease in unemployment (fig. 9), with declines ranging from 1.5 to 11.7 percentage points, and a number saw significant increases—up to 6.5 percentage points. Many communities did not experience a significant change. Specifically, our analysis showed the following:

- Four of the eight urban EZs saw unemployment fall, with declines ranging from 2.9 percentage points in the Philadelphia-Camden EZ to 10 percentage points in the Cleveland EZ. Two of the EZs saw unemployment rise—2 percentage points in New York and 6 percentage points in Atlanta—and two did not see a statistically significant change.
- Changes in the unemployment rates of the rural EZs were also mixed. For example, unemployment in the Kentucky Highlands EZ fell 2 percentage points, but it rose 3.1 percentage points in the Mid-Delta EZ and did not change significantly in the Rio Grande Valley EZ.

- Twenty-seven, or fewer than half, of the 65 urban ECs saw significant decreases, ranging from 1.5 percentage points (San Diego, California) to 8.7 percentage points (Flint, Michigan). Eleven saw a significant increase of between 2.1 percentage points (Rochester, New York) and 6.5 percentage points (Charlotte, North Carolina), while 27 did not experience a significant change.

- Almost half of the rural ECs saw significant decreases, with declines ranging from 2.7 percentage points (Fayette-Haywood, Tennessee) to 11.7 percentage points (Lake County, Michigan). The unemployment rate remained about the same in 12 rural ECs, but 4 showed increases of between 2.8 percentage points (Williamsburg-Lake City, South Carolina) and 3.5 percentage points (Central Savannah River Area, Georgia).

Our analysis also looked at changes in unemployment across urban and rural communities and compared changes in designated areas and comparison areas for both EZs and ECs. The analysis showed the following results:

- The designated areas saw a statistically significant decrease in unemployment of 1.4 percentage points, compared with a decrease of just under 1 percentage point in the comparison areas (fig. 10).

- In general, rural designees saw unemployment fall more than urban designees, although these differences were not as marked as those we identified in our analysis of the changes in poverty.

- Urban EZs and ECs saw a greater decrease in unemployment than their comparison areas, where the rates did not show a statistically significant change.

- Unemployment in rural EZs and their comparison areas remained about the same, while rural ECs and their comparison areas both experienced a significant decrease of about 2 percentage points.
Although our analyses of changes again showed that EZs experienced a larger decrease in unemployment than the comparison areas, these analyses did not separate the effect of the program from other factors. We again used an econometric model for the eight urban EZs that considered other factors, such as average household income and the share of individuals with a high school diploma, as reported in the 1990 Census. This analysis showed that the EZs experienced a decrease that was slightly greater than in the comparison areas, but the difference was not statistically significant (app. II). We also looked at the observations of the EZ stakeholders we interviewed and the responses to our EC survey. Once again, stakeholders generally saw both program and external factors as affecting the changes in unemployment. Some EZ stakeholders cited EZ programs—such as providing financial assistance to EZ businesses, fostering job creation, and offering job training—as helping to reduce unemployment. For example, the Upper Manhattan and Bronx portions of the New York EZ and the Chicago EZ required subgrantees and borrowers to create a certain number of jobs based on the size of the EZ grant or loan received. Similarly, EC survey respondents also mentioned the EC’s involvement in creating jobs, attracting new businesses, and offering loans and technical assistance to businesses, along with a variety of social service programs designed to support employment. EZ stakeholders and EC survey respondents also noted additional factors that may have been associated with changes in unemployment. For example, some EZ stakeholders cited the availability of social services not sponsored by the EZ—for instance, daycare, transportation, and adult education or job placement programs—as factors that influenced unemployment. Some EZ stakeholders also suggested that changes in the national economy and in welfare policy had helped to reduce unemployment.
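The kind of econometric model described above regresses the change in an outcome on a designation indicator plus baseline controls, so that the designation coefficient captures any excess change in EZ tracts after accounting for the controls. The sketch below is a minimal, self-contained illustration of that structure, not GAO's actual model: the data are synthetic, the single control is hypothetical, and the ordinary-least-squares fit is computed directly from the normal equations.

```python
# Illustrative regression sketch (synthetic data, not GAO's model): change in
# a tract's unemployment rate regressed on an EZ-designation indicator plus a
# 1990 baseline-unemployment control.

def solve(a, b):
    """Solve the linear system a x = b by Gauss-Jordan elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col] / m[col][col]
                m[r] = [m[r][k] - f * m[col][k] for k in range(n + 1)]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols(xs, ys):
    """Ordinary least squares via the normal equations X'X b = X'y."""
    k = len(xs[0])
    xtx = [[sum(row[i] * row[j] for row in xs) for j in range(k)] for i in range(k)]
    xty = [sum(row[i] * y for row, y in zip(xs, ys)) for i in range(k)]
    return solve(xtx, xty)

# Columns: intercept, EZ indicator, 1990 baseline unemployment rate (control).
x = [
    [1, 1, 18.0], [1, 1, 20.0], [1, 1, 22.0],   # EZ tracts (synthetic)
    [1, 0, 17.0], [1, 0, 19.0], [1, 0, 21.0],   # comparison tracts (synthetic)
]
# Outcome: change in unemployment rate, 1990-2000 (synthetic).
y = [-2.5, -2.9, -3.3, -1.8, -2.2, -2.6]

beta = ols(x, y)
# beta[1] is the EZ coefficient: the estimated extra decline in EZ tracts
# after controlling for the 1990 baseline.
print([round(b, 3) for b in beta])
```

In this toy example the EZ coefficient is negative (EZ tracts fell about half a percentage point more than comparably situated comparison tracts), but, as the report stresses, a real model must also test whether such a coefficient is statistically significant before attributing any effect to the program.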
Many survey respondents in ECs where unemployment fell reported that the decreases could be attributed to activities that may or may not have been part of the EC program, including adult educational services, higher skill levels among area residents, and social services such as childcare, programs for the homeless, and substance abuse treatment. Stakeholders from EZs where unemployment did not change or rose explained that EZ residents faced barriers to employment such as a lack of education or job skills, drug dependency, and criminal histories.

Our Measures Showed that Some Economic Growth Occurred, but Results from Our Econometric Model Were Not Conclusive

A number of indicators can be used to measure economic growth, including data on the change in the number of local businesses, sales volumes, or home values. Our poverty and unemployment analyses used specific variables available in Census data, but to measure economic growth, we chose two measures—the number of businesses and the number of jobs. Overall, our analysis showed that most EZs and ECs experienced an increase in at least one measure of economic growth between 1995 and 2004 (fig. 11). Specifically:

- Two of the eight urban EZs experienced significant increases in the number of both businesses and jobs, and three more experienced significant increases in one measure. The increases in businesses ranged from 4.2 percent in the Philadelphia-Camden EZ to 23.6 percent in the New York EZ. The increases in jobs ranged from 2.6 percent in the Philadelphia-Camden EZ to 30.5 percent in the Detroit EZ. However, some urban EZs experienced decreases in the number of businesses or jobs, some of which were large. Five experienced decreases in the number of businesses, ranging from 2.7 percent in the Detroit EZ to 20.8 percent in the Atlanta EZ, and four experienced decreases in the number of jobs, from 5.2 percent in the Los Angeles EZ to 22.3 percent in the Atlanta EZ.
- All three rural EZs experienced increases in both businesses and jobs, with businesses increasing between 15.6 percent in the Mid-Delta EZ and 33 percent in the Kentucky Highlands EZ, and jobs rising between 5 and 67.8 percent in the same two EZs, respectively.

- Fourteen of the 64 urban ECs experienced an increase in both economic growth measures, and an additional 24 saw an increase in one of the measures. However, 26 urban ECs saw a decrease in both measures.

- Like rural EZs, the majority of the rural ECs experienced an increase in both measures of economic growth.

Like the analyses of poverty and unemployment, our analysis of the changes in economic growth compared urban and rural designees, designated and comparison areas, and EZs and ECs (fig. 12).

- In aggregate, both designated and comparison areas saw little change in the number of businesses, and both experienced an increase in the number of jobs of about 7 percent.

- Overall, urban designees saw a decrease in the number of businesses, while rural designees saw a substantial increase. Both urban and rural designees saw an increase in the number of jobs, but the aggregate increase in rural areas was much greater (23.6 percent) than in urban areas (5.7 percent). Urban and rural comparison areas generally experienced changes similar to the designated areas.

- Urban EZs experienced a decrease in the number of businesses, while the number in comparison areas remained about the same. But urban EZs saw an increase in the number of jobs, while their comparison areas saw a decrease. Rural EZs fared better than their comparison areas in both measures of economic growth.

As explained earlier, our descriptive analyses could not isolate the effects of the EZ/EC program from other factors affecting the designated and comparison areas. We conducted an econometric analysis that incorporated other factors, such as the percentage of vacant housing units and population density as reported in the 1990 Census.
However, the results of our models explained little of the relative changes in the number of businesses or jobs in the urban EZs with respect to their comparison areas (app. II). Because our proxy measures—the number of businesses and jobs—were not the only indicators representative of economic growth, we tested our models using different measures, such as the number of home mortgage originations, but found similar results. As a result, we could not determine with a reasonable degree of confidence the role that the EZs might have played in the changes in economic growth that we observed. We also reviewed the perceptions of the EZ stakeholders we interviewed and of respondents to our EC survey regarding economic growth in their communities. These stakeholders and respondents cited several aspects of the program that contributed to economic growth, including loan programs and other benefits that aided small businesses, infrastructure improvements, and tax benefits, especially when the tax benefits were combined with other federal, state, and local benefits. Additionally, several stakeholders mentioned that their EZ or EC had acted as a catalyst for other local development. EZ stakeholders also noted several external factors that affected economic growth, such as the increase of jobs in businesses located within the EZ or EC, the role of other state and local initiatives in attracting businesses, and trends in the national economy. In ECs where our data showed an increase in the number of businesses or jobs, some survey respondents reported that the result was due to an increase in technical assistance for area businesses, such as entrepreneurial training programs, and others reported that financial assistance to businesses contributed to the growth; both types of assistance may or may not have been EC programs.
EZ stakeholders also mentioned challenges facing their communities, including the lack of infrastructure and residents with incomes that were not high enough to support local businesses. In ECs where our data showed a decrease in the number of businesses or jobs, survey respondents pointed to a decrease in the number of area businesses and the downsizing of existing businesses as contributing factors.

Additional Program Data Could Facilitate Evaluations of the Effects of the EZ/EC and Similar Programs

Our efforts to analyze the effects of Round I designation on poverty, unemployment, and economic growth were limited by the absence of data on the use of program grant funds, the amount of funds leveraged, and the use of tax benefits. Without these data, we could not account for the amount of funds EZs used to carry out specific activities, the extent to which they leveraged other resources, or how extensively businesses used the tax benefits. As a result, we could not assess differences in program implementation. In addition, as we reported in 2004, we could not evaluate the effectiveness of the tax benefits, although later rounds of the EZ/EC program have relied heavily on them. While we recognize the difficulties inherent in evaluating economic development programs, as discussed in our prior report on the EZ/EC program, having more specific data would facilitate evaluations of this and similar programs. For example, the precision of our econometric models might have been improved by combining data on how program funds were used—such as the amounts used for assisting businesses—and on the use of program tax benefits with other data we obtained, such as data on businesses and area jobs. Also, additional data would have allowed us to conduct in-depth evaluations of the extent to which various tax benefits were being used within each community, the size and type of businesses using them, and the potential competitive advantages of using these benefits.
Our previous reports have recommended that information on outlay programs and tax expenditures be collected to evaluate the most effective methods for accomplishing federal objectives.

Observations

The EZ/EC program, one of the most recent large-scale federal programs aimed at revitalizing distressed urban and rural communities, resulted in a variety of activities intended to improve social and economic conditions in the nation’s high-poverty communities. As of March 31, 2006, all but 15 percent of the $1 billion in program grant funds provided to Round I communities had been expended, and the program was reaching its end. All three rounds of the EZ/EC program are scheduled to end no later than December 31, 2009. However, given our findings from this evaluation of Round I EZs and ECs, the following observations should be considered if these or similar programs are authorized in the future. Based on our review, we found that oversight for Round I of the program was limited because the three agencies—HHS, HUD, and USDA—did not collect data on how program funds were used, and HHS did not provide state and local entities with guidance sufficient to ensure monitoring of the program. These limitations may be related in part to the design of the program, which offered increased flexibility in the use of funds and relied on multiple agencies for oversight. However, limited data and variation in monitoring hindered federal oversight efforts. In addition, the lack of data on the use of program grant funds, the extent of leveraging, and the extent to which program tax benefits were used also limited our ability and the ability of others to evaluate the effects of the program. The lack of data on the use of tax benefits is of particular concern, since the estimated amount of the tax benefits was far greater than the amount of grant funds dedicated to the program.
In response to the recommendation in our 2004 report, HUD, IRS, and USDA discussed options for collecting additional data on program tax benefits and determined two methods for collecting the information—through a national survey or the modification of tax forms. The three agencies, however, did not reach agreement on a cost-effective method for collecting the additional data. In our own and others’ prior attempts to obtain this information through surveys, response rates were low, and the surveys thus did not produce reliable information on the use of program tax benefits. We acknowledge that the collection of additional tax data by IRS would introduce additional costs to both IRS and taxpayers. Nonetheless, a lack of data on tax benefits is significant given that subsequent rounds of the EZ/EC program and the Renewal Community program rely almost exclusively on tax benefits, and other federal economic development programs, such as the recent Gulf Opportunity Zone initiative, involve substantial amounts of tax benefits. Furthermore, the nation’s current and projected fiscal imbalance serves to reinforce the importance of understanding the benefits of such tax expenditures. If Congress authorizes similar programs that rely heavily on tax benefits in the future, it would be prudent for the federal agencies responsible for administering them to collect the information necessary for determining whether the tax benefits are effective in achieving program goals.

Agency Comments and Our Evaluation

We provided a draft of this report for review and comment to HHS, HUD, IRS, and USDA. We received comments from HHS, HUD, and USDA. In general, the agencies provided comments related to the oversight of the program, the availability of data, and the methodology used to carry out the work. Their written comments appear in appendixes V through VII, respectively, and our responses to HUD’s more detailed comments also appear in appendix VI.
HHS, HUD, and USDA also provided technical comments, which we have incorporated into the report where appropriate. HHS commented that a statement made in our report—that the agency did not provide guidance detailing the steps state and local authorities should take to monitor the program—unfairly represented the relationship between HHS and the other federal agencies that administered the EZ/EC program. Specifically, HHS emphasized its responsibility for fiscal as opposed to programmatic oversight of the program. We note in our report that program design may have led to a lack of clarity in oversight, as no single federal agency had sole oversight responsibility. While this lack of clarity in oversight may be related in part to the design of the program, which offered increased flexibility in the use of funds and relied on multiple agencies for oversight, limited data and variation in monitoring hindered federal oversight efforts. Moreover, we believe that, in accordance with federal standards, each of the federal agencies that administered the program bore at least some responsibility for ensuring that public resources were being used effectively and that program goals were being met.

HUD disagreed with our observation that there was a lack of data on the use of program grant funds, the amount of funds leveraged, and the use of the tax benefits. HUD indicated that we could obtain data on the use of program funds and the amount of funds leveraged from its performance reporting system. As we discussed in our report, we used information from HUD’s reporting system to report on the types of activities that designated communities implemented. We also noted that HUD maintained some information on the amount of EZ/EC grants budgeted for specific activities. Although we found evidence that activities were carried out with program funds, information contained in the performance reporting system on the amounts of funds used and the amount leveraged was not reliable.
For example, we found evidence that communities had undertaken certain activities with program funding, but we were often unable to find documentation of the actual amounts allocated or expended. HUD also indicated that it did not agree that data on the use of the tax benefits were lacking. However, HUD indicated that the agency itself had attempted to gather such data by collaborating with IRS in identifying ways to collect data on tax benefits, by developing a methodology to administer a survey to businesses, and by compiling anecdotal evidence of the use of program tax benefits. We continue to believe that the lack of data on program tax benefits limits the ability of the agencies to administer and evaluate the EZ/EC program. Further, the lack of such data is likely to become increasingly problematic because future rounds of the EZ/EC program and the Renewal Community program rely heavily on tax benefits to achieve revitalization goals. HUD concurred that limitations in the oversight of the EZ/EC program may have resulted from the design of the program, as no single federal agency had sole responsibility for oversight. HUD also recommended that we make clear that more oversight was not allowed in Round I and that we include a statement that HUD met agency requirements to undertake periodic performance reviews; HUD also described some of its efforts to monitor the program according to applicable regulations. We disagree that more oversight was not allowed. For example, early in the program HUD and HHS made some efforts to share information. Specifically, HUD officials said that they had received fiscal data from HHS and reconciled that information with their program data on the activities implemented, but these efforts to share information were not maintained.
Further, as we previously stated, while we recognize that program design may have led to a lack of clarity in oversight, we believe that in accordance with federal standards, each of the federal agencies that administered the program bore at least some responsibility for ensuring that public resources were being used effectively and that program goals were being met. HUD also described changes it had made to ensure better oversight of program funds for Round II. We acknowledge HUD’s efforts to improve oversight of the program and, as discussed in our report, the oversight limitations that we identified in Round I of the program may not apply to later rounds. HUD provided several comments related to the methodology we used to carry out our work. For example, HUD suggested that we measure the successes of the Round I program in meeting the four key principles of the program, which the designated communities were required to include in their strategic plans. Additionally, HUD commented that the indices we used to assess the effects of the EZ/EC program—poverty, unemployment and economic growth—were used in the application process for the program but were not intended to be used as performance measures. While we appreciate HUD’s suggestions on our methodology, our congressional mandate was to determine the effect of the EZ/EC program on poverty, unemployment and economic growth. In designing our methodology, we conducted extensive research on evaluations that had been conducted on the EZ/EC program, including HUD’s 2001 Interim Assessment, and spoke with several experts in the urban studies field. USDA stated that data and analyses on the effectiveness of programs such as EZ/EC were useful and offered areas to consider for future evaluations of economic development programs involving rural areas. 
For example, USDA mentioned issues involved in collecting data on rural areas, such as the limited availability of economic and demographic data for small rural populations, and discussed USDA’s efforts to develop a methodology that focuses on economic impacts using county-level economic data. USDA also said it is especially important in rural areas to have a clear and adequately funded data collection process for program evaluations. In addition, USDA noted that evaluations of the EZ/EC program could go beyond the indicators of poverty, unemployment, and economic growth to include measures on economic development capacity and collaboration. We agree that collecting data for rural areas is a challenge and appreciate USDA’s effort to develop a methodology that focuses on economic impacts using county-level economic data and captures the short-term Gross Domestic Product changes in the affected rural counties. Further, we appreciate USDA’s suggestion that additional measures be considered in future evaluations of economic development programs and that a broader perspective on program results might be useful. USDA also commented that its performance reporting system was intended to be used as a management tool for both USDA and the individual EZs and ECs. According to USDA, the system was not designed to be an accounting tool but has been useful for providing a picture of each designated community’s achievements. As we discussed in our report, we used information from USDA’s reporting system to report on the types of activities that designated communities implemented and also noted that USDA maintained some information on the amounts of EZ/EC grants budgeted for specific activities. Moreover, while we recognize the system was not intended to be used as an accounting tool, we found that the data on the amounts of the EZ/EC grant funding were not reliable. 
For example, in our assessment of the reliability of data contained in USDA’s performance reporting system, we were often unable to find documentation of the actual amounts allocated or expended for specific activities. USDA further commented that it had encouraged designated communities to report all investment that contributed to the EZ or EC in accomplishing its strategic plan as leveraged funds. We recognize USDA’s efforts to encourage leveraging in the designated communities and to report such information in its performance reporting system. Our report notes that stakeholders from all EZs and ECs we visited and EC survey respondents reported having used their EZ/EC grants to leverage other resources. However, we were unable to evaluate the amounts of funds leveraged because the data contained in USDA’s performance reporting system were not reliable. For example, USDA’s performance reporting system included information on the amounts of funds leveraged for each activity, but for the sample of activities we reviewed, either supporting documentation showed an amount conflicting with the reported amount or documentation could not be found. Moreover, as we discuss in our report, the definition of leveraging used among the designated communities was inconsistent. We are sending copies of this report to interested Members of Congress, the Secretary of Health and Human Services, the Secretary of Housing and Urban Development, the Secretary of the Treasury, the Commissioner of the Internal Revenue Service, and the Secretary of Agriculture. We will make copies of this report available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or ShearW@gao.gov if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix VIII. Objectives, Scope, and Methodology The objectives of this study were to (1) describe how Round I of the Empowerment Zone and Enterprise Community (EZ/EC) program was implemented by the designated communities; (2) evaluate the extent of federal, state, and local oversight of the program; (3) examine the extent to which data are available to assess the use of program tax benefits; and (4) analyze the effects the Round I EZs and ECs had on poverty, unemployment, and economic growth in their communities. To address each of our objectives, we completed site visits to all Round I EZs and two Round I ECs and administered a survey to all ECs that did not receive subsequent designations, such as a Round II EZ designation. At each site, we asked uniform questions on implementation, oversight, tax benefits, and changes observed in the EZ and ECs. We also surveyed 60 ECs that were in operation as of June 2005 and did not receive later designations and asked about similar topics. We performed a qualitative analysis to identify common themes from our interview data and open-ended survey responses. To address our second objective, we also interviewed federal and state program participants, reviewed oversight guidance and documentation, and verified a sample of reported performance data by tracing it to EZ and EC records. To address our third objective, we attempted to administer a survey of EZ businesses, but discontinued it due to a low response rate. To address our fourth objective, we obtained demographic and socioeconomic data from the 1990 and 2000 decennial censuses and business data for 1995, 1999, and 2004 from a private data vendor, Claritas. We used 1990 Census data to select areas similar to the EZ and EC areas for purposes of comparison. We then calculated the percent changes in poverty, unemployment, and economic growth observed in the EZs and ECs and their comparison areas. 
In addition, for the eight urban EZs, we used an econometric model to estimate the effect of the program by controlling for certain factors, such as average household income, in the EZs and their comparison areas. Finally, we used information gathered from our qualitative analysis to provide context for the changes observed in the EZs and ECs. Methodology for Site Visits To answer our objectives, we completed site visits to all 11 EZs and 2 of the 95 ECs, one urban and one rural. These EZs and ECs were located in:

Atlanta, Georgia (EZ)
Baltimore, Maryland (EZ)
Chicago, Illinois (EZ)
Cleveland, Ohio (EZ)
Detroit, Michigan (EZ)
Los Angeles, California (EZ)
New York, New York (EZ)
Philadelphia, Pennsylvania, and Camden, New Jersey (EZ)
rural Kentucky (Kentucky Highlands EZ)
rural Mississippi (Mid-Delta EZ)
rural Texas (Rio Grande Valley EZ)
Providence, Rhode Island (EC)
rural Tennessee (Fayette-Haywood EC)

We interviewed stakeholders from each site on the implementation, governance, oversight, and tax benefits of the EZ or EC and asked about the changes the stakeholders had observed in their communities. Using a standardized interview guide, we interviewed some combination of the following program stakeholders at each location: EZ/EC officials, board members (including some EZ/EC residents), representatives of subgrantee organizations, and Chamber of Commerce representatives or individuals able to provide the perspective of the business community (table 4). We identified participants to interview at each site by soliciting opinions from EZ/EC officials and the current board chair. For each site, we reviewed strategic plans, organizational charts, and documentation on oversight procedures. In addition, we toured the EZ/EC to see some of the activities implemented. 
Methodology for Survey of EC Officials To gather similar information from the ECs, we administered an e-mail survey to officials from the 60 Round I ECs that were still in operation as of June 2005 and did not receive a subsequent designation. We chose to exclude the 34 ECs that received subsequent designations, because we did not want their responses to be influenced by those programs. A version of the survey showing aggregated responses can be viewed at www.gao.gov/cgi-bin/getrpt?GAO-06-734SP. We developed survey questions from existing program literature and interview data collected from Department of Housing and Urban Development (HUD) and U.S. Department of Agriculture (USDA) headquarters officials as well as our site visits to Round I EZs and ECs. The questionnaire items covered the implementation of the program, the types of governance structures used, usage of the program tax-exempt bond, and stakeholders’ views of factors that influenced the changes they observed in poverty, unemployment, and economic growth in their ECs. We created two versions of the questionnaire, one for urban ECs and another for rural ECs, in order to tailor items to urban or rural sites. Department of Health and Human Services (HHS), HUD, and USDA officials reviewed the survey for content, and we conducted pretests at four urban and two rural ECs. Since the survey was administered by e-mail, a usability pretest was conducted at one urban EC (Akron, Ohio) to observe the respondent answering the questionnaire as it would appear when opened and displayed on their computer screen. In administering the survey, we took the following steps to increase the response rate. To identify survey participants, we obtained contact information for the Round I ECs that did not receive a subsequent designation from HUD and USDA in April 2005. We then sent a notification e-mail to inform the ECs of the survey, to identify the correct point of contact, and to ensure the e-mail account was active. 
Those who did not respond to the first e-mail received follow-up e-mails and telephone calls. The questionnaire was e-mailed on August 25, 2005, to 27 rural ECs and 33 urban ECs, and participants were given the option to respond via e-mail, fax, or postal mail. Between September and December 2005, multiple follow-up e-mails and calls were made to increase the response rate. When the survey closed on December 20, 2005, all of the rural ECs and 31 of the 33 urban ECs had completed it. The overall response rate was high at 97 percent, with the response rates for the rural ECs at 100 percent and urban ECs at 94 percent. We did not attempt to verify the respondents’ answers against an independent source of information. However, we used two techniques to verify the reliability of questionnaire items. First, we used in-depth interviewing techniques to evaluate the answers of pretest participants, and interviewers judged that all the respondents’ answers to the questions were based on reliable information. Second, for the items that asked about changes to poverty, unemployment, and economic growth in the EC, we asked respondents to provide a source of data for their response. Responses to those questions that did not include a data source were excluded from our analysis of those items. The practical difficulties of conducting any survey may introduce certain types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We sought to minimize these errors by taking the following steps: conducting pretests, making follow-up contacts with participants to increase response rates, performing statistical analyses to identify logical inconsistencies, and having a second independent analyst review the statistical analyses. 
Returned surveys were reviewed for consistency before the data were entered into an electronic database. All keypunched or inputted data were 100-percent verified—that is, the data were electronically entered twice. Further, a random sample of the surveys was verified for completeness and accuracy. We used statistical software to analyze responses to close-ended questions and performed a qualitative analysis on open-ended questions to identify common themes. Methodology for Qualitative Analysis of Site Visit and EC Survey Data To summarize the information collected at our site visits, we conducted a qualitative analysis of interview data. The goal of the analysis was to create a summary that would produce an overall “story” or brief description of the program as implemented in each site. In this process, we reviewed data from over 200 interviews to identify information pertaining to the following six broad topics: strategic planning and census tract selection; goals, implemented activities, leveraging activities, and sustainability; governance structure and process; perceptions of the use of tax benefits; and perceptions of poverty, unemployment, economic growth, and other changes within the zone. Based on initial reviews of the interview data, we produced general outlines for each topic. For example, a description of the governance structure and process included identifying the type of governance structure used, roles within the structure, opportunities for community involvement, the process for decision making, and successes and challenges related to governance. One reviewer was assigned to each of the six topics for an individual site. The reviewer examined all interviews completed at an individual site and created a topical summary based on interview data. 
Each summary was verified by (1) presenting the summaries to the group of six interview reviewers to ensure accuracy, clarity, and completeness and (2) having a second reviewer trace the summaries back to source documents. We also performed a qualitative analysis of the open-ended responses in the EC survey to determine reasons why the tax-exempt bond was not more widely used; why poverty, unemployment, and economic growth may have remained the same over the designation period; and what role the EC played in changes in poverty, unemployment, and economic growth. We also used these open-ended responses to gather general comments about the program. Responses to these questions were first reviewed by an analyst to identify common categories within the responses and then independently verified by a second analyst. Methodology for Review of Program Oversight We interviewed and obtained documentation from federal, state, and local program participants regarding program oversight. We interviewed officials from the federal agencies involved with the program and obtained and analyzed fiscal and program data from the agencies. In addition, since the states were the pass-through entities for grant funds provided to the EZs and ECs—that is, they distributed federal funding to the communities—we conducted telephone interviews with state officials and obtained relevant documents in the 13 states containing EZs and ECs we visited. Finally, we interviewed EZ and EC officials on their oversight of subgrantees as well as the oversight they received from federal and state entities. We did not perform financial audits of the EZs and ECs. To determine the reliability of data in HUD and USDA Internet-based performance reporting systems, we randomly selected activities at each EZ and EC we visited and conducted a file review to determine the accuracy of the data. 
In the files, we searched related documentation for the amounts reported in the system for certain categories, including EZ/EC grant funding, leveraged funds, and program outputs. We also determined whether, at a minimum, documentation existed to support that the activity was implemented. We then assigned each item we verified a code (table 5). Finally, we averaged the information for each site by category and calculated the average score for each urban and rural community. We found sufficient documentation that most EZ/EC activities contained in the Internet-based reporting systems had occurred, with average codes of 2.0 for urban areas and 1.9 for rural areas. We found that data on EZ/EC grant funding, leveraged funds, and program outputs were not sufficiently reliable for our purposes because only weak or no documentation could be found at most sites. Methodology for Survey of EZ Businesses To assess the use of program tax benefits, we attempted to administer a survey to EZ businesses; however, we discontinued the survey due to a very low response rate. On the basis of past mail and telephone surveys of EZ businesses, we knew that this would be a challenging population to survey. In fact, surveys we and Abt Associates conducted in 1998 obtained response rates of only 42 and 35 percent, respectively. In addition, both surveys had a relatively high number of undeliverable surveys. In anticipation of these issues, we attempted to administer a concise, high-level survey via mail to a stratified random sample (n=517) of EZ businesses. We implemented a sampling procedure using the 2004 Claritas Business Facts dataset that stratified businesses located in the EZs into three strata: urban small businesses (fewer than 50 employees), urban large businesses (50 or more employees), and rural businesses. 
The survey was targeted to private businesses rather than public and nonprofit businesses, since these for-profit businesses were the ones eligible for the tax benefits. Public and nonprofit businesses were excluded from the sample by the primary industry code identifier included in the Claritas data. A few of these types of businesses that were not initially excluded based on their industry code were later removed from the sample because the respondents said that they were not eligible for the tax benefits. We developed our survey after reviewing surveys used in previous studies, interviewing business owners, and conducting pretests with EZ businesses. The questionnaire was brief—containing 21 closed-ended items and 1 optional open-ended item—and took most pretest respondents approximately five minutes to complete. When we conducted pretests with 10 businesses from Baltimore, Philadelphia, and rural Kentucky, all pretest participants found the survey to be easy to complete and said that it did not ask for sensitive information. These business owners, however, often lacked complete information about their company’s tax filings and were not always able to answer all of the survey questions. Several indicated that they would be unlikely to complete the survey because the topic was not relevant to them. We administered the survey according to standard survey data collection practices. We sent a letter notifying the 517 businesses of our survey about a week prior to the survey mailing, mailed a copy of the survey, and followed that mailing with a reminder postcard. We received a total of 63 responses after our initial mailing, a response rate of 12 percent. Our mailings to 104 businesses (20 percent) could not be delivered and were returned because of incorrect addresses or contact information. 
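The response and undeliverable rates reported above follow directly from the mailing counts; a minimal sketch of the arithmetic, using the figures from the discontinued EZ business survey:

```python
# Response-rate arithmetic for the discontinued EZ business survey,
# using the counts reported in the text (517 mailed, 63 responses,
# 104 undeliverable).
mailed = 517
responses = 63
undeliverable = 104

response_rate = responses / mailed
undeliverable_rate = undeliverable / mailed
print(f"{response_rate:.0%} responded; {undeliverable_rate:.0%} undeliverable")
# -> 12% responded; 20% undeliverable
```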
Methodology for Assessing the Effect of the Program on Poverty, Unemployment, and Economic Growth To determine the effect of the EZ/EC program on changes in poverty, unemployment, and economic growth, we used a variety of quantitative methods that examined changes in the designated program areas and areas we identified as comparison areas. In addition, we incorporated interview data in our qualitative analysis to provide context for the changes observed. We calculated percent changes of demographic, socioeconomic, and business data between two points in time for all of the Round I EZs and ECs. However, we used only urban EZs in our econometric analysis because of data limitations in rural areas and the amount of funds awarded to ECs. Description of Data Sources To assess the changes in poverty and unemployment, we used census tract-level data on poverty rates and unemployment rates from the 1990 and 2000 decennial censuses. To determine changes in economic growth in EZs and ECs, we defined economic growth in terms of the number of private businesses created and the total number of jobs in the areas. We obtained year-end data on these variables for years 1995, 1999, and 2004 from the Business-Facts Database maintained by Claritas, a private data processing company. We explored several public and private data sources that contained the number of businesses and jobs at the census tract level and selected Claritas because it (1) maintained archival data, (2) provided data with a high level of reliability at the census tract level, and (3) used techniques to ensure the representation of small businesses. We also explored a variety of other data options to enhance our analysis, but were ultimately not able to use them. For example, we tried to acquire data throughout the period of the program, such as state unemployment data, local building permit and crime data, and data on students receiving free or reduced-price lunches. 
However, we were not able to use these data because they were not captured consistently across sites, not available at the census tract level, or not sufficiently reliable for our purposes. The decennial census data used are from the census long form that is administered to a sample of respondents. Because census data used in this analysis are estimated based on a probability sample, each estimate is based on just one of a large number of samples that could have been drawn. Since each sample could have produced different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. For example, the estimated percent change in the poverty rate of EZs is a decrease of 6.1 percent, and the 95 percent confidence interval for this estimate ranges from 4.9 to 7.2 percent. This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All Census variables based on percentages, such as poverty rate and unemployment rate, have 95 percent confidence intervals of plus or minus 5 percentage points or less. The confidence intervals for average household income and average owner- occupied housing value are shown in table 6. In addition to sampling errors, Census data (both sampled and 100 percent data) are subject to nonsampling errors that may occur during the operations used to collect and process census data. Examples of nonsampling errors are not enumerating every housing unit or person in the sample, failing to obtain all required information from a respondent, obtaining incorrect information, and recording information incorrectly. 
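The confidence intervals discussed above take the standard form of a point estimate plus or minus 1.96 standard errors; the following sketch illustrates that arithmetic with a hypothetical standard error (the standard errors actually underlying the report’s intervals reflect the census long-form sample design and are not shown here):

```python
# Sketch of a 95 percent confidence interval: point estimate +/- 1.96
# standard errors. The standard error below is a hypothetical stand-in;
# actual values depend on the census long-form sample design.
def confidence_interval_95(estimate, standard_error):
    margin = 1.96 * standard_error
    return estimate - margin, estimate + margin

# Hypothetical: a 6.1-percentage-point decrease with an assumed
# standard error of 0.58 percentage points
low, high = confidence_interval_95(6.1, 0.58)
print(f"95% CI: {low:.1f} to {high:.1f} percentage points")
```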
Operations such as field review of enumerators’ work, clerical handling of questionnaires, and electronic processing of questionnaires also may introduce nonsampling errors in the data. The Census Bureau discusses sources of nonsampling errors and makes attempts to limit them. Choosing Comparison Areas Using the Propensity Score To provide context for the changes we observed in the EZs and ECs, we calculated the percent changes for the designated areas as well as for comparison areas, which most closely resembled the EZ/EC program areas. To select comparison areas for our analysis, we used a statistical matching method called the propensity score. The propensity score predicts the probability that a tract could have been designated based on having characteristics similar to those found in the tracts selected for the program. We used five factors to calculate the propensity scores, as shown in table 7. To ensure that our comparison areas were similar to the designated areas in terms of geography, we explored two selection methods, one that included tracts in the same county as the EZ/EC and in adjacent counties, and another that selected tracts within a 5-mile radius of the EZ/EC. We excluded tracts that received a subsequent designation in the EZ/EC or Renewal Community programs in 1998 and 2002 in order to remove the possibility of tracts that may have received similar benefits affecting our analysis. After mapping the resulting comparison tracts using these two methods, we decided to use tracts selected within a 5-mile radius of the EZs and ECs because this method provided more contiguous areas, while the method based on the same county and adjacent counties yielded comparison tracts in other states where political structures and types of funds could differ. Using the computed propensity scores, we selected comparison tracts whose scores were greater than 0.1. 
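The selection step just described can be sketched as follows, assuming a logistic model, which is the usual way propensity scores are estimated; the coefficients and tract values below are hypothetical stand-ins for a model fitted on the five 1990 factors in table 7:

```python
# Sketch of comparison-tract selection: given a fitted logistic model
# predicting the probability of EZ/EC designation, keep candidate tracts
# whose propensity score exceeds 0.1. Coefficients and tract data are
# hypothetical illustrations only.
import math

COEFFS = {
    "intercept": -4.0,
    "poverty_rate": 6.0,          # rates expressed as proportions, 0-1
    "unemployment_rate": 5.0,
    "minority_share": 1.5,
    "log_household_income": -0.1,
    "log_population_density": 0.2,
}

def propensity_score(tract):
    z = COEFFS["intercept"] + sum(COEFFS[k] * v for k, v in tract.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

candidate_tracts = {
    "tract_a": {"poverty_rate": 0.45, "unemployment_rate": 0.20,
                "minority_share": 0.80, "log_household_income": 9.6,
                "log_population_density": 8.0},
    "tract_b": {"poverty_rate": 0.08, "unemployment_rate": 0.04,
                "minority_share": 0.10, "log_household_income": 10.8,
                "log_population_density": 6.5},
}

comparison = {name for name, t in candidate_tracts.items()
              if propensity_score(t) > 0.1}
print(comparison)  # -> {'tract_a'}
```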
This threshold was chosen because most EZ tracts had propensity scores of 0.1 or higher; therefore, comparison tracts with propensity scores of at least 0.1 were the most similar to the EZ tracts. This threshold also yielded approximately the same number of comparison tracts as EZ tracts in most of the eight urban EZs. In addition, we tested this threshold by running our models with comparison tracts whose propensity scores were greater than 0.05 or 0.15 and found that the results did not change significantly. Some limitations exist with this method. For example, since many of the census tracts chosen for the program may have had the highest levels of poverty, it was difficult to find comparison tracts with the same level of poverty. Our Descriptive and Econometric Analyses We calculated the percent changes at the programwide level for our four indicators of poverty, unemployment, and economic growth for both designated and comparison areas. We also calculated the changes for urban and rural designees and EZs and ECs separately, so that we could make comparisons between those groups. In addition, for the eight urban Round I EZs, we calculated the percentages separately for each EZ and EZ comparison area to show differences between zones. Although the comparison areas were sufficient to use in our programwide analyses, for rural EZs and urban and rural ECs, we did not use comparison areas for site-level analyses because there were too few comparison tracts. For example, the Providence, Rhode Island EC consisted of 13 tracts, but the area had only four eligible comparison tracts. We also completed an econometric analysis of the eight urban EZs. We used a standard econometric approach, the weighted least squares model, which allowed us to analyze the change from 1990 to 2000 and compare it with the 1990 value of several explanatory variables. 
The benefit of this approach is that the program, officially implemented in 1994, would not affect the 1990 values of the explanatory variables. In addition, we spoke with several experts in the urban studies field on our methodology. For more information on the methods used in our econometric analysis and a full discussion of our results, please see appendix II. Methodology for and Results of Our Econometric Models This appendix describes our efforts to isolate the effect of the EZ/EC program on the changes in poverty, unemployment, and economic growth by conducting an econometric analysis of all urban EZ census tracts. In our analysis of percent changes, we found that poverty and unemployment had decreased and that some economic growth had occurred. However, when we used the econometric models to control for other area characteristics, our results did not definitively suggest that the observed changes in poverty and unemployment were associated with the EZ program in urban areas. In addition, our models did not adequately explain the observed changes in the proxy measures we used for economic growth; thus, the results did not allow us to conclude whether there is an association between the EZ program and economic growth. As mentioned in the report, there were several challenges that limited our ability to determine the effect of the program. First, data at the census tract level for the program years were limited. We used data from the 1990 and 2000 decennial censuses to show the changes in poverty and unemployment. In addition, we primarily used two measures for economic growth—the number of businesses and the number of jobs from the Claritas Business-Facts dataset for years 1995, 1999, and 2004—in our models of economic growth. Second, we were not able to account for the spillover effects of EZ designation into neighboring areas. 
For example, if the EZ/EC program affected comparison tracts as well as the designated communities, our analyses would not find any significant differences between the designated and comparison tracts. The result may be an obscuring of the extent of the statistical association between the urban EZ program and the study variables. Third, the analyses did not account for the confounding effects of other public or private programs, such as those intended to reduce poverty or unemployment or increase the number of area jobs. As a result, estimates for the EZ program in our analyses may under- or overstate the extent of the EZ program’s correlation with poverty, unemployment, and economic growth. Fourth, our estimates did not fully account for the economic trends that were affecting the choice of areas selected for the program. For example, if program officials tended to pick census tracts that were already experiencing gentrification prior to 1994, our estimates could overstate the effect of the EZ designation. Conversely, if officials tended to choose census tracts that were experiencing economic declines prior to 1994, such as those in which major employers had closed, we might understate the program’s impact. We did include a variable from Census data—new housing construction between 1990 and 1994—that measured one dimension of economic trends prior to EZ designation, but we did not include other dimensions, such as employment trends at the tract level, in the models. Description of Our Models We used a weighted least squares regression for our analyses. Our dependent variables were (1) the difference in the poverty rate between 1990 and 2000, (2) the difference in the unemployment rate between 1990 and 2000, (3) the difference in the number of businesses between 1995 and 1999, and (4) the difference in the number of jobs between 1995 and 1999. For the basic model, we measured the difference in each dependent variable against the 1990 value of some explanatory variables. 
The benefit of this approach is that 1990 values of the explanatory variables would not have been affected by the program, which was implemented in 1994. We also ran an expanded version of the model that included variables for each of the EZs to determine whether there were differences among the EZs, and we included variables for the EZs and their surrounding areas to account for economic trends at the metropolitan level, such as the growing or declining output of local industries. Some of the explanatory variables for which we controlled included socioeconomic factors, such as the percentage of the population with a high school diploma. In addition to these socioeconomic factors, we also considered the five factors we used to select the comparison tracts: percent of minority population in 1990, average household income in 1990, population density in 1990, poverty rate in 1990, and unemployment rate in 1990. We included these variables because the comparison tracts may not be perfectly matched to the EZ tracts; including these factors allowed us to further account for differences between EZ and comparison tracts. Moreover, we weighted each tract in the estimations by the geometric mean of its 1990 and 2000 household counts to account for differences in the number of households in each tract. The purpose of this decision was to put more weight on the tracts with large numbers of households, because these tracts would tend to have smaller sampling errors. The coefficients for the EZ program variables represent the EZs with respect to the comparison areas, and the positive or negative values suggest whether the EZs fared better or worse than the comparison areas. For instance, a positive coefficient in the models for poverty and unemployment would mean that the EZs did not fare as well as the comparison areas—that is, they had either a greater increase or a smaller decrease in poverty or unemployment. See our discussion of the results of each model for more information. 
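The weighting scheme described above can be illustrated with a minimal one-regressor sketch: each tract is weighted by the geometric mean of its 1990 and 2000 household counts, and the 1990-2000 change in an outcome is regressed on a 1990 explanatory variable. All tract values are hypothetical, and the actual models included many explanatory variables and covered far more tracts:

```python
# Minimal one-regressor weighted least squares sketch. Weights are the
# geometric mean of each tract's 1990 and 2000 household counts, so
# tracts with more households (smaller sampling error) count for more.
import math

# (households_1990, households_2000, x_1990, change_in_outcome)
tracts = [
    (1200, 1400, 0.45, -6.0),
    (800, 900, 0.30, -3.5),
    (1500, 1300, 0.55, -7.0),
    (400, 600, 0.25, -2.0),
]

weights = [math.sqrt(h90 * h00) for h90, h00, _, _ in tracts]
xs = [t[2] for t in tracts]
ys = [t[3] for t in tracts]

# Closed-form weighted least squares for a single regressor with intercept
w_sum = sum(weights)
x_bar = sum(w * x for w, x in zip(weights, xs)) / w_sum
y_bar = sum(w * y for w, y in zip(weights, ys)) / w_sum
beta = (sum(w * (x - x_bar) * (y - y_bar) for w, x, y in zip(weights, xs, ys))
        / sum(w * (x - x_bar) ** 2 for w, x in zip(weights, xs)))
alpha = y_bar - beta * x_bar
print(f"weighted slope {beta:.2f}, intercept {alpha:.2f}")
```

In this toy data, tracts with higher 1990 values of the explanatory variable show larger decreases in the outcome, so the weighted slope is negative.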
Results of Our Models for Poverty Although our comparison of the percentage change between 1990 and 2000 showed that poverty decreased in most urban EZs, the results of our models did not conclusively suggest that the change in poverty was associated with the EZ program. Our analysis of the percentage changes showed that the poverty rate fell more in the EZs than in the comparison areas. But when we controlled for other factors in our models, we found in the basic model that poverty decreased less in the EZs than in the comparison areas, although the difference was very small (table 8). In addition, many of the variables used in selection of comparison tracts were significant, suggesting that the choice of areas selected for the program might have affected the differences between the urban EZs and the comparison areas in the change in poverty. When accounting for the different urban EZs and their comparison tracts, the poverty rate decreased more in some urban EZs but less in others with respect to the comparison tracts, although the only significant result was in the Los Angeles EZ, which experienced a greater increase in poverty than the comparison areas. The differences among EZs may be a result of local factors. In addition, one researcher found that there was a nationwide decrease in the number of people living in high-poverty neighborhoods, defined as census tracts with poverty rates of 40 percent or higher, between 1990 and 2000—a trend that might be a factor affecting our results. Results of Our Models for Unemployment Like our models for poverty, our models for unemployment did not conclusively suggest that the changes in unemployment were associated with the EZ program. The results of our basic model suggested that unemployment decreased more in the EZs than in the comparison areas, but the difference was very small and was not statistically significant (table 9). 
All five of the variables we used to select comparison tracts were statistically significant, suggesting that the choice of areas selected for the program might have affected the difference in the change in unemployment rate between EZ and comparison tracts. Like the model for poverty, our model showed that the unemployment rate decreased more in some urban EZs but less in others, although the only EZ that experienced a significant change was the Cleveland EZ, which showed a significantly greater decrease in unemployment than the comparison areas. As with the poverty rate, local factors may have accounted for the difference between the various urban EZs with respect to the comparison tracts. Results of Our Models for Economic Growth To estimate the statistical relationship between the EZ program and economic growth, we used two proxy measures: (1) the number of businesses, excluding establishments that were not eligible for program tax benefits, such as nonprofit and governmental organizations, and (2) the number of jobs in the EZ. In order to be consistent with our analyses of poverty and unemployment, which covered the time period between 1990 and 2000, we used 1995 and 1999 data for our models of economic growth. We also tested the model using Home Mortgage Disclosure Act data on the number of loan originations for new home purchases and the mean loan amount for new home purchases as other possible measures of economic growth, but found consistent results, which are not presented here. On the basis of the results of our models, we were not able to determine whether there is a statistical association between the EZ program and economic growth because the explanatory variables we used explained little of the variation in the changes in the number of businesses or jobs between 1995 and 1999 (tables 10 and 11). Not surprisingly, most explanatory variables were also not significant. 
The low explanatory power of our models could be the result of not having considered the right variables; however, we explored many combinations of variables, all of which yielded consistent results. This lack of explanatory power might also be the result of the fact that our proxy measures—the number of businesses and jobs—were not strongly representative of economic growth. Nevertheless, similar to the models of the change in poverty and unemployment, the models of the change in economic growth reflect variation between the EZs with respect to the comparison areas, but none of the results were statistically significant. Other Variables Tested for Use in Our Econometric Models In addition to the variables presented in the models above, we explored many alternative dependent variables and explanatory variables to test the robustness of the models we used (table 12). In particular, we experimented with several alternative measures for economic growth. To test how our results might change in response to the selection of comparison tracts, we also reestimated the models using comparison tracts selected with different propensity scores. We also ran the models excluding the Los Angeles and Cleveland EZs, because these EZs received a slightly different package of benefits when they were initially designated as Supplemental EZs. These tests all yielded results consistent with our models, so they are not presented here. List of Communities Designated in Round I of the EZ/EC Program Description of the Empowerment Zones and Enterprise Communities We Visited This appendix contains detailed information we gathered from our site visits to the 11 Round I EZs and 2 ECs. The appendix describes how the EZs and ECs were governed; the activities they implemented; changes in poverty, unemployment, and economic growth; and stakeholders’ perceptions of factors influencing those changes. It also includes the percent changes in variables used in the econometric model. 
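As noted above, our robustness tests reestimated the models using comparison tracts selected with different propensity scores. The sketch below illustrates the general idea of greedy nearest-neighbor matching on a propensity score; the scores and tract indices are hypothetical, and this is not the matching procedure actually used in our analysis.

```python
import numpy as np

def nearest_neighbor_match(ez_scores, pool_scores):
    """Greedily match each EZ tract to the unmatched candidate tract
    with the closest propensity score (matching without replacement)."""
    available = list(range(len(pool_scores)))
    matches = []
    for s in ez_scores:
        j = min(available, key=lambda k: abs(pool_scores[k] - s))
        matches.append(j)
        available.remove(j)
    return matches

# Hypothetical estimated probabilities of EZ designation.
ez_scores = np.array([0.82, 0.61, 0.40])                # three EZ tracts
pool_scores = np.array([0.10, 0.38, 0.59, 0.80, 0.95])  # candidate comparison tracts

matched = nearest_neighbor_match(ez_scores, pool_scores)
print(matched)  # indices of the comparison tracts selected for each EZ tract
```

Varying how the scores are estimated (for example, which 1990 characteristics enter the model) changes which comparison tracts are selected, which is what the robustness tests probed.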
Comments from the Department of Health and Human Services Comments from the Department of Housing and Urban Development The following are GAO’s comments on the Department of Housing and Urban Development’s letter dated August 17, 2006. GAO Comments 1. HUD commented that GAO should include details on the amount of funding and tax incentives provided for Rounds II and III of the EZ/EC program. We noted in our report that communities designated in Rounds II and III received a smaller amount of funding and more tax benefits than those designated in Round I. Our statement does not provide further details on Rounds II and III because the focus of the report is Round I. 2. We recognize that Round I designees were required to address four key principles as part of their strategic plans. However, our mandate was to assess the effectiveness of the EZ/EC program on poverty, unemployment, and economic growth. Assessing the extent to which communities addressed the key principles would not have been useful in meeting our mandate because, among other things, there is not a clear relationship between the key principles and poverty, unemployment, and economic growth. Further, while the report did not evaluate the extent to which communities met the key principles, it included many examples of activities carried out under them. The report also indicated that communities had implemented a larger percentage of community development activities than economic opportunity activities but did not comment on the appropriateness of the distribution of activities. 3. Our mandate was to assess the effects of the EZ/EC program on poverty, unemployment, and economic growth. Our report stated that communities were required to submit strategic plans that addressed the four key principles. However, because communities were able to modify their strategic plans over time, it would have been difficult to establish set criteria for assessing performance. 
Nonetheless, our report does contain numerous examples of activities undertaken by the communities, including examples mentioned in a separate appendix focusing on the 13 designated communities we visited. 4. HUD commented that because GAO found that a lack of data on how program funds were used was a limiting factor in determining the effectiveness of the EZ/EC program, we should make use of information in the agency’s performance reporting system and in communities’ strategic plans. However, we reported that our file review to determine the accuracy of data in HUD’s performance reporting system found that the data were not sufficiently reliable for our purposes. For example, we found evidence that communities had undertaken certain activities with program funding, but we were often unable to find documentation of the actual amounts allocated or expended. As a result, we were unable to rely on information contained in the agency’s performance reporting system on the amounts of program funds allocated or expended on specific activities. 5. We found that data in HUD’s performance reporting system on the amounts of funds used and the amounts leveraged were not reliable. For example, we found that HUD’s system included information on the amount of funds leveraged. But for the sample of activities we reviewed, the supporting documentation either showed an amount conflicting with the reported amount or was not available. Moreover, we found that the definition of “leveraging” varied across EZ and EC sites. HUD further commented that Table 5 in the report showed that the agency’s performance reporting system received a code of 2.0, showing that leveraging data had strong documentation. However, HUD appears to have misinterpreted the information we presented on this matter. We found that HUD’s data on leveraging received an average code of 1.0, indicating that such information had weak documentation. 
Lastly, HUD recommended that it be allowed to alleviate GAO’s concerns about the reliability of its leveraging data by demonstrating how the data were tracked and recorded in its performance reporting system. However, the data reliability problems we found during the course of this work were due not to concerns about the system used to track and record the data, but rather to the frequent lack of supporting documentation for the data entered into the system. 6. HUD commented that our report did not adequately address HUD’s performance reporting system and its role in HUD’s oversight of the urban Round I EZ and ECs. We acknowledge that HUD established the system in response to an earlier GAO recommendation and has since used it to oversee Round I EZs and ECs. Moreover, we agree that the system contains a variety of information and data elements, including activities implemented and program outputs. We also acknowledge that the performance reporting system is not intended to be a financial system for Round I. However, as discussed in our report, we found that because the system did not always contain information on what was spent on activities and did not always contain reliable information, HUD and the other federal agencies were limited in their ability to oversee the program. 7. HUD commented that the program’s design was significant because it provided insight about the nature and extent of the federal, state, and local attitudes that existed at the time of the first Round of EZs/ECs. HUD also stated that it did not conduct monitoring of the SSBG funds because monitoring those funds was the responsibility of HHS. HUD’s statement further supports our discussion on the limitation in the oversight of the EZ/EC program that may have resulted from the program’s design. Although we found program oversight was hindered, we also reported that no single federal agency had sole responsibility for oversight. 
We do not agree with HUD’s recommendation that we make clear that more oversight was not allowed in Round I. For example, early in the program HUD and HHS made some efforts to share information. Specifically, HUD officials said that they had received fiscal data from HHS and reconciled that information with their program data on the activities implemented, but these efforts to share information were not maintained. Regarding the second recommendation, although HUD described some of its efforts to monitor the program according to applicable regulations, the oversight concerns we identified in the report remain. 8. We reported that limitations in the oversight of the EZ/EC program may have resulted from the design of the program. 9. We stated in the report that the concerns raised about program oversight for the Round I EZ/EC program may not apply to future rounds of the EZ/EC program. We also acknowledge that HUD may have made changes in its oversight of later rounds of the program. However, an evaluation of later rounds of the EZ/EC and Renewal Community programs is beyond the scope of this report. 10. In our report, we acknowledged HUD’s as well as the other agencies’ response to the recommendation in our 2004 report to identify a cost- effective means of collecting the data needed to assess the use of the tax benefits. 11. Our report acknowledged the collaboration among HUD, IRS, and USDA in addressing our previous recommendation and summarizes the outcome of their discussions, including the identification of two data collection methods—through a national survey or by modifying the tax forms. In addition, our report also acknowledged that IRS did not have any data for some program tax benefits. The lack of data on the use of tax benefits continues to be a source of concern that limits an assessment of the effect of the EZ/EC program. 12. 
We agree that HUD’s effort to develop a methodology to administer a survey to businesses to assess the use of the program tax benefits is a useful step in gathering such information. 13. We recognize the joint efforts of HUD and Treasury to share national-level data on EZ businesses’ use of tax credits for employing EZ residents. However, as we mention in our report, data on the EZ employment tax benefit were limited because they could not be linked to the specific EZ claiming the benefit. 14. In the absence of other data, we acknowledge HUD’s efforts to capture anecdotal information on the use of program tax benefits by EZ businesses. 15. We recognize HUD’s efforts to market the EZ/EC program tax benefits. 16. We appreciate HUD’s suggestion on how to approach evaluations of later rounds of the EZ/EC and Renewal Community programs and welcome the opportunity to discuss these ideas. 17. We appreciate HUD’s comments on the descriptive information on the EZs and ECs we visited that is discussed in appendix IV. 18. HUD commented that the measures used in our report—poverty, unemployment, and economic growth—were used in the application process and were not intended to be used as performance measures. However, as mentioned earlier, our mandate was to assess the effects of the EZ/EC program on poverty, unemployment, and economic growth. 19. HUD suggested that we consider additional methodologies for measuring the effects of the EZ/EC program, such as trend analysis using data from 1990 through 1995 and 1995 through 2000. To conduct our work, we used 1990 and 2000 data to measure changes in poverty and unemployment and 1995, 1999, and 2004 data to measure changes in economic growth. We chose these dates because data were available at the census tract level for these years. 
Moreover, in designing our methodology for our econometric analysis, we conducted a literature review and discussed our methodology with several experts in the urban studies field and determined that the approach presented in this report was effective in meeting the objectives of our mandate. As mentioned in appendix II, we also conducted several tests of the robustness of our models, all of which yielded consistent results. The approach that HUD suggested controlled for trends that began before the EZs were designated in 1994. Because we did not have data on poverty or unemployment for 1995, we were unable to use this approach. However, our use of housing trends between 1990 and 1994 in our econometric model controlled for some trends that were in place prior to EZ designation. HUD also suggested that a longitudinal case study approach might be the best way to assess the effectiveness of this type of program. Although a longitudinal case study approach would be informative, it is unlikely that a successful retrospective longitudinal study could be designed at the end of the program. As HUD noted, this intervention was intended to be implemented over a 10-year period. However, a longitudinal case study approach would necessitate data collection beginning at the inception of the program and continuing for the duration of the program as well as for some period of time after it ends. Comments from the U.S. Department of Agriculture GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the individual named above, Charles Wilson, Jr., Assistant Director, Carl Barden, Mark Braza, Marta Chaffee, Emily Chalmers, Nadine Garrick, Kenrick Isaac, DuEwa Kamara, Austin Kelly, Terence Lam, John Larsen, Alison Martin, Denise McCabe, John McGrail, John Mingus, Jr., Marc Molino, Gretchen Maier Pattison, James Vitarello, and Daniel Zeno made key contributions to this report.
The Empowerment Zone/Enterprise Community (EZ/EC) program is one of the most recent large-scale federal efforts intended to revitalize impoverished urban and rural communities. There have been three rounds of EZs and two rounds of ECs, all of which are scheduled to end no later than December 2009. The Community Renewal Tax Relief Act of 2000 mandated that GAO audit and report in 2004, 2007, and 2010 on the EZ/EC program and its effect on poverty, unemployment, and economic growth. This report, which focuses on the first round of the program starting in 1994, discusses program implementation; program oversight; data available on the use of program tax benefits; and the program's effect on poverty, unemployment, and economic growth. In conducting this work, GAO made site visits to all Round I EZs, conducted an e-mail survey of 60 Round I ECs, and used several statistical methods to analyze program effects. Round I Empowerment Zones (EZ) and Enterprise Communities (EC) implemented a variety of activities using $1 billion in federal grant funding from the Department of Health and Human Services (HHS), and as of March 2006, the designated communities had expended all but 15 percent of this funding. Most of the activities that the grant recipients put in place were community development projects, such as projects supporting education and housing. Other activities included economic opportunity initiatives such as job training and loan programs. Although all EZs and ECs also reported using the program grants to leverage funds from other sources, reliable data on the extent of leveraging were not available. According to federal standards, agencies should oversee the use of public resources and ensure that ongoing monitoring occurs. 
However, none of the federal agencies that were responsible for program oversight--including HHS and the departments of Housing and Urban Development (HUD) and Agriculture (USDA)--collected data on the amount of program grant funds used to implement specific program activities. This lack of data limited both federal oversight and GAO's ability to assess the effect of the program. Moreover, because HHS did not provide the states and designated communities with clear guidance on how to monitor the program grant funds, the extent of monitoring varied across the sites. In addition, detailed Internal Revenue Service (IRS) data on the use of EZ/EC program tax benefits were not available. Previously, GAO cited similar challenges in assessing the use of tax benefits in other federal programs and stated that information on tax expenditures should be collected to ensure that these expenditures are achieving their intended purpose. Although GAO recommended in 2004 that HUD, USDA, and IRS work together to identify the data needed to assess the EZ/EC tax benefits and the cost effectiveness of collecting the information, the three agencies did not reach agreement on an approach. Without adequate data on the use of program grant funds or tax benefits, neither the responsible federal agencies nor GAO could determine whether the EZ/EC funds had been spent effectively or that the tax benefits had in fact been used as intended. Using the data that were available, GAO attempted to analyze changes in several indicators--poverty and unemployment rates and two measures of economic growth. Although improvements in poverty, unemployment, and economic growth had occurred in the EZs and ECs, GAO's econometric analysis of the eight urban EZs could not tie these changes definitively to the EZ designation.
Background The U.S. surface and maritime transportation systems facilitate mobility through an extensive network of infrastructure and operators, as well as through the vehicles and vessels that permit passengers and freight to move within the systems. The systems include 3.9 million miles of public roads, 121,000 miles of major private railroad networks, and 25,000 miles of commercially navigable waterways. They also include over 500 major urban public transit operators in addition to numerous private transit operators, and more than 300 ports on the coasts, Great Lakes, and inland waterways. Maintaining transportation systems is critical to sustaining America’s economic growth. Efficient mobility systems significantly affect economic development: cities could not exist and global trade could not occur without systems to transport people and goods. The pressures on the existing transportation system are mounting, however, as both passenger and freight travel are expected to increase over the next 10 years, according to Department of Transportation (DOT) projections. Passenger vehicle travel on public roads is expected to grow by 24.7 percent from 2000 to 2010. Passenger travel on transit systems is expected to increase by 17.2 percent over the same period. Amtrak has estimated that intercity passenger rail ridership will increase by 25.9 percent from 2001 to 2010. Preliminary estimates by DOT indicate that tons of freight moved on all surface and maritime modes—truck, rail, and water—are expected to increase by 43 percent from 1998 through 2010, with the largest increase expected to be in the truck sector. The key factors behind increases in passenger travel, and the modes travelers choose, are expected to be population growth, the aging of the population, and rising affluence. For freight movements, economic growth, increasing international trade, and the increasing value of cargo shipped may affect future travel levels and the modes used to move freight. 
The relative roles of each sector involved in surface and maritime transportation activities—including the federal government, other levels of government, and the private sector—vary across modes. For public roads, the federal government owns few roads but has played a major role in funding the nation’s highways. With the completion of the interstate highway system in the 1980s—and continuing with passage of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) and its successor legislation, the Transportation Equity Act for the 21st Century (TEA-21), in 1998—the federal government shifted its focus toward preserving and enhancing the capacity of the system. While the federal government’s primary role has been to provide capital funding for the interstate system and other highway projects, state and local governments provide the bulk of the funding for public roads in the United States and are responsible for operating and maintaining all nonfederal roads, including the interstate system. For transit systems—which include a variety of multiple-occupancy vehicle services designed to transport passengers on local and regional routes—the federal government provides financial assistance to state and local transit operators to develop new transit systems and improve, maintain, and operate existing systems. The largest portion of capital funding for transit comes from the federal government, while the primary source for operating funds comes from passenger fares. The respective roles of the public and private sector and the revenue sources vary for passenger as compared with freight railroads. For passenger railroads, the Rail Passenger Service Act of 1970 created Amtrak to provide intercity passenger rail service because existing railroads found such service unprofitable. 
Since its founding, Amtrak has rebuilt rail equipment and benefited from significant public investment in track and stations, especially in the Northeast corridor, which runs between Boston and Washington, D.C. The role of the federal government in providing financial support to Amtrak is currently under review amid concerns about the corporation’s financial viability and discussions about the future direction of federal policy toward intercity rail service. For freight railroads, the private sector owns, operates, and provides almost all of the financing for freight railroads. Currently, the federal government plays a relatively small role in financing freight railroad infrastructure by offering some credit assistance to state and local governments and railroads for capital improvements. The U.S. maritime transportation system primarily consists of waterways, ports, the intermodal connections (e.g., inland rail and roadways) that permit passengers and cargo to reach marine facilities, and the vessels and vehicles that move cargo and people within the system. The maritime infrastructure is owned and operated by an aggregation of state and local agencies and private companies, with some federal funding provided by the Corps of Engineers, the U.S. Coast Guard, and DOT’s Maritime Administration. Funding authorization for several key federal surface transportation programs will expire soon. For example, TEA-21’s authorization of appropriations expires in fiscal year 2003 and the Amtrak Reform and Accountability Act of 1997 authorized federal appropriations for Amtrak through the end of fiscal year 2002. In addition, the federal funding processes and mechanisms for the maritime transportation system are currently under review by two interagency groups. Key Mobility Challenges Include Growing Congestion and Other Problems There are several challenges to mobility. 
Three of the most significant are growing congestion, ensuring access to transportation for certain underserved populations, and addressing the transportation system’s negative effects on the environment and communities. Congestion Ensuring continued mobility involves preventing congestion from overwhelming the transportation system. Congestion is growing at localized bottlenecks (places where the capacity of the transportation system is most limited) and at peak travel times on public roads, transit systems, freight rail lines, and at freight hubs such as ports and borders where freight is transferred from one mode to another. In particular: For local urban travel, a study by the Texas Transportation Institute showed that the share of traffic experiencing congestion during peak travel periods doubled, from 33 percent in 1982 to 66 percent in 2000, in the 75 metropolitan areas studied. In addition, the average time per day that roads were congested increased over this period, from about 4.5 hours in 1982 to about 7 hours in 2000. Increased road congestion can also affect public bus and other transit systems that operate on roads. Some transit systems are also experiencing increasing rail congestion at peak travel times. In addition, concerns have been raised about how intercity and tourist travel interacts with local traffic in metropolitan areas and in smaller towns and rural areas, and how this interaction will evolve in the future. According to a report sponsored by the World Business Council for Sustainable Development, Mobility 2001, capacity problems for intercity travelers are severe in certain heavily traveled corridors, such as the Northeast corridor, which links Washington, D.C., New York, and Boston. In addition, the study said that intercity travel may constitute a substantial proportion of total traffic passing through smaller towns and rural areas. 
Congestion is expected to increase on major freight transportation networks at specific bottlenecks, particularly where intermodal connections occur, and at peak travel times. This expectation raises concerns about how the interaction between freight and passenger travel, and the growth in both types of travel, will affect mobility in the future. Trucks contribute to congestion in metropolitan and other areas where they generally move on the same roads and highways as personal vehicles, particularly during peak periods of travel. In addition, high demand for freight, particularly freight moved on trucks, exists in metropolitan areas where overall congestion tends to be the worst. With international trade an increasing part of the economy and with larger containerships being built, some panelists indicated that more pressure will be placed on the already congested road and rail connections to major U.S. seaports and at the border crossings with Canada and Mexico. According to a DOT report, more than one-half of the ports responding to a 1997 survey of port access issues identified traffic impediments on local truck routes as the major infrastructure problem. This congestion has considerable implications for our economy given that 95 percent of our overseas trade tonnage moves by water, and the cargo moving through the U.S. marine transportation system contributes billions of dollars to the U.S. gross domestic product. Railroads are beginning to experience more severe capacity constraints in heavily used corridors, such as the Northeast corridor, and within major metropolitan areas, especially where commuter and intercity passenger rail services share tracks with freight railroads. Capacity constraints at these bottlenecks are expected to worsen in the future. On the inland waterways, congestion is increasing at aging and increasingly unreliable locks. 
According to the Corps of Engineers, the number of hours that locks were unavailable due to lock failures increased in recent years, from about 35,000 hours in 1991 to 55,000 hours in 1999, occurring primarily on the upper Mississippi and Illinois rivers. Also according to the Corps of Engineers, with expected growth in freight travel, 15 of 26 locks that they studied are expected to exceed 80 percent of their capacity by 2020, as compared to 4 that had reached that level in 1999. Some of the systemic factors that contribute to congestion include (1) barriers to building enough capacity to accommodate growing levels of travel; (2) challenges to effectively managing and operating transportation systems; and (3) barriers to effectively managing how, and the extent to which, transportation systems are used. First, there is insufficient capacity at bottlenecks and during peak travel times to accommodate traffic levels for a variety of reasons. For example, transportation infrastructure (which is generally provided by the public sector, except for freight railroads) takes a long time to plan and build, is often costly, and can conflict with other social goals such as environmental preservation and community maintenance. Furthermore, funding and planning rigidities in the public institutions responsible for providing transportation infrastructure tend to promote one mode of transportation, rather than a combination of balanced transportation choices, making it more difficult to deal effectively with congestion. In addition, some bottlenecks occur where modes connect, and because funding is generally mode-specific, dealing with congestion at these intermodal connections is not easily addressed. Second, many factors related to the management and operation of transportation systems can contribute to increasing congestion. 
Congestion on highways is in part due to poor management of traffic flows on the connectors between highways and poor management in clearing roads that are blocked due to accidents, inclement weather, or construction. For example, in the 75 metropolitan areas studied by the Texas Transportation Institute, 54 percent of annual vehicle delays in 2000 were due to incidents such as breakdowns or crashes. In addition, the Oak Ridge National Laboratory reported that, nationwide, significant delays are caused by work zones on highways; poorly timed traffic signals; and snow, ice, and fog. Third, some panelists said that congestion on transportation systems is also due in part to inefficient pricing of the infrastructure because users— whether they are drivers on a highway or barge operators moving through a lock—do not pay the full costs they impose on the system and on other users for their use of the system. If travelers and freight carriers had to pay a higher cost for using transportation systems during peak periods to reflect the full costs they impose, they might have an incentive to avoid or reschedule some trips and to load vehicles more fully, possibly resulting in less congestion. Panelists also noted that the types of congestion problems that are expected to worsen involve interactions between long-distance and local traffic and between passengers and freight. Existing institutions may not have the capacity or the authority to address them. For example, some local bottlenecks may hinder traffic that has regional or national significance, such as national freight flows from major coastal ports, or can affect the economies and traffic in more than one state. Current state and local planning organizations may have difficulty considering all the costs and benefits related to national or international traffic flows that affect other jurisdictions as well as their own. 
Furthermore, in our recent survey of states, most states reported that the increasing volume of both car and truck traffic over the next decade would negatively affect the physical condition of pavement and bridges and the safety of their interstate highways. Other Mobility Challenges Besides dealing with the challenge of congestion, ensuring mobility also involves ensuring access to transportation for certain underserved populations. Settlement patterns and dependence on automobiles limit access to transportation systems for some elderly people, low-income households, and residents of rural areas, where populations are expected to expand. The elderly have different mobility challenges than other populations because they are less likely to have drivers’ licenses, have more serious health problems, and may require special services and facilities, according to the Department of Transportation’s 1999 Conditions and Performance report. People who cannot drive themselves tend to rely on family, other caregivers, or friends to drive them, or find alternative means of transportation. Many of the elderly also may have difficulty using public transportation due to physical ailments. As a result, according to the 1999 Conditions and Performance report and a 1998 report about mobility for older drivers, they experience increased waiting times, uncertainty, and inconvenience, and they are required to do more advance trip planning. These factors can lead to fewer trips taken for necessary business and for recreation, as well as restrictions on the times and places at which health care can be obtained. As the population of elderly individuals increases over the next 10 years, issues pertaining to access are expected to become more prominent in society. Lower income levels can also be a significant barrier to transportation access. 
The cost of purchasing, insuring, and maintaining a car is prohibitive to some households, and 26 percent of low-income households do not own a car, compared with 4 percent of other households, according to the 1999 Conditions and Performance report. Among all low-income households, about 8 percent of trips are made in cars that are owned by others, as compared to 1 percent for other income groups. Furthermore, in relying on others for transportation, this group faces uncertainties and inconveniences similar to those faced by the elderly. In addition, in case studies of access to jobs for low-income populations, Federal Transit Administration (FTA) researchers found that transportation barriers to job access included gaps in transit service, lack of knowledge of where transit services are provided, and high transportation costs resulting from multiple transfers and long distances traveled. Rural populations, which according to the 2000 Census grew by 10 percent over the last 10 years, also face access problems. Access to some form of transportation is necessary to connect rural populations to jobs and other amenities in city centers or, increasingly, in the suburbs. Trips by rural residents tend to be longer due to lower population densities and the relative isolation of small communities. Therefore, transportation can be a challenge to provide in rural areas, especially for persons without access to private automobiles. A report prepared for the FTA in 2001 found that 1 in 13 rural residents lives in a household without a personal vehicle. In addition, according to a report by the Coordinating Council on Access and Mobility, while almost 60 percent of all nonmetropolitan counties had some public transportation services in 2000, many of these operations were small and offered services only to limited geographic areas during limited times. 
Finally, transportation can also negatively affect the environment and communities by increasing the levels of air and water pollution. As a result of the negative consequences of transportation, tradeoffs must be made between facilitating increased mobility and giving due regard to environmental and other social goals. For example, transportation vehicles are major sources of local, urban, and regional air pollution because they depend on fossil fuels to operate. Emissions from vehicles include sulfur dioxide, lead, carbon monoxide, volatile organic compounds, particulate matter, and nitrous oxides. Vehicle emissions in congested areas can trigger respiratory and other illnesses, and runoff from impervious surfaces, such as highways, can carry pollutants into lakes, streams, and rivers, thus threatening aquatic environments. Freight transportation also has significant environmental effects. Trucks are significant contributors to air pollution. According to the American Trucking Association, trucks were responsible for 18.5 percent of nitrous oxide emissions and 27.5 percent of other particulate emissions from mobile sources in the United States. The Mobility 2001 report states that freight trains also contribute to emissions of hydrocarbons, carbon monoxide, and nitrous oxide, although generally at levels considerably lower than trucks. In addition, while large shipping vessels are more energy efficient than trucks or trains, they are also major sources of nitrogen, sulfur dioxide, and diesel particulate emissions. According to the International Maritime Organization, ocean shipping is responsible for 22 percent of the wastes dumped into the sea on an annual basis. Three Strategies for Addressing Mobility Challenges The experts we consulted presented numerous approaches for addressing the types of challenges discussed throughout this statement, but they emphasized that no single strategy would be sufficient. 
From these discussions and our literature review, we have identified three key strategies that may help transportation decisionmakers at all levels of government address mobility challenges and the institutional barriers that contribute to them. The strategies include (1) focusing on systemwide outcomes, (2) using a full range of techniques, and (3) providing options for financing surface and maritime transportation. Focus on the Entire Surface and Maritime Transportation System Rather Than on Specific Modes or Types of Travel to Achieve Desired Mobility Outcomes. Shifting the focus of government transportation agencies at the federal, state, and local levels to consider all modes and types of travel in addressing mobility challenges—as opposed to focusing on a specific mode or type of travel in planning and implementing mobility improvements—could help achieve enhanced mobility. Addressing the types of mobility challenges discussed earlier in this statement can require a scope beyond a local jurisdiction, state line, or one mode or type of travel. For example, congestion challenges often occur where modes connect or should connect—such as ports or freight hubs where freight is transferred from one mode to another, or airports that passengers need to access by car, bus, or rail. These connections require coordination of more than one mode of transportation and cooperation among multiple transportation providers and planners, such as port authorities, metropolitan planning organizations (MPO), and private freight railroads. Therefore, a systemwide approach to transportation planning and funding, as opposed to focusing on a single mode or type of travel, could improve the focus on outcomes related to user or community needs. The experts we consulted provided a number of examples of alternative transportation planning and funding systems that might better focus on outcomes that users and communities desire, including the following: Performance-oriented funding system. 
The federal government would first define certain national interests of the transportation system—such as maintaining the entire interstate highway system or identifying freight corridors of importance to the national economy—then set national performance standards for those systems that states and localities must meet. Federal funds would be distributed to those entities that address national interests and meet the established standards. Any federal funds remaining after meeting the performance standards could then be used for whatever transportation purpose the state or locality deems most appropriate to achieve state or local mobility goals. Federal financial reward-based system. Federal support would reward those states or localities that apply federal money to gain efficiencies in their transportation systems, or tie transportation projects to land use and other local policies to achieve community and environmental goals, as well as mobility goals. System with different federal matching criteria for different types of expenditures that might reflect federal priorities. For example, if infrastructure preservation became a higher national priority than building new capacity, matching requirements could be changed to a 50 percent federal share for building new physical capacity and an 80 percent federal share for preservation. System in which state and local governments pay for a larger share of transportation projects, which might provide them with incentives to invest in more cost-effective projects. Reducing the federal match for projects in all modes may give states and localities more fiscal responsibility for projects they are planning. If cost savings resulted, these entities might have more funds available to address other mobility challenges. Making federal matching requirements equal for all modes may avoid creating incentives to pursue projects in one mode that might be less effective than projects in other modes. 
In addition, we recently reported on the need to view various transportation modes, and freight movement in particular, from an integrated standpoint, particularly for the purposes of developing a federal investment strategy and considering alternative funding approaches. We identified four key components of a systematic framework to guide transportation investment decisions including (1) establishing national goals for the system, (2) clearly defining the federal role relative to other stakeholders, (3) determining the funding tools and other approaches that will maximize the impact of any federal investment, and (4) ensuring that a process is in place for evaluating performance and accountability. Use a Full Range of Techniques to Address Mobility Challenges Using a range of techniques to address mobility challenges may help control congestion and improve access. This approach involves a strategic mix of construction, corrective and preventive maintenance, rehabilitation, operations and system management, and managing system use through pricing or other techniques. No one type of technique would be sufficient to address mobility challenges. Although these techniques are currently in use, the experts we consulted indicated that planners should more consistently consider a full range of techniques, as follows: Build new infrastructure. Building additional infrastructure is perhaps the most familiar technique for addressing congestion and improving access to surface and maritime transportation. Although there is a lot of unused capacity in the transportation system, certain bottlenecks and key corridors require new infrastructure. Increase infrastructure maintenance and rehabilitation. An emphasis on enhancing capacity from existing infrastructure through increased corrective and preventive maintenance and rehabilitation is an important supplement to, and sometimes a substitute for, building new infrastructure. 
Maintaining and rehabilitating transportation systems can improve the speed and reliability of passenger and freight travel, thereby optimizing capital investments. Improve management and operations. Better management and operation of existing surface and maritime transportation infrastructure is another technique for enhancing mobility because it may allow the existing transportation system to accommodate additional travel without having to add new infrastructure. For example, the Texas Transportation Institute reported that coordinating traffic signal timing with changing traffic conditions could improve flow on congested roadways. One panelist noted that shifting the focus of transportation planning from building capital facilities to an “operations mindset” will require a cultural shift in many transportation institutions, particularly in the public sector, so that the organizational structure, hierarchy, and rewards and incentives are all focused on improving transportation management and operations. Increase investment in technology. Increasing public sector investment in Intelligent Transportation System (ITS) technologies, which are designed to enhance the safety, efficiency, and effectiveness of the transportation network, can serve as a way of increasing capacity and mobility without making major capital investments. ITS includes technologies that improve traffic flow by adjusting signals, facilitating traffic flow at toll plazas, alerting emergency management services to the locations of crashes, increasing the efficiency of transit fare payment systems, and other actions. Other technological improvements include increasing information available to users of the transportation system to help people avoid congested areas and to improve customer satisfaction with the system. Use demand management techniques. 
Another approach to reducing congestion without making major capital investments is to use demand management techniques to reduce the number of vehicles traveling at the most congested times and on the most congested routes. One type of demand management for travel on public roads is to make greater use of pricing incentives. In particular, some economists have proposed using congestion pricing that involves charging surcharges or tolls to drivers who choose to travel during peak periods when their use of the roads increases congestion. These surcharges might help reduce congestion by providing incentives for travelers to share rides, use transit, travel at less congested (generally off-peak) times and on less congested routes, or make other adjustments—and at the same time, generate more revenues that can be targeted to alleviating congestion in those specific corridors. In addition to pricing incentives, other demand management techniques that encourage ride-sharing may be useful in reducing congestion. Ride-sharing can be encouraged by establishing carpool and vanpool staging areas, providing free or preferred parking for carpools and vanpools, subsidizing transit fares, and designating certain highway lanes as high occupancy vehicle (HOV) lanes that can only be used by vehicles with a specified number of people in them (i.e., two or more). Demand management techniques on roads, particularly those involving pricing, often provoke strong political opposition. 
The panelists cited a number of concerns about pricing strategies including (1) the difficulty in instituting charges to use roads that previously had been available “free”, (2) the equity issues that arise from the potentially regressive nature of these charges (i.e., the surcharges constitute a larger portion of the earnings of lower income households and therefore impose a greater financial burden on them), and (3) the concern that restricting lanes or roads to people who pay to use them is elitist because that approach allows people who can afford to pay the tolls to avoid congestion that others must endure. Provide Options for Financing Mobility Improvements and Consider Additional Sources of Revenue More options for financing surface and maritime transportation projects and more sources of revenue may be needed to achieve desired mobility outcomes and address those segments of transportation systems that are most congested. Our panelists suggested three financing strategies: Increase funding flexibility. The current system of financing surface and maritime transportation projects limits options for addressing mobility challenges. For example, separate funding for each mode at the federal, state, and local level can make it difficult to consider possible efficient and effective ways for enhancing mobility. Providing more flexibility in funding across modes could help address this limitation. Expand support for alternative financing mechanisms. The public sector could also expand its financial support for alternative financing mechanisms to access new sources of capital and stimulate additional investment in surface and maritime transportation infrastructure. These mechanisms include both newly emerging and existing financing techniques such as providing credit assistance to state and local governments for capital projects and using tax policy to provide incentives to the private sector for investing in surface and maritime transportation infrastructure. 
These mechanisms currently provide a small portion of the total funding that is needed for capital investment, and some of them could create future funding difficulties for state and local agencies because they involve greater borrowing from the private sector. Consider new revenue sources. A possible future shortage of revenues may limit efforts to address mobility challenges, according to many of the panelists. For example, some panelists said that because of the increasing use of alternative fuels, revenues from the gas tax are expected to decrease, possibly limiting funds available to finance future transportation projects. One method of raising revenue is for counties and other regional authorities to impose sales taxes for funding transportation projects. A number of counties have already passed such taxes and more are being considered nationwide. However, several panelists expressed concerns that this method might not be the best option for addressing mobility challenges because (1) moving away from transportation user charges to sales taxes that are not directly tied to the use of transportation systems weakens the ties between transportation planning and finance and (2) counties and other taxing authorities may be able to bypass traditional state and metropolitan planning processes because sales taxes provide them with their own funding sources for transportation. New or increased taxes or other fees imposed on the freight sector could also help fund mobility improvements, for example, by increasing taxes on freight trucking. The Joint Committee on Taxation estimated that raising the ceiling on the tax paid by heavy vehicles to $1,900 could generate about $100 million per year. Another revenue-raising method would be to dedicate more of the revenues from taxes on alternative fuels, such as gasohol, to the Highway Trust Fund rather than to Treasury’s general fund, as currently happens. 
However, this would decrease the amount of funds available for other federal programs. Finally, pricing strategies, mentioned earlier in this statement as a technique to reduce congestion, are also possible additional sources of revenue for transportation purposes. In summary, the nation faces significant challenges in maintaining and enhancing mobility on its surface and maritime transportation systems, particularly with the growing congestion that accompanies increased passenger and freight travel. However, as the Congress considers reauthorizing surface transportation legislation—and weighs the structure, nature, and level of federal investment it will provide in future years to support surface and other transportation activities—it has an opportunity to consider new strategies for dealing with congestion and promoting enhanced mobility. While no single approach is sufficient, the key strategies that we have outlined today may help transportation decisionmakers at all levels of government address mobility challenges and the institutional barriers that contribute to them. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Committee may have at this time. Contacts and Acknowledgments For further information on this testimony, please contact JayEtta Z. Hecker at (202) 512-2834 or heckerj@gao.gov. Individuals making key contributions to this testimony include Christine Bonham, Jay Cherlow, Colin Fallon, Rita Grieco, David Hooper, Jessica Lucas, Sara Ann Moessbauer, Jobenia Odum, Katherine Siggerud, and Andrew VonAh. Appendix I: Scope and Methodology Our work covered major modes of surface and maritime transportation for passengers and freight, including public roads, public transit, railways, and ports and inland waterways. To identify mobility challenges and strategies for addressing those challenges, we primarily relied upon expert opinion, as well as a review of pertinent literature. 
In particular, we convened two panels of surface and maritime transportation experts to identify mobility issues and gather views about alternative strategies for addressing the issues and challenges to implementing those strategies. We contracted with the National Academy of Sciences (NAS) and its Transportation Research Board (TRB) to provide technical assistance in identifying and scheduling the two panels that were held on April 1 and 3, 2002. TRB officials selected a total of 22 panelists with input from us, including a cross-section of representatives from all surface and maritime modes and from various occupations involved in transportation planning. In keeping with NAS policy, the panelists were invited to provide their individual views and the panels were not designed to build consensus on any of the issues discussed. We analyzed the content of all of the comments made by the panelists to identify common themes about key mobility challenges and strategies for addressing those challenges. Where applicable, we also identified the opposing points of view about the strategies. The names and affiliations of the panelists are as follows. We also note that two of the panelists served as moderators for the sessions, Dr. Joseph M. Sussman of the Massachusetts Institute of Technology and Dr. Damian J. Kulash of the Eno Foundation, Inc. Benjamin J. Allen is Interim Vice President for External Affairs and Distinguished Professor of Business at Iowa State University. Daniel Brand is Vice President of Charles River Associates, Inc., in Boston, Mass. Jon E. Burkhardt is the Senior Study Director at Westat, Inc., in Rockville, Md. Sarah C. Campbell is the President of TransManagement, Inc., in Washington, D.C. Christina S. Casgar is the Executive Director of the Foundation for Intermodal Research and Education in Greenbelt, Md. Anthony Downs is a Senior Fellow at the Brookings Institution. Thomas R. 
Hickey served until recently as the General Manager of the Port Authority Transit Corporation in Lindenwold, N.J. Ronald F. Kirby is the Director of Transportation Planning at the Metropolitan Washington Council of Governments. Damian J. Kulash is the President and Chief Executive Officer of the Eno Transportation Foundation, Inc., in Washington, D.C. Charles A. Lave is a Professor of Economics (Emeritus) at the University of California, Irvine where he served as Chair of the Economics Department. Stephen Lockwood is Vice President of Parsons Corporation, an international firm that provides transportation planning, design, construction, engineering, and project management services. Timothy J. Lomax is a Research Engineer at the Texas Transportation Institute at Texas A&M University. James R. McCarville is the Executive Director of the Port of Pittsburgh Commission. James W. McClellan is Senior Vice President for Strategic Planning at the Norfolk Southern Corporation in Norfolk, Va. Michael D. Meyer is a Professor in the School of Civil and Environmental Engineering at the Georgia Institute of Technology and was the Chair of the school from 1995 to 2000. William W. Millar is President of the American Public Transportation Association (APTA). Alan E. Pisarski is an independent transportation consultant in Falls Church, Va., providing services to public and private sector clients in the United States and abroad in the areas of transport policy, travel behavior, and data analysis and development. Craig E. Philip is President and Chief Executive Officer of the Ingram Barge Company in Nashville, Tenn. Arlee T. Reno is a consultant with Cambridge Systematics in Washington, D.C. Joseph M. Sussman is the JR East Professor in the Department of Civil and Environmental Engineering and the Engineering Systems Division at the Massachusetts Institute of Technology. Louis S. 
Thompson is a Railways Advisor for the World Bank where he consults on all of the Bank’s railway lending activities. Martin Wachs is the Director of the Institute of Transportation Studies at the University of California, Berkeley and he holds faculty appointments in the departments of City and Regional Planning and Civil and Environmental Engineering at the university. Appendix II: Related GAO Products Transportation Infrastructure: Alternative Financing Mechanisms for Surface Transportation. GAO-02-1126T. Washington, D.C.: September 25, 2002. Highway Infrastructure: Preliminary Information on the Timely Completion of Highway Construction Projects. GAO-02-1067T. Washington, D.C.: September 19, 2002. Marine Transportation: Federal Financing and a Framework for Infrastructure Investments. GAO-02-1033. Washington, D.C.: September 9, 2002. Surface and Maritime Transportation: Developing Strategies for Enhancing Mobility: A National Challenge. GAO-02-775. Washington, D.C.: August 30, 2002. Highway Infrastructure: Interstate Physical Conditions Have Improved, but Congestion and Other Pressures Continue. GAO-02-571. Washington, D.C.: May 31, 2002. Highway Financing: Factors Affecting Highway Trust Fund Revenues. GAO-02-667T. Washington, D.C.: May 9, 2002. Transportation Infrastructure: Cost and Oversight Issues on Major Highway and Bridge Projects. GAO-02-702T. Washington, D.C.: May 1, 2002. Intercity Passenger Rail: Congress Faces Critical Decisions in Developing National Policy. GAO-02-522T. Washington, D.C.: April 11, 2002. Environmental Protection: Federal Incentives Could Help Promote Land Use That Protects Air and Water Quality. GAO-02-12. Washington, D.C.: October 31, 2001. Intercity Passenger Rail: The Congress Faces Critical Decisions About the Role of and Funding for Intercity Passenger Rail Systems. GAO-01-820T. Washington, D.C.: July 25, 2001. U.S. Infrastructure: Funding Trends and Federal Agencies’ Investment Estimates. GAO-01-986T. 
Washington, D.C.: July 23, 2001.
The scope of the U.S. surface and maritime transportation systems--which primarily includes roads, mass transit systems, railroads, and ports and waterways--is vast. One of the major goals of these systems is to provide and enhance mobility. With increasing passenger and freight travel, the surface and maritime transportation systems face a number of challenges in ensuring continued mobility. These challenges include: (1) preventing congestion from overwhelming the transportation system, (2) ensuring access to transportation for certain underserved populations, and (3) achieving a balance between enhancing mobility and giving due regard to environmental and other social goals. There is no one solution for the mobility challenges facing the nation, and numerous approaches are needed to address these challenges. These strategies include: (1) focusing on the entire surface and maritime transportation system rather than on specific modes or types of travel to achieve desired mobility outcomes, (2) using a full range of techniques to achieve desired mobility outcomes, and (3) providing more options for financing mobility improvements and considering additional sources of revenue.
Management and Oversight of Indian Energy Resources and Development In our prior work, we identified concerns associated with BIA management of energy resources and categorized them into five broad areas: (1) oversight of BIA activities; (2) collaboration and communication; (3) BIA workforce planning; (4) technology; and (5) BIA’s data. In the past 2 years, we issued three reports on Indian energy resources and development in which we made 14 recommendations to BIA. BIA agreed with most of these recommendations and has identified steps it will take to address some of them. Oversight of BIA Activities In a June 2015 report, we found that BIA review and approval is required throughout the development process, including the approval of leases, right-of-way (ROW) agreements, and appraisals. However, BIA does not have a documented process or the data needed to track its review and response times—such as data on the date documents are received, the date the review process is considered complete by the agency, and the date documents are approved or denied. Moreover, a few stakeholders we interviewed and some literature we reviewed suggested that BIA’s review and approval process can be lengthy and increase development costs and project development times, resulting in missed development opportunities, lost revenue, and jeopardized viability of projects. For example, in 2014, the Acting Chairman for the Southern Ute Indian Tribe reported that BIA’s review of some of its energy-related documents took as long as 8 years. Specifically, as of April 30, 2014, the tribe had been waiting for at least 5 years for BIA to review 81 pipeline ROW agreements—11 of these 81 ROW agreements had been under review for 8 years. According to the tribal official, had these ROW agreements been approved in a timely manner, the tribe would have received revenue through various sources, including tribal permitting fees, oil and gas severance taxes, and royalties. 
The tribal official noted that, during the period of delay, prices for natural gas rose to an historic high but had since declined. Therefore, the official reported that much of the estimated $95 million in lost revenue would never be recovered by the tribe. In another example from our June 2015 report, one lease for a proposed utility-scale wind project took BIA more than 3 years to review and approve, and, according to a tribal official, the lease was reviewed and approved only after multiple calls and letters from the tribe to BIA headquarters. According to a tribal official, the long review time contributed to uncertainty about the continued viability of the project because data used to support the economic feasibility and environmental impact of the project became too old to accurately reflect current conditions. We recommended in our June 2015 report that Interior direct BIA to develop a documented process to track its review and response times. Interior agreed with the recommendation and stated it would try to implement a tracking and monitoring mechanism by the end of fiscal year 2017 for oil and gas leases. However, Interior did not indicate whether it intends to track and monitor its review of other energy-related documents that must be approved before tribes can develop resources. Without comprehensively tracking and monitoring its review process, BIA cannot ensure that documents are moving forward in a timely manner, and lengthy review times may continue to contribute to lost revenue and missed development opportunities for Indian tribes. Further, in a June 2016 report, we found that BIA took steps to improve its process for reviewing revenue-sharing agreements but still had not established a systematic mechanism for monitoring or tracking. We recommended, among other things, that BIA develop a systematic mechanism for tracking these agreements through the review and approval process. 
Interior concurred with this recommendation and stated that BIA would develop such a mechanism and in the meantime would use a centralized tracking spreadsheet. Collaboration and Communication In June 2015, we reported that the added complexity of the federal process, which can include multiple regulatory agencies, prevents many developers from pursuing Indian energy resources for development. In a November 2016 report, we reported that Interior has recognized the need for collaboration in the regulatory process and described the creation of the Indian Energy Service Center as a central point of collaboration for permitting that will break down barriers between federal agencies. We found that BIA had taken steps to form an Indian Energy Service Center that was intended to, among other things, help expedite the permitting process associated with Indian energy development. We reported that the Service Center had the potential to increase collaboration between BIA and BLM on some permitting requirements associated with oil and gas development. However, we found that BIA did not coordinate with other key regulatory agencies, including Interior’s Fish and Wildlife Service, the U.S. Army Corps of Engineers, and the Environmental Protection Agency. As a result, the Service Center was not established as the central point for collaborating with all federal regulatory partners generally involved in energy development, nor did it serve as a single point of contact for permitting requirements. Without serving in these capacities, the Service Center was limited in its ability to improve efficiencies in the federal regulatory process. We also found that in forming the Service Center, BIA did not involve key stakeholders, such as the Department of Energy (DOE)—an agency with significant energy expertise—and BIA employees from agency offices. By not involving key stakeholders, BIA was missing an opportunity to incorporate their expertise into its efforts. 
We recommended that BIA include other regulatory agencies in the Service Center so that it can act as a single point of contact or a lead agency to coordinate and navigate the regulatory process. We also recommended that BIA establish formal agreements with key stakeholders, such as DOE, that identify the advisory or support role of the office, and establish a process for seeking and obtaining input from key stakeholders, such as BIA employees, on the Service Center’s activities. Interior agreed with our recommendations and described its plans to address them. In addition, in 2005, Congress provided an option for tribes to enter into an agreement with the Secretary of the Interior that allows a tribe, at its discretion, to enter into leases, business agreements, and ROW agreements for energy resource development on tribal lands without review and approval by the Secretary. However, in our June 2015 report, we found that uncertainties about Interior’s regulations for implementing this option have deterred tribes from pursuing such agreements. We recommended that Interior provide clarifying guidance. In August 2015, Interior stated the department was considering further guidance. As of December 2016, however, Interior had not provided additional guidance. BIA Workforce Planning In our June 2015 report, we found that BIA’s long-standing workforce challenges, such as inadequate staff resources and staff at some offices without the skills needed to effectively review energy-related documents, were factors hindering Indian energy development. Further, in November 2016, we found that some BIA offices had high vacancy rates for key energy development positions, and some offices reported not having staff with key skills to review energy-related documents. 
For example, BIA agency officials in an area where tribes are considering developing wind farms told us that they would not feel comfortable approving proposed wind leases because their staff do not have the expertise to review such proposals. Consequently, these officials told us that they would send a proposed wind lease to higher-ranking officials in the regional office for review. Similarly, an official from the regional office stated that they do not have the required expertise and would forward such a proposal to senior officials in Interior’s Office of the Solicitor. The Director of BIA told us that BIA agency offices generally do not have the expertise to help tribes with solar and wind development because it is rare that such skills are needed. Through the Indian Energy Service Center, BIA plans to hire numerous new staff over the next 2 years, which could resolve some of the long-standing workforce challenges that have hindered Indian energy development in the past. However, BIA is hiring new staff without incorporating effective workforce planning principles. Specifically, BIA has not assessed the key skills needed to fulfill its responsibilities related to energy development or identified skill gaps, and it does not have a documented process to provide reasonable assurance that its workforce composition at agency offices is consistent with its mission, goals, and tribal priorities. As a result, BIA cannot provide reasonable assurance that it has the right people in place with the right skills to effectively meet its responsibilities or that new staff will fill skill gaps. We recommended in our November 2016 report that BIA assess the critical skills and competencies needed to fulfill its responsibilities related to energy development and identify potential gaps. We also recommended that BIA establish a documented process for assessing its workforce composition at agency offices, taking into account BIA’s mission, goals, and tribal priorities. 
Interior agreed with our recommendations and stated it was taking steps to implement them. Technology In June 2015, we found that BIA did not have the geographic information system (GIS) mapping data necessary for identifying who owns and uses resources, such as existing leases. Interior guidance states that efficient management of oil and gas resources relies, in part, on GIS mapping technology because it allows managers to easily identify resources available for lease and where leases are in effect. According to a BIA official, without GIS data, the process of identifying transactions, such as leases and access agreements for Indian land and resources, can take significant time and staff resources because staff must search paper records stored in multiple locations. We recommended that BIA take steps to improve its GIS capabilities to ensure it can verify ownership in a timely manner. Interior stated it would enhance mapping capabilities by developing, over the next 4 years, a national dataset composed of all Indian land tracts and boundaries. Incomplete and Inaccurate Data In June 2015, we found that BIA did not have the data it needs to verify who owns some Indian oil and gas resources or to identify where leases are in effect. In some cases, BIA cannot verify ownership because federal cadastral surveys—the means by which land is defined, divided, traced, and recorded—cannot be found or are outdated. The ability to account for Indian resources would assist BIA in fulfilling its responsibilities, and determining ownership is a necessary step for BIA to approve leases and other energy-related documents. We recommended that BIA identify its land survey needs. Interior agreed with the recommendation and stated it would develop a data collection tool in fiscal year 2016 to identify the extent of its survey needs. As of December 2016, Interior had not provided information on the status of its efforts to develop that tool. 
In conclusion, our reviews have identified a number of areas in which BIA could improve its management of Indian energy resources. Interior has stated that it intends to take some steps to implement our recommendations, and we will continue to monitor its efforts. We look forward to continuing to work with this committee in overseeing BIA, BIE, and IHS to ensure that they are operating in the most effective and efficient manner, consistent with the federal government’s trust responsibilities, and working toward improving service to tribes and their members. Chairman Farenthold, Ranking Member Plaskett, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. GAO Contacts and Staff Acknowledgments If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Christine Kehr (Assistant Director), Richard Burkard, Jay Spaan, and Kiki Theodoropoulos made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Indian tribes and their members hold considerable energy resources and may decide to use these resources to provide economic benefits and improve the well-being of their communities. However, according to a 2014 Interior document, these resources are underdeveloped relative to surrounding non-Indian resources. Development of Indian energy resources is a complex process that may involve federal, tribal, and state agencies. Interior's BIA has primary authority for managing Indian energy development and generally holds final decision-making authority for leases, permits, and other approvals required for development. GAO's 2017 biennial update to its High Risk List identifies federal management of programs that serve tribes and their members as a new high-risk area needing attention by Congress and the executive branch. This testimony highlights the key findings of three prior GAO reports (GAO-15-502, GAO-16-553, and GAO-17-43). It focuses primarily on BIA's management of Indian energy resources and development. For the prior reports, GAO analyzed federal data; reviewed federal, academic, and other literature; and interviewed tribal, federal, and industry stakeholders. In three prior reports on Indian energy development, GAO found that the Department of the Interior's (Interior) Bureau of Indian Affairs (BIA) has inefficiently managed Indian energy resources and the development process and thereby limited opportunities for tribes and their members to use those resources to create economic benefits and improve the well-being of their communities. GAO has also reported numerous challenges facing Interior's Bureau of Indian Education and BIA and the Department of Health and Human Services' Indian Health Service in administering education and health care services, which put the health and safety of American Indians served by these programs at risk. For the purposes of this testimony, GAO is focusing on the concerns related to Indian energy. 
GAO categorized concerns associated with BIA management of energy resources and the development process into several broad areas, including oversight of BIA activities, collaboration, and BIA workforce planning. Oversight of BIA activities. In a June 2015 report, GAO found that BIA review and approval is required throughout the development process. However, BIA does not have a documented process or the data needed to track its review and response times—such as data on the date documents are received, the date the review process is considered complete, and the date documents are approved or denied. GAO recommended that BIA develop a documented process to track its review and response times. Interior generally agreed and stated it would try to implement a tracking and monitoring mechanism for oil and gas leases by the end of fiscal year 2017. Interior did not indicate whether it intends to track and monitor its review of other energy-related documents that must be approved before tribes can develop resources. Collaboration. In a November 2016 report, GAO found that BIA has taken steps to form an Indian Energy Service Center that is intended to, among other things, help expedite the permitting process associated with Indian energy development. However, BIA did not coordinate with key regulatory agencies, including Interior's Fish and Wildlife Service, the Environmental Protection Agency, and the U.S. Army Corps of Engineers. GAO recommended that BIA include other regulatory agencies in the Service Center so that it can act as a single point of contact or lead agency to coordinate and navigate the regulatory process. Interior agreed with this recommendation and described plans to address it. BIA workforce planning. In June 2015 and in November 2016, GAO reported concerns associated with BIA's long-standing workforce challenges, such as inadequate staff resources and staff at some offices without the skills needed to effectively review energy-related documents. 
GAO recommended that BIA assess the critical skills and competencies needed to fulfill its responsibilities related to energy development, and that it establish a documented process for assessing BIA's workforce composition at agency offices. Interior agreed with these recommendations and stated it is taking steps to implement them.
DHS Has Made Progress in Strengthening Its Management Functions, but Considerable Work Remains DHS Progress in Meeting Criteria for Removal from the High-Risk List DHS’s efforts to strengthen and integrate its management functions have resulted in progress addressing our criteria for removal from the high-risk list. In particular, in our 2015 high-risk update report, which we released earlier this month, we found that DHS has met two criteria and partially met the remaining three criteria, as shown in table 1. Leadership commitment (met). In our 2015 report, we found that the Secretary and Deputy Secretary of Homeland Security, the Under Secretary for Management at DHS, and other senior officials have continued to demonstrate commitment and top leadership support for addressing the department’s management challenges. We also found that they have taken actions to institutionalize this commitment to help ensure the long-term success of the department’s efforts. For example, in April 2014, the Secretary of Homeland Security issued a memorandum entitled Strengthening Departmental Unity of Effort, committing to, among other things, improving DHS’s planning, programming, budgeting, and execution processes through strengthened departmental structures and increased capability. Senior DHS officials, including the Deputy Secretary and Under Secretary for Management, have also routinely met with us over the past 6 years to discuss the department’s plans and progress in addressing this high-risk area. During this time, we provided specific feedback on the department’s efforts. We concluded that it will be important for DHS to maintain its current level of top leadership support and commitment to ensure continued progress in successfully executing its corrective actions through completion. Corrective action plan (met). We found that DHS has established a plan for addressing this high-risk area. 
Specifically, in a September 2010 letter to DHS, we identified, and DHS agreed to achieve, 31 actions and outcomes that are critical to addressing the challenges within the department’s management areas and to integrating those functions across the department. In March 2014, we updated the actions and outcomes in collaboration with DHS to reduce overlap and ensure their continued relevance and appropriateness. These updates resulted in a reduction from 31 to 30 total actions and outcomes. Toward achieving the actions and outcomes, DHS issued its initial Integrated Strategy for High Risk Management in January 2011 and has since provided updates to its strategy in seven later versions, most recently in October 2014. The integrated strategy includes key management initiatives and related corrective action plans for addressing DHS’s management challenges and the actions and outcomes we identified. For example, the October 2014 strategy update includes an initiative focused on financial systems improvement and modernization and an initiative focused on IT human capital management. These initiatives support various actions and outcomes, such as modernizing the U.S. Coast Guard’s financial management system and implementing an IT human capital strategic plan, respectively. We concluded in our 2015 report that DHS’s strategy and approach to continuously refining actionable steps to implementing the outcomes, if implemented effectively and sustained, should provide a path for DHS to be removed from our high-risk list. Capacity (partially met). In October 2014, DHS identified that it had the resources needed to implement 7 of the 11 initiatives the department had under way to achieve the actions and outcomes, but it did not identify sufficient resources for the 4 remaining initiatives. In addition, our prior work has identified specific capacity gaps that could undermine achievement of management outcomes. 
For example, in April 2014, we reported that DHS needed to increase its cost-estimating capacity and that the department had not approved baselines for 21 of 46 major acquisition programs. These baselines—which establish cost, schedule, and capability parameters—are necessary to accurately assess program performance. Thus, in our 2015 report, we concluded that DHS needs to continue to identify resources for the remaining initiatives; work to mitigate shortfalls and prioritize initiatives, as needed; and communicate critical resource gaps to senior leadership. Framework to monitor progress (partially met). In our 2015 report, we found that DHS established a framework for monitoring its progress in implementing the integrated strategy it identified for addressing the 30 actions and outcomes. In the June 2012 update to the Integrated Strategy for High Risk Management, DHS included, for the first time, performance measures to track its progress in implementing all of its key management initiatives. DHS continued to include performance measures in its October 2014 update. However, we also found that the department can strengthen this framework in one area. In particular, according to DHS officials, as of November 2014, they were establishing a monitoring program that will include assessing whether financial management systems modernization projects for key components, which DHS plans to complete in 2019, follow industry best practices and meet users’ needs. Effective implementation of these modernization projects is important because, until they are complete, the department’s current systems will not effectively support financial management operations. As we concluded in our 2015 report, moving forward, DHS will need to closely track and independently validate the effectiveness and sustainability of its corrective actions and make midcourse adjustments, as needed. Demonstrated, sustained progress (partially met). 
We found in our 2015 report that DHS has made important progress in strengthening its management functions, but needs to demonstrate sustainable, measurable progress in addressing key challenges that remain within and across these functions. In particular, we found that DHS has implemented a number of actions demonstrating the department’s progress in strengthening its management functions. For example, DHS has strengthened its enterprise architecture program (or blueprint) to guide and constrain IT acquisitions and obtained a clean opinion on its financial statements for 2 consecutive years, fiscal years 2013 and 2014. However, we also found that DHS continues to face significant management challenges that hinder the department’s ability to accomplish its missions. For example, DHS does not have the acquisition management tools in place to consistently demonstrate whether its major acquisition programs are on track to achieve their cost, schedule, and capability goals. In addition, DHS does not have modernized financial management systems. This affects its ability to have ready access to reliable information for informed decision making. As we concluded in our 2015 report, addressing these and other management challenges will be a significant undertaking that will likely require several years, but will be critical for the department to mitigate the risks that management weaknesses pose to mission accomplishment. DHS Progress in Achieving Key High-Risk Actions and Outcomes Key to addressing the department’s management challenges is DHS demonstrating the ability to achieve sustained progress across the 30 actions and outcomes we identified and DHS agreed were needed to address the high-risk area. In our 2015 report, we found that DHS has fully implemented 9 of these actions and outcomes, with additional work remaining to fully address the remaining 21. 
Achieving sustained progress across the actions and outcomes, in turn, requires leadership commitment, effective corrective action planning, adequate capacity (that is, the people and other resources), and monitoring the effectiveness and sustainability of supporting initiatives. The 30 key actions and outcomes include, among others, validating required acquisition documents in accordance with a department-approved, knowledge-based acquisition process, and sustaining clean audit opinions for at least 2 consecutive years on department-wide financial statements and internal controls. We further found that DHS has made important progress across all of its management functions and significant progress in the area of management integration. In particular, DHS has made important progress in several areas to fully address 9 actions and outcomes, 5 of which it has sustained as fully implemented for at least 2 years. For instance, DHS fully met 1 outcome for the first time by obtaining a clean opinion on its financial statements for 2 consecutive years and fully met another outcome by establishing sufficient component-level acquisition capability. It also sustained full implementation of another outcome by continuing to use performance measures to assess progress made in achieving department-wide management integration. DHS has also mostly addressed an additional 5 actions and outcomes, meaning that a small amount of work remains to fully address them. We also found that considerable work remains, however, in several areas for DHS to fully achieve the remaining actions and outcomes and thereby strengthen its management functions. Specifically, DHS has partially addressed 12 and initiated 4 of the actions and outcomes. As previously mentioned, addressing some of these actions and outcomes, such as modernizing the department’s financial management systems and improving employee morale, are significant undertakings that will likely require multiyear efforts. 
Table 2 summarizes DHS’s progress in addressing the 30 actions and outcomes and is followed by selected examples. Acquisition management. In our 2015 report, we found that DHS has fully addressed 1 of the 5 acquisition management outcomes, partially addressed 3 outcomes, and initiated actions to address the remaining outcome. For example, DHS has recently taken a number of actions to fully address establishing effective component-level acquisition capability. These actions include initiating (1) monthly Component Acquisition Executive staff forums in March 2014 to provide guidance and share best practices and (2) assessments of component policies and processes for managing acquisitions. DHS has also initiated efforts to validate required acquisition documents in accordance with a knowledge-based acquisition process, but this remains a major challenge for the department. A knowledge-based approach provides developers with the information needed to make sound investment decisions, and it would help DHS address significant challenges we have identified across its acquisition programs. DHS’s acquisition policy largely reflects key acquisition management practices, but the department has not implemented it consistently. For example, in March 2014, we found that U.S. Customs and Border Protection (CBP) had not fully followed DHS policy regarding testing for the integrated fixed towers being deployed on the Arizona border. As a result, DHS does not have complete information on how the towers will operate once they are fully deployed. In addition, in our 2015 report we found that DHS continues to assess and address whether appropriate numbers of trained acquisition personnel are in place at the department and component levels, an outcome it has partially addressed. 
Further, while DHS has initiated efforts to demonstrate that major acquisition programs are on track to achieve their cost, schedule, and capability goals, DHS officials have acknowledged it will be years before this outcome has been fully addressed. Much of the necessary program information is not yet consistently available or up to date. IT management. In our 2015 report, we found that DHS has fully addressed 2 of the 6 IT management outcomes, mostly addressed another 3, and partially addressed the remaining 1. For example, DHS has finalized a directive to establish its tiered governance and portfolio management structure for overseeing and managing its IT investments, and it annually reviews each of its portfolios and the associated investments to determine the most efficient allocation of resources within each of the portfolios. DHS has also implemented its IT Strategic Human Capital Plan at the enterprise level. This includes developing an IT specialist leadership competency gap workforce analysis and a DHS IT career path pilot. However, as DHS has not yet determined the extent to which the component chief information officers have implemented the enterprise human capital plan’s objectives and goals, DHS’s capacity to achieve this outcome is unclear. Additionally, we found that DHS continues to take steps to enhance its information security program. However, while the department obtained a clean opinion on its financial statements, in November 2014 the department’s financial statement auditor reported that continued flaws in security controls—such as those for access controls, configuration management, and segregation of duties—were a material weakness for fiscal year 2014 financial reporting. Thus, the department needs to remediate the material weakness in information security controls reported by its financial statement auditor. Financial management. 
In our 2015 report, we found that DHS has fully addressed 2 financial management outcomes, partially addressed 3, and initiated 3. Most notably, DHS received a clean audit opinion on its financial statements for 2 consecutive years, fiscal years 2013 and 2014, fully addressing 2 outcomes. As of November 2014, DHS was working toward addressing a third outcome—establishing effective internal control over financial reporting. We reported in September 2013 that DHS needs to eliminate all material weaknesses at the department level, including weaknesses related to financial management systems, before its financial auditor can affirm that controls are effective. However, as we reported in our 2015 report, DHS has yet to identify and commit the resources needed for remediating the remaining material weaknesses. As we reported in September 2013, according to DHS’s auditors, the existence of these material weaknesses limits DHS’s ability to process, store, and report financial data in a manner that ensures accuracy, confidentiality, integrity, and availability of data without substantial manual intervention. This, in turn, increases the risk that human error may cause material misstatements in the financial statements. We also found in our 2015 report that DHS needs to modernize key components’ financial management systems and comply with financial management system requirements. The components’ financial management system modernization efforts are at various stages due, in part, to a bid protest and the need to resolve critical stability issues with a legacy financial system before moving forward with system modernization efforts. For fiscal year 2014, auditors reported that persistent and pervasive financial system functionality conditions exist at multiple components and that DHS continues to rely on compensating controls and complex manual work-arounds due to serious legacy financial system issues. 
We concluded that without sound controls and systems, DHS faces long-term challenges in obtaining and sustaining a clean audit opinion on internal control over financial reporting, and in ensuring its financial management systems generate reliable, useful, and timely information for day-to-day decision making. Human capital management. In our 2015 report, we found that DHS has fully addressed 1 human capital management outcome, mostly addressed 2, and partially addressed the remaining 4. For example, the Secretary of Homeland Security signed a human capital strategic plan in 2011 that DHS has since made sustained progress in implementing, fully addressing this outcome. We also found that DHS has actions under way to identify current and future human capital needs. However, DHS has considerable work ahead to improve employee morale. For example, the Office of Personnel Management’s 2014 Federal Employee Viewpoint Survey data showed that DHS’s scores continued to decrease in all four dimensions of the survey’s index for human capital accountability and assessment—job satisfaction, talent management, leadership and knowledge management, and results-oriented performance culture. DHS has taken steps to identify where it has the most significant employee satisfaction problems and developed plans to address those problems. In September 2012, we recommended, among other things, that DHS improve its root-cause analysis efforts related to these plans. In December 2014, DHS reported actions under way to address our recommendations but had not fully implemented them. Given the sustained decrease in DHS employee morale indicated by Federal Employee Viewpoint Survey data, as we concluded in our 2015 report, it is particularly important that DHS implement these recommendations and thereby help identify appropriate actions to take to improve morale within its components and department-wide. 
In DHS Training: Improved Documentation, Resource Tracking, and Performance Measurement Could Strengthen Efforts, GAO-14-688 (Washington, D.C.: Sept. 10, 2014), we found that while component officials generally identified the Leader Development Framework as beneficial, DHS management could benefit from improved information for identifying the need for and making program improvements. In support of the Leader Development Framework, we recommended, among other things, that DHS clearly identify Leader Development Program goals and ensure program performance measures reflect key attributes. DHS agreed and implemented this recommendation in December 2014. However, to fully achieve this outcome, DHS also needs to develop and make sustained progress in implementing a formal training strategy, as well as issue department-wide policies on training and development, among other things. Management integration. In our 2015 report, we found that DHS has sustained its progress in fully addressing 3 of the 4 outcomes we identified, and DHS agreed, are key to the department’s management integration efforts. For example, in January 2011, DHS issued an initial action plan to guide its management integration efforts—the Integrated Strategy for High Risk Management. Since then, DHS has generally made improvements to the strategy with each update based on feedback we provided. DHS has also shown important progress in addressing the last and most significant management integration outcome—to implement actions and outcomes in each management area to develop consistent or consolidated processes and systems within and across its management functional areas—but we found that considerable work remains. 
For example, the Secretary’s April 2014 Strengthening Departmental Unity of Effort memorandum highlighted a number of initiatives designed to allow the department to operate in a more integrated fashion, such as the Integrated Investment Life Cycle Management initiative, to manage investments across the department’s components and management functions. DHS completed its pilot for a portion of this initiative in March 2014 and, according to DHS’s Executive Director for Management Integration, has begun expanding its application to new portfolios, such as border security and information sharing, among others. However, because these management integration initiatives are in the early stages of implementation and their success is contingent upon DHS following through with its plans, it is too early to assess their impact. To achieve this outcome, we concluded that DHS needs to continue to demonstrate sustainable progress integrating its management functions within and across the department and its components. In our 2015 report, we further concluded that in the coming years, DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes.
In doing so, it will be important for DHS to maintain its current level of top leadership support and sustained commitment to ensure continued progress in executing its corrective actions through completion; continue to implement its plan for addressing this high-risk area and periodically report its progress to us and Congress; identify and work to mitigate any resource gaps, and prioritize initiatives as needed to ensure it can implement and sustain its corrective actions; closely track and independently validate the effectiveness and sustainability of its corrective actions and make midcourse adjustments as needed; and make continued progress in achieving the 21 actions and outcomes it has not fully addressed and demonstrate that systems, personnel, and policies are in place to ensure that progress can be sustained over time. We will continue to monitor DHS’s efforts in this high-risk area to determine if the actions and outcomes are achieved and sustained over the long term.

Key Themes Continue to Impact DHS’s Progress in Implementing Its Mission Functions

In September 2011, we reported that our work had identified three key themes that had impacted DHS’s progress in implementing its mission functions since it began operations: (1) executing and integrating its management functions for results, (2) leading and coordinating the homeland security enterprise, and (3) strategically managing risks and assessing homeland security efforts. As previously discussed, DHS has made important progress with respect to the first theme by strengthening and integrating its management functions, but considerable work remains. Our recent work indicates that DHS has similarly made progress related to the other two themes of leading and coordinating the homeland security enterprise and strategically managing risk and assessing homeland security efforts, but that these two themes continue to impact the department’s progress in implementing its mission functions.
Leading and coordinating the homeland security enterprise. As we reported in September 2011, while DHS is one of a number of entities with a role in securing the homeland, it has significant leadership and coordination responsibilities for managing efforts across the homeland security enterprise. To satisfy these responsibilities, it is critically important that DHS develop, maintain, and leverage effective partnerships with its stakeholders while at the same time addressing DHS-specific responsibilities in satisfying its missions. Before DHS began operations, we reported that to secure the nation, DHS must form effective and sustained partnerships among components and also with a range of other entities, including federal agencies, state and local governments, the private and nonprofit sectors, and international partners. DHS has made important strides in providing leadership and coordinating efforts. For example, in June 2014, we reported on DHS efforts to enhance border security by using collaborative mechanisms such as the Alliance to Combat Transnational Threats to coordinate border security efforts. Specifically, we reported that DHS and CBP had coordinated border security efforts in (1) information sharing, (2) resource targeting and prioritization, and (3) leveraging of assets. For example, through the Alliance to Combat Transnational Threats, interagency partners—including CBP, the Arizona Department of Public Safety, and the Bureau of Land Management, among others—worked jointly to target individuals and criminal organizations involved in illegal cross-border activity. However, our recent work has also identified opportunities for DHS to improve its partnerships. For example, with respect to DHS’s efforts to enhance border security using collaborative mechanisms, in June 2014, we found that DHS had established performance measures and reporting processes for the mechanisms, but opportunities existed to strengthen the mechanisms. 
For instance, we found that establishing written agreements with its federal, state, local, and tribal partners could help DHS address coordination challenges, such as limited resource commitments and lack of common objectives, and recommended that DHS establish such agreements. DHS concurred and stated that it planned to develop memoranda of understanding to better facilitate its partnerships. Further, in November 2014, we reported on DHS’s processing of Freedom of Information Act requests. We found, among other things, that DHS lacked an important mechanism for effectively facilitating public interaction with the department on the handling of Freedom of Information Act requests because the department did not have an updated regulation reflecting changes in how it processes these requests. We recommended that DHS finalize and issue an updated DHS Freedom of Information Act regulation. DHS concurred and reported planned actions to implement this recommendation by April 2015. Strategically managing risks and assessing homeland security efforts. As we reported in September 2011 (GAO-11-881), DHS has applied risk management in key areas, including critical infrastructure security and resilience. Our recent work has further found that DHS offices and components have continued to engage in risk management activities. For example, in September 2014, we reported that during fiscal years 2011 to 2013, DHS offices and components conducted or required thousands of vulnerability assessments of critical infrastructure. These assessments can identify factors that render an asset or facility susceptible to threats and hazards. However, we also found that DHS is not well positioned to integrate relevant assessments to, among other things, support nationwide comparative risk assessments, because the assessment tools and methods used vary in length, detail, and areas assessed. In addition, our recent work has identified opportunities for components to better strategically manage risks in various programs.
For example, in September 2014, we reported that CBP had a $1 million budget for covert operations, including nuclear and radiological testing, covering fiscal years 2009 through 2013. We found that DHS had established a policy that requires that components with limited resources make risk-informed decisions, but that CBP testing did not inform capabilities across all border locations, and CBP had not conducted a risk assessment that could inform and prioritize the locations, materials, and technologies to be tested through covert operations. We recommended that—to help ensure that resources for covert operations provide reasonable assurance that efforts to detect and interdict nuclear and radiological material smuggled across the border are working as intended and appropriately targeted—DHS conduct or use a risk assessment to inform the department’s priorities for covert operations. DHS concurred and reported that it plans to implement this recommendation in July 2015. In September 2011, we reported that limited strategic and program planning, as well as limited assessment and evaluation to inform approaches and investment decisions, had contributed to DHS programs not meeting strategic needs or not doing so effectively and efficiently. Our recent work has indicated that strategic and program planning challenges continue to affect implementation of some DHS programs. For example, in September 2014, we reported on DHS headquarters consolidation efforts and their management by DHS and the General Services Administration (GSA). We found that DHS and GSA’s planning for the consolidation did not fully conform with leading capital decision-making practices intended to help agencies effectively plan and procure assets.
DHS and GSA officials reported that they had taken some initial actions that may facilitate consolidation planning in a manner consistent with leading practices, but consolidation plans, which were finalized between 2006 and 2009, had not been updated to reflect these changes. According to DHS and GSA officials, the funding gap between what was requested and what was received from fiscal years 2009 through 2014 was over $1.6 billion. According to these officials, this gap had escalated estimated costs by over $1 billion—from $3.3 billion to $4.5 billion—and delayed scheduled completion by over 10 years, from an original completion date of 2015 to the current estimate of 2026. However, DHS and GSA had not conducted a comprehensive assessment of current needs, identified capability gaps, or evaluated and prioritized alternatives to help them adapt consolidation plans to changing conditions and address funding issues as reflected in leading practices. We recommended that DHS and GSA work jointly to assess these needs. DHS and GSA concurred, and DHS reported in February 2015 that the agencies had drafted an enhanced consolidation plan. We will assess this plan when it and any additional supporting analyses are made available to us. We also recently found that DHS had taken preliminary steps to begin to understand the cyber risk to building and access control systems in federal facilities, but that significant work remained, such as developing a strategy to guide these efforts. In particular, in December 2014, we found that DHS lacked a strategy that (1) defines the problem, (2) identifies roles and responsibilities, (3) analyzes the resources needed, and (4) identifies a methodology for assessing cyber risk to building and access control systems in federal facilities. We concluded that the absence of a strategy that clearly defines the roles and responsibilities of key components within DHS had contributed to a lack of action within the department.
For example, we found that no one within DHS was assessing or addressing cyber risk to building and access control systems, particularly at the nearly 9,000 federal facilities protected by the Federal Protective Service as of October 2014. We recommended that DHS, in consultation with GSA, develop and implement a strategy to address cyber risk to building and access control systems. DHS concurred and identified steps it plans to take to develop a strategy by May 2015. Chairman Perry, Ranking Member Watson Coleman, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time.

GAO Contact and Staff Acknowledgments

For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Joseph P. Cruz (Assistant Director), Michael LaForge, Thomas Lombardi, Emily Kuhn, Taylor Matheson, Shannin O’Neill, and Katherine Trimble. Key contributors for the previous work that this testimony is based on are listed in each product.

Related GAO Products

DHS Training: Improved Documentation, Resource Tracking, and Performance Measurement Could Strengthen Efforts. GAO-14-688. Washington, D.C.: September 10, 2014.
Department of Homeland Security: Progress Made; Significant Work Remains in Addressing High-Risk Areas. GAO-14-532T. Washington, D.C.: May 7, 2014.
Homeland Security Acquisitions: DHS Could Better Manage Its Portfolio to Address Funding Gaps and Improve Communications with Congress. GAO-14-332. Washington, D.C.: April 17, 2014.
Department of Homeland Security: DHS’s Efforts to Improve Employee Morale and Fill Senior Leadership Vacancies. GAO-14-228T. Washington, D.C.: December 12, 2013.
DHS Financial Management: Continued Effort Needed to Address Internal Control and System Challenges. GAO-14-106T. Washington, D.C.: November 15, 2013.
Information Technology: Additional OMB and Agency Actions Are Needed to Achieve Portfolio Savings. GAO-14-65. Washington, D.C.: November 6, 2013.
DHS Financial Management: Additional Efforts Needed to Resolve Deficiencies in Internal Controls and Financial Management Systems. GAO-13-561. Washington, D.C.: September 30, 2013.
DHS Recruiting and Hiring: DHS Is Generally Filling Mission-Critical Positions, but Could Better Track Costs of Coordinated Recruiting Efforts. GAO-13-742. Washington, D.C.: September 17, 2013.
Information Technology: Additional Executive Review Sessions Needed to Address Troubled Projects. GAO-13-524. Washington, D.C.: June 13, 2013.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has regularly reported on government operations identified as high risk because of their increased vulnerability to fraud, waste, abuse, and mismanagement, or the need for transformation to address economy, efficiency, or effectiveness challenges. In 2003, GAO designated implementing and transforming DHS as high risk because DHS had to transform 22 agencies into one department, and failure to address associated risks could have serious consequences for U.S. national and economic security. While challenges remain for DHS across its range of missions, it has made considerable progress. As a result, in its 2013 high-risk update, GAO narrowed the scope of the high-risk area to focus on strengthening and integrating DHS management functions (human capital, acquisition, financial, and information technology). As requested, this statement discusses, among other things, DHS's progress and actions remaining in strengthening and integrating its management functions. This statement is based on GAO's 2015 high-risk update and reports and testimonies from September 2011 through February 2015. Among other things, GAO analyzed DHS strategies and interviewed DHS officials. As GAO reported in its 2015 high-risk update report earlier this month, the Department of Homeland Security's (DHS) efforts to strengthen and integrate its management functions have resulted in the department meeting two and partially meeting three of GAO's criteria for removal from the high-risk list (see table). In that table, “partially met” means that some but not all actions necessary to generally meet the criterion have been taken, and “not met” means that few, if any, actions toward meeting the criterion have been taken. For example, the department's Deputy Secretary, Under Secretary for Management, and other senior officials have continued to demonstrate leadership commitment by frequently meeting with GAO to discuss the department's plans and progress in addressing the high-risk area.
DHS has also established a plan for addressing the high-risk area, which it has updated seven times since 2010. However, DHS needs to show additional progress in other areas. For example, in October 2014, DHS identified that it had the resources needed to implement 7 of the 11 initiatives the department had under way to address the high-risk area, but did not identify sufficient resources for the 4 remaining initiatives. Key to addressing the department's management challenges is DHS demonstrating the ability to achieve sustained progress across 30 actions and outcomes that GAO identified and DHS agreed were needed to address the high-risk area. GAO found in its 2015 high-risk update report that DHS fully addressed 9 of these actions and outcomes, while work remains to fully address the remaining 21. Of the 9 actions and outcomes that DHS has fully addressed, 5 have been sustained as fully implemented for at least 2 years. For example, DHS fully met 1 outcome for the first time by obtaining a clean opinion on its financial statements for 2 consecutive years. DHS has also mostly addressed an additional 5 actions and outcomes, meaning that a small amount of work remains to fully address them. However, DHS has partially addressed 12 and initiated 4 of the remaining actions and outcomes. For example, DHS does not have modernized financial management systems, a fact that affects its ability to have ready access to reliable information for informed decision making. Some of these actions and outcomes, such as modernizing the department's financial management systems and improving employee morale, are significant undertakings that will likely require multiyear efforts. In GAO's 2015 high-risk update report, GAO concluded that in the coming years, DHS needs to continue to show measurable, sustainable progress in implementing its key management initiatives and achieving the remaining 21 actions and outcomes.
Background

GSA increased its use of brokers in the years prior to beginning the NBC. Before 1997, all leasing acquisition work was performed in-house. However, downsizing initiatives in the 1990s reduced GSA’s in-house capacity to acquire leases, and in 1997 GSA began to sign contracts with private broker firms to assist with its leasing portfolio. From 1997 to 2003, GSA had multiple, separate regional contracts for broker services, and GSA paid brokers a fee from appropriated funds in exchange for a variety of lease acquisition and other services. By 2003, approximately 20 percent of the leasing work was being performed by brokers under the regional contracts; however, the regional contracts were found to be administratively burdensome and inconsistent because of variations in their terms, conditions, and pricing structures. In 2003, GSA conducted a business analysis comparing the advantages, disadvantages, and costs of different types of contracting options at the national, zonal, and local levels. Based on this analysis, GSA concluded that national contracts through the NBC program represented the best option available. GSA awarded four contracts in October 2004, and contract performance began on April 1, 2005. GSA identified three expected sources of savings from the NBC program: reduced rental rates, reduced administrative expenses and fees, and reduced personnel expenses. GSA officials believed that rental rates would be reduced as a result of a broker’s expert knowledge of the commercial real estate market, which would assist in negotiating lower rental rates. In addition, brokers agreed to forgo a portion of the commission they received, in the form of a commission credit that reduces the government’s initial rental cost. GSA officials also stated that the national contract would reduce its administrative expenses because the previous regional contracts had differing terms, conditions, and pricing structures.
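The commission-credit mechanism described above can be sketched with a few lines of arithmetic. This is an illustrative sketch only: the rent figure, commission percentage, and credit share below are hypothetical and are not GSA data.

```python
# Illustrative sketch of a broker commission credit, as described in the
# report: the lessor pays the broker a commission, and the broker forgoes
# part of it as a credit against the government's initial rental cost.
# All numbers here are hypothetical.

def rent_after_credit(annual_rent: float, commission_pct: float,
                      credit_share: float) -> float:
    """First-year rent after the broker credits part of its commission."""
    commission = annual_rent * commission_pct   # paid by the lessor
    credit = commission * credit_share          # portion the broker forgoes
    return annual_rent - credit

# Hypothetical $1,000,000 annual rent, 4% commission, broker credits half:
print(rent_after_credit(1_000_000, 0.04, 0.5))  # → 980000.0
```

The sketch shows why GSA could describe the contracts as "no cost" even though commissions are ultimately factored by lessors into rental rates: the government makes no direct payment, and the credit lowers its initial rent.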
In 2007, we recommended, among other things, that GSA quantify savings associated with reduced fees and administrative expenses under the NBC. In response to our recommendation, GSA reported that it had identified cost savings as a result of moving from regional contracts to the NBC. Finally, GSA anticipated achieving further savings by hiring fewer realty specialists—the position responsible for handling leasing transactions—over time. GSA stated that, as a result of government downsizing, the number of in-house realty specialist staff had decreased from 811 in 1995 to 450 in 2003. GSA estimated that approximately 140 realty specialists were eligible to retire by the end of 2005, and that the agency would need to hire approximately 300 additional staff to handle the leasing workload. In our 2007 report, we found that, after implementing the NBC program, officials determined that there were additional tasks for realty specialist staff and no longer viewed the program as a way to avoid hiring additional staff. According to GSA officials, in October 2012 they employed approximately 476 realty specialists. The NBC program differed from the previous regional contracts in three major ways. First, the contracts are national contracts, which means that while the program is implemented and used at the regional level by GSA regional officials, all regions are subject to the same contract language and rules. Second, unlike the previous contracts, wherein brokers were paid for providing a variety of lease acquisition and other services to support GSA in acquiring leases, under the NBC program brokers are to perform the full lease acquisition, described in figure 1, as well as lease expansion or extension and market data collection. Third, under the NBC program, brokers are compensated through commissions paid by the lessor (the entity leasing space to GSA), and no payments are made directly by the government as was the case with the regional contracts.
Although commission payments are factored by lessors into rental rates, these contracts for the NBC program were referred to by GSA as “no cost” contracts. GSA’s stated goals for the program are to provide consistent, high-quality service nationwide to federal agencies that rely on GSA for lease acquisition services, as well as to leverage the expertise of private-sector brokers. GSA set two more specific goals that the agency uses to determine the effectiveness of the NBC program. First, GSA determined that brokers should be accountable for achieving rental rates in line with the overall GSA goal. For example, in 2012 the overall goal for the leasing portfolio was to have rental rates at 9.5 percent below the market average. Second, GSA set yearly goals for using brokers to acquire leased space. For example, in 2012 GSA set a goal of having 55 percent of expiring leases handled by brokers and 45 percent handled by GSA in-house.

Achievement of Goals Is Mixed, and Cost Savings from the NBC Program Are Unclear

Rental Rate Goals Met, but Unclear If Cost Savings Result from NBC Program

GSA officials report that they are meeting their overall rental rate goals for the entire lease portfolio. As mentioned previously, GSA sets a yearly goal that overall rental rates for space leased from the private sector fall a certain percentage below the average industry-market rental rate. These rental rate goals do not differentiate between broker-negotiated leases and those done in-house. In fiscal year 2012, the goal was for rental rates to be 9.5 percent below market; GSA reported that its overall portfolio, which includes leases negotiated by both brokers and GSA staff, was on average 11.45 percent below market. From 2006 to date, GSA has, to varying degrees, reported that it exceeded its targets for paying lower-than-average rental rates. See figure 2 below.
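The percent-below-market measure GSA reports against can be expressed as simple arithmetic. The sketch below is illustrative only; the negotiated and market rates shown are hypothetical figures chosen to reproduce a result in the range GSA reported, not actual lease data.

```python
# Illustrative sketch of GSA's percent-below-market metric: how far a
# negotiated rental rate falls below the market average, expressed as a
# percentage of the market rate. The rates below are hypothetical.

def percent_below_market(negotiated_rate: float, market_rate: float) -> float:
    """Percentage by which a negotiated rate falls below the market rate."""
    return (market_rate - negotiated_rate) / market_rate * 100

# A hypothetical lease at $26.565/sq ft against a $30.00/sq ft market
# average is 11.45 percent below market:
print(round(percent_below_market(26.565, 30.00), 2))  # → 11.45
```

A portfolio-level figure like the 11.45 percent GSA reported would be a weighted aggregate of such per-lease comparisons against the relevant market averages.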
GSA headquarters officials have used these data as an indication of the NBC program’s success, because they say the data show that the use of brokers has not driven up rental rates. The officials also noted that they consider this an indication that brokers are not driving up lease rates to obtain higher commissions. GSA is reportedly meeting its overall rental goals, but it is not clear that the NBC program results in rental rates lower than those negotiated by in-house staff. In 2012, GSA officials attempted to determine if brokers negotiated lower rental rates than in-house staff using data from fiscal years 2011 and 2012. GSA’s own analysis did not show that brokers were negotiating significantly lower rental rates; in fact, overall it showed that brokers were negotiating rental rates similar to those of in-house staff. Furthermore, GSA officials cautioned us that they found the available data insufficient to determine if brokers were negotiating lower rental rates than in-house staff. Officials stated that the market rate data they were using were not specific enough to provide the level of detail necessary to discern the potential difference between the rates. They said that the current market data do not take into account the unique circumstances of each individual lease and how those circumstances affect the rental rate. For example, a lease for a law enforcement agency, with higher security requirements and unique types of space (e.g., a shooting range), may have to be negotiated at the higher end of the market rate. However, the market rate data GSA has do not consider these requirements and therefore may understate the value of the negotiated rental rate. The converse is also true: the market rate data can overstate the value of a rental rate negotiated for an office space with few special requirements.
The officials said that this makes comparison of negotiated rental rates difficult, because even if both of the example leases were in the same market, one rate may be overvalued while the other is undervalued based on its unique requirements. This limitation notwithstanding, officials told us they still believe it is probable that brokers are negotiating lower rates. In April 2013, GSA began requiring regional officials to use different market rental-rate data, which they expect will allow them to better compare the rental rates achieved by brokers to market rates in the future. For all broker projects greater than 10,000 square feet, regional officials are to obtain an in-house research report, referred to as “Bullseye,” at the beginning of the leasing process. A Bullseye report is specifically tailored for each lease transaction and includes market information, analysis, and insight regarding the local submarket. The officials stated that the new reports would provide more specific data to gauge the effectiveness of the broker program by comparing negotiated rental rates to the average submarket rate at the time of lease award. According to GSA officials, this new reporting will improve their comparison of rental rates negotiated by brokers to market rental rates but will probably still not allow them to definitively determine whether brokers are negotiating better deals than in-house staff. Because this effort is relatively new, data were not available at the time of our review to conduct further analysis. GSA headquarters officials noted that the only ideal way to compare one lease to another is to acquire space for similar agencies in the same market at the same time. In previous attempts to perform such comparison analysis, GSA determined there were not enough data points with these characteristics to draw a meaningful conclusion.
GSA Recently Changed Goals for Use of Brokers

GSA initially set goals under NBC to have brokers conduct a high percentage of its leases, but beginning in fiscal year 2013 it began deemphasizing the importance of annual percentage goals for broker usage. GSA set yearly goals based on the number of leases in the portfolio it determined could be handled by brokers, and originally anticipated that 80 percent of the leasing workload would be assigned to brokers by fiscal year 2009. GSA lowered its broker usage goals for fiscal years 2011 and 2012 from 80 percent to 55 percent, and reported that in fiscal year 2012 it used brokers for 33 percent, or 220, of its expiring leases. GSA officials told us that they began lowering usage goals after 2010 because more leases needed to be handled in-house for training purposes. GSA officials told us they did not meet their broker usage goal for a variety of reasons. First, some leases involve sensitivities, such as internal conflict within an agency or space intended for a high-profile public official, that GSA prefers to handle in-house. Second, some work is purposefully kept in-house for training purposes and to maintain staff expertise. (This was also a reason the officials gave for why they began lowering usage goals in 2011.) Third, some GSA regional officials have not always found it advantageous to send leases to brokers because doing so can be time consuming. Specifically, officials from 6 of the 11 regions expressed concern that the NBC program is administratively burdensome and adds time to leasing procurement. Officials in one region stated that they essentially run “shadow procurements” for all NBC leases. That is, even though the broker is doing a significant amount of paperwork and finding lessors, the work completed by brokers must be closely monitored and reviewed to ensure that the process is in accordance with government standards.
In addition, GSA officials are required to conduct performance evaluations at multiple points throughout the process, which officials told us is more time consuming than the evaluation process for in-house personnel. In 2010, the GSA Office of the Inspector General (IG) reported that a number of steps unique to NBC projects added an average of 1.6 months to the total leasing process time for the projects in its sample. Representatives from one broker firm we spoke with told us that NBC leases can take longer because GSA personnel can be less responsive on a broker-negotiated lease than on leases they are personally negotiating. They noted that there are many approval points that the broker cannot pass without approval from a GSA official and that significant time can elapse before GSA officials review the paperwork. In fiscal year 2013, GSA deemphasized the importance of annual percentage goals for broker usage. Instead, GSA headquarters officials told us they reviewed each regional portfolio for fiscal year 2013 through fiscal year 2015 procurements and developed a list of leases that they determined should be sent to brokers. The list consists of leases that the officials view as having a “high to moderate value” and as having the greatest potential for brokers to negotiate lower rental rates than in-house personnel. GSA officials stated that they believe NBC provides the greatest value with large, complex leases in large metropolitan areas, where brokers have a strong commission incentive. According to GSA headquarters officials, these areas have the highest potential for cost savings if brokers are able to negotiate lower rental rates as a result of private-sector market knowledge and negotiation skills. The officials told us that, after they compiled the list, regional officials were given an opportunity to comment, resulting in some changes, and that the resulting list of leases is what GSA plans to have brokers handle.
For example, regional officials said that they removed some of the leases on the list because the leases had already been sent to a broker or begun in-house, or, as stated previously, because officials wanted to keep some larger projects in-house for training and maintaining in-house expertise. According to GSA headquarters officials, they plan to move forward with these new goals. They noted that the focus is now on managing the workload and not on meeting a specified utilization number. GSA headquarters officials told us that they believe they are now using the NBC program for its intended purpose, i.e., workload management and driving savings by using broker expertise in more difficult markets and working the more complex, high-to-moderate value leases.

Cost Savings from Using the NBC Program Are Unclear

The two goals GSA uses to evaluate the NBC program are not closely linked to the anticipated cost savings used to justify the program. GSA headquarters officials told us that when they attempted to determine if brokers were negotiating lower rates than in-house staff, they found that the data they had were insufficient for this purpose. The program has evolved over the years, and reduced rental rates from using brokers are the only expected savings that remain. However, GSA lacks data on whether using brokers results in cost savings. Furthermore, GSA officials stated that they do not believe the Bullseye data will necessarily allow for comparison of broker- and in-house-negotiated rental rates. Without data on the cost benefits of using brokers relative to using in-house staff, the value of the NBC program will continue to be unclear. Federal internal control standards state that a key factor in helping agencies better achieve their missions and desired program results is the use of appropriate internal controls.
Among other things, agencies need to compare actual performance to planned or expected results, monitor performance measures, and collect operational and financial data to ensure that they are meeting their goals for effective and efficient use of resources. This is particularly important because, as programs change (as the NBC program has) and agencies strive to improve operational processes, agencies are better equipped to assess results when they continually evaluate whether the control activities being used are effective and update them when necessary. Without adequate data, GSA cannot assess the effectiveness of activities conducted under the NBC program or know if adjustments to the program are needed. GSA Has Made Changes to the NBC Program, and Stakeholders Have Suggested Additional Actions That Can Improve the Program GSA’s Changes to the NBC Program Implementation According to GSA officials, they have made changes to the NBC program since the first contracts were signed in 2004. Based on experiences with the first contracts (NBC1), GSA made changes to the current contracts (NBC2) in three key areas, as described below. Implementation of Commission Cap: According to GSA officials, NBC2 brokers were required to submit caps on the amount of commission they would retain for any given transaction. Under NBC1, brokers were required to submit the percentage of any commission they received that they would give back to GSA in the form of a rent credit. In our 2007 report, we found that allowing brokers to represent the government while negotiating their commissions creates an inherent conflict of interest between the brokers’ interest in promoting and negotiating higher commissions and their responsibility to effectively represent GSA’s interest in selecting properties that best meet the government’s needs, including its cost needs. 
Under NBC2, GSA believes this issue has been addressed because the brokers have committed to a cap on the amount of commission they retain. Using commissions paid by building owners to compensate brokers is typical in the commercial real estate industry. However, brokers could still have an incentive to seek government approval of a higher rent in order to increase the overall amount of their commission. Changes in Broker Performance Evaluations: According to GSA officials, GSA made two major changes to broker performance evaluations for NBC2. First, under NBC2, GSA reduced the number of mandatory broker performance evaluations from seven to four for each lease. In addition to receiving a final overall evaluation, brokers are now evaluated at the market-survey, lease-award, and post-award-services steps. GSA officials stated these changes were made to streamline the process and decrease the administrative burden on the in-house staff overseeing the brokers. Second, under NBC2, GSA awards tasks to brokers based on price and performance data. For NBC1, task orders were based on equitable distribution—i.e., brokers received equitable square-footage distributions in each of GSA’s 11 regions. Under both contracts, brokers are evaluated and scored on a scale from 1 to 5 on document quality, timeliness, cost control, business relations, and personnel technical quality. These evaluations are used to derive a score for each task that is compiled into a report and sent to GSA headquarters officials on a quarterly basis. GSA officials use these scores to determine a national-level rating for each broker. For NBC2, GSA regional officials determine which factors (document quality, timeliness, cost control, business relations, and personnel technical quality) are the most important for a lease and then use the national-level performance rating, along with cost, to determine which broker will be awarded a task order. 
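The award logic described above can be sketched in code. This is an illustrative sketch only: the report names the five factors and the 1-to-5 scale, but the exact aggregation formula GSA uses is not specified, so the simple averaging and cost-weighting shown here, and all broker names and numbers, are assumptions for illustration.

```python
# Hypothetical sketch of NBC2 task-order award: average each broker's
# 1-5 factor scores across regional task evaluations into a national
# rating, then weigh the region's priority factor against cost.
FACTORS = ["document_quality", "timeliness", "cost_control",
           "business_relations", "personnel_technical_quality"]

def national_rating(task_scores):
    """Average each factor score across all of a broker's task evaluations."""
    return {f: sum(t[f] for t in task_scores) / len(task_scores)
            for f in FACTORS}

def pick_broker(brokers, key_factor, costs):
    """Pick the broker with the best key-factor rating per unit of cost."""
    return max(brokers,
               key=lambda b: national_rating(brokers[b])[key_factor] / costs[b])

# Two hypothetical brokers with quarterly task evaluations (scale 1-5).
brokers = {
    "Broker A": [dict(zip(FACTORS, [4, 5, 3, 4, 4])),
                 dict(zip(FACTORS, [5, 4, 3, 5, 4]))],
    "Broker B": [dict(zip(FACTORS, [3, 3, 5, 3, 3]))],
}
costs = {"Broker A": 1.0, "Broker B": 0.9}  # relative price, illustrative

print(pick_broker(brokers, "timeliness", costs))  # region prioritizes timeliness
```

Because the rating averages across all regions, a broker's strong national score can mask weak performance in the awarding region, which is the concern regional officials raised.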
Earlier Broker Involvement: According to GSA officials, under NBC2 GSA included the option of bringing brokers into the process earlier, at the requirements development process. “Requirements development” is the process by which GSA works with an agency to obtain the specific requirements for a lease—such as the amount of space needed and the preferred location—at the beginning of the leasing process. Under NBC1, the contract allowed for the broker to assist with requirements development, but GSA policy was to have in-house staff assist the agency. GSA officials stated that brokers could add value by being involved in requirements development because they can bring in market data information and other expertise, which can better inform GSA on how much space might be needed and options for where it can be located. GSA officials told us that they are considering additional changes as they plan for the third generation of contracts. According to the officials, they are reconsidering the structure and administration of the NBC program with the goal of improving its effectiveness. At the time of our review, we were told GSA was assessing potential changes using a range of tools, including those below: Collecting lessons learned, such as using brokers for new leases, which may provide better value, and working projects with short timelines in-house to allow for greater control over project timelines. Incorporating new approaches and strategies, such as examining alternatives to the current commission and rent structure and developing new methods for evaluating broker performance. Creating and gathering input via a dedicated members only discussion group for real estate stakeholders on the LinkedIn networking website. Exploring the feasibility of conducting a pilot program prior to full contract launch. Using the previously mentioned contractor review of the NBC program’s effectiveness and changes the officials and industry stakeholders thought would improve the program. 
GSA plans to issue a request for proposals in 2014 and to award new contracts in fiscal year 2015. All Brokers and Several GSA Regions Report That Changes to Performance Evaluation Are Needed Officials in 7 of the 11 GSA regions expressed dissatisfaction with the current broker performance-evaluation system. As mentioned previously, past performance is one of the factors regional officials use to select a broker. However, GSA regional officials noted some challenges associated with the process for evaluating broker performance. For example, GSA regional officials told us that under the current system, the same level and quality of work can receive very different ratings across regions. The officials noted that the ratings are largely at the discretion of regional officials and that the standards for scores are not necessarily applied uniformly. Regional officials suggested that additional training or guidance for officials who do the ratings could help address these problems. GSA headquarters officials indicated that, in addition to training, the changes to the internal structure used to support the contract that are being considered under NBC3 will help address these concerns. However, officials in four regions told us the broker performance-evaluation process worked well and allowed them to accurately represent each broker’s performance. GSA headquarters officials said they are looking at ways to make the performance-evaluation process more consistent. They noted that in a performance-based contract, they are required to evaluate the contractors’ performance and need regional support to ensure this practice is enforced. According to the headquarters officials, in some instances, milestones are evaluated months or up to a year after the completion date, and evaluations deemed “untimely” are discarded. 
Officials emphasized that in order to accurately reflect a contractor’s performance, the evaluations must be completed once all tasks associated with that milestone are completed. Representatives from all four national broker firms participating in the NBC program expressed dissatisfaction or concern with the broker performance-evaluation system. Representatives from one broker firm told us they believed the evaluations are subjective because the standards for numerical performance evaluation scores are ambiguous. For example, the same work could receive a rating of 3 in one region and a rating of 5 in another region, presenting a challenge when attempting to determine the better value to the government. Brokers from another firm said there needs to be greater consistency between regions on how they conduct broker evaluations. All four brokers also suggested that providing more detailed guidance for GSA officials who do the rating would make the ratings less subjective and more consistent across regions. GSA regional officials and brokers also suggested using region-specific broker performance-evaluation data to award tasks. Officials from 10 of 11 GSA regions told us that this change would help them award the task order to the broker that performed the best in the region. As officials in one region explained, the brokers’ national scores—which combine the scores from all regions—are not helpful, because these scores do not always reflect the performance of a specific broker that works well in a region. The officials in this region told us that, with 11 regions, the combined national score loses effectiveness, because brokers have different strengths and weaknesses in various markets. 
For illustrative purposes, in figure 3 we present actual data from a 2012 quarterly performance report to show how using national scores may result in assigning a task to a broker firm that does not perform as well in a specific region as its overall national performance score suggests. In this region, officials conducted two evaluations and rated this broker firm as “marginal” in every category, while the overall national scores for the same broker firm are primarily “very good” and “exceptional,” except for cost control. In the example shown in figure 3, if officials in this region identified “Timeliness” as the most important factor when awarding a task order, under the current contract they would have to use the national scores to determine the broker with the best performance in timeliness. Thus, the task order could be awarded to this broker, whose past performance was rated as marginal by regional officials. According to GSA headquarters officials, they are considering using regional past performance for task-order selection. They noted that they would make this decision after they are confident that GSA staff are adequately trained and have a clear understanding of the contract requirements. Additionally, they said that they place a priority on making sure the contract is administered fairly and properly before they begin using regional past performance. Of the 10 regions where officials suggested using region-specific broker performance-evaluation data to award tasks, officials in 3 regions suggested developing a hybrid approach, wherein they could have the option of combining national scores with region-specific scores and using the combined score to choose a broker. Officials in one region told us this would allow for assignments to be made based on work completed in that region, which would help the officials make better decisions. 
Similarly, officials in another region told us they would prefer to use a combination of both the regional and national-level scores because each region has different needs, and brokers have different strengths and weaknesses. GSA headquarters officials told us they are considering using regional performance-evaluation scores but want to determine if the performance-evaluation processes are working properly before they make a change. These officials also stated that there is concern that if regions with less attractive portfolios (fewer and less valuable leases) are not factored into a national score used to determine assignments, brokers may have less incentive to perform well in smaller markets. Representatives from two broker firms suggested using region-specific broker performance-evaluation data to award tasks. For example, one broker told us that distributing work using the national average performance-evaluation scores overlooks the strong working relationships that some regions may have developed, which are not captured in a national score. They also noted that the way work is assigned does not seem to take into account “logical follow-on” or unique experience. All Brokers and Some GSA Regions Report Challenges with the Commission Structure GSA officials in three regions expressed concerns about the current commission structure and made suggestions for improvement. These regional officials were generally concerned that the commission-based system did not guarantee that brokers had an incentive to negotiate a lower rental rate for the government. Officials in one region told us that under the current system, brokers benefit from higher rental rates because their commission is a percentage of the total rental rate. They suggested that if commission rates were a specified amount instead of a variable percentage, GSA would be better assured that the broker was not trying to get a larger commission than is appropriate in that market. 
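The incentive concern the regional officials describe is simple arithmetic, sketched below. The 4 percent commission rate, the flat fee, and the rent figures are hypothetical values chosen for illustration, not figures from the NBC contracts.

```python
# Illustrative only: under a percentage commission, the broker's fee grows
# with the negotiated rent; under a flat fee, it does not.

def pct_commission(annual_rent, rate=0.04):
    """Commission as a percentage of rent (rate is a hypothetical 4%)."""
    return annual_rent * rate

def flat_commission(annual_rent, fee=40_000):
    """A specified flat amount; the rent argument deliberately has no effect."""
    return fee

low, high = 1_000_000, 1_100_000  # two possible negotiated annual rents

# Extra fee the broker earns by settling on the higher rent under each scheme.
print(pct_commission(high) - pct_commission(low))
print(flat_commission(high) - flat_commission(low))
```

Under the percentage scheme the broker earns more when the government pays more rent, which is the conflict the officials flagged; a flat fee removes that particular incentive, though it raises its own question of how to set the amount for each market.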
GSA headquarters officials told us that they are reexamining the commissions and are actively looking at alternatives to the current commission and rent structure. All four brokers participating in the NBC program told us that the change to the commission structure GSA implemented in 2011 has decreased commissions and could potentially reduce the quality and quantity of brokers’ staff. In 2011, GSA modified the contracts to incorporate changes to the Federal Acquisition Regulation (FAR) made as a result of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 (Duncan Hunter Act). As explained by GSA, these changes require that every task order or group of task orders estimated to yield a commission in excess of $150,000 be sent to all brokers, who then have an opportunity to be awarded the task order. Brokers were unanimous in their concern that this requirement, which they call “re-bidding” or “re-competing,” would ultimately result in lower value for the government. They told us that their commissions or initial contract pricing were based on the assumption that the profitability of the larger leases (including those with commissions in excess of $150,000) would help offset the potential losses on smaller leases, which they say are generally not profitable to the brokers. The brokers also told us that although this may result in short-term savings for the government by encouraging broker bidding and reducing commissions, ultimately it is not financially sustainable for them. Two brokers provided examples in which decreased revenue negatively affected staffing. For example, one broker told us these requirements were forcing commissions lower and that they would eventually have to compensate by laying off staff or hiring staff with less experience; another broker told us that decreased revenues resulted in a loss of employees and difficulty replacing them. 
Representatives from 3 of the 4 broker firms stated that the changes to the FAR did not require contract modifications. These brokers stated that in their opinion, the changes to the FAR do not apply to the NBC program and recommended that changes be made to reflect this. Most GSA Regions Suggested More Flexibility in Using Brokers Officials in the majority of regional offices (8 of the 11) suggested that greater flexibility in applying the NBC program would allow them to make better use of the brokers. Currently, when a broker is awarded a task order, as we described previously, it is a full-service lease acquisition, which means the broker handles the majority of the work for a task. Regional officials described a preferred “menu of services” that would allow them to use the brokers more selectively for the specific tasks where GSA wants assistance, as opposed to working with the broker throughout the entire lease procurement. For example, regional officials told us this would allow officials to involve brokers at crucial points—such as by conducting market analysis—and then allow GSA to handle the other parts of the procurement process in-house. The officials said that this would benefit GSA because it would allow GSA to harness the brokers’ market expertise, which, in their opinion, is how brokers add the most value. According to GSA headquarters officials, they are reviewing the contract on an ongoing basis and making changes that will offer greater flexibility. They said that they have enhanced requirements development, allowing the broker to perform most of the duties associated with the requirements development process, and are removing unnecessary requirements that delay task-order issuance. 
Brokers Suggested Earlier Involvement and Increased Access The current NBC program allows for brokers to have only a limited role in working with the agency requesting space during the earliest part of a leasing procurement—the requirements development phase—but according to both GSA regional officials and brokers, early broker involvement seldom occurs. In some cases, it does not occur because a region has a group dedicated to the requirements development phase, so the region uses in-house expertise. However, all four brokers suggested that earlier involvement in the procurement process would improve the program’s effectiveness by allowing the brokers to assist in determining space needs or identifying other means of cost savings. Brokers from one firm noted that they have considerable experience in reducing costs for private-sector clients by helping them plan and implement space-saving arrangements and that they could provide added benefit to GSA if they were involved earlier in the process. Brokers from another firm said that GSA and the agency requesting space will sometimes develop requirements that will result in higher costs, but that changes could result in significant savings. They said that they would like to start with a blank slate and help determine how to save money for multiple sets of circumstances, including broader space planning for a single agency, planning across agencies, multiple leases in the same area, or negotiating multiple leases at the same time in the same building. In addition to earlier involvement in the leasing process, all four brokers participating in the NBC program suggested that allowing them increased access to agencies requesting space would expedite the process. Brokers indicated that the procurement process generally slows down when they have limited direct interaction with tenant agencies because the extra layer of communication can cause miscommunications. 
This situation can result in the broker’s trying to secure a lease with bad information, which can delay the process until the broker determines exactly what the agency wanted. Two brokers said that if they had more access to the agencies, the process would move more quickly and they would have better information on which to base decisions. Conclusion GSA’s goals and metrics for evaluating the NBC program have not been linked to the cost savings in rental rates GSA anticipated when proposing the program. As a result, GSA does not have a means of evaluating and reporting on this aspect of the program, and the value of the NBC program in terms of cost savings continues to be unclear. While GSA has taken steps to better assess how its rental rates compare to market rates, these changes will not necessarily be sufficient for determining the overall costs and benefits of the NBC program. Accordingly, GSA will continue to lack the data needed to assess whether the use of brokers results in the expected cost savings through overall rental-rate reduction. Clarifying its goals for the program and linking them to cost savings would also serve as a way to be transparent to the Congress and other stakeholders about the purpose of the program and how GSA plans to monitor and achieve the program’s goals. Such transparency is especially important given that GSA is planning to seek a third generation of broker contracts in fiscal year 2015. Recommendation for Executive Action To promote transparency and fully reflect the expectations GSA used to justify the NBC program, we recommend that the Administrator of GSA ensure the program’s goals are linked to cost savings achieved through the use of NBC brokers and develop and implement a means of evaluating and reporting results. Agency Comments We provided a draft of this report for review and comment to GSA. GSA concurred with the report’s recommendation and provided technical clarifications, which we incorporated as appropriate. 
GSA’s comments are discussed in more detail below. GSA’s letter is reprinted in appendix II. GSA stated that it will take action to implement the recommendation as well as address the challenges and suggested improvements noted in the report. With respect to the use of “shadow procurements” noted in the report, GSA stated that this practice was an extraordinary and unnecessary scrutiny of broker work that was limited to one GSA region and is not reflective of the NBC program in its totality. This point raised by GSA is reflected in the report, and the report does not imply that this practice was indicative of the program as a whole. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of the Office of Management and Budget and the Administrator of General Services. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to examine (1) the General Services Administration’s (GSA) National Broker Contract (NBC) program goals and the extent to which they are being realized and (2) the changes GSA has made to the program and what challenges and suggestions for improvement, if any, key stakeholders identified. To identify the goals of the NBC program, we reviewed GSA’s 2003 analysis used to recommend the program as well as the subsequent goals and policies used to implement the program. From these sources and from interviews with GSA officials, we identified expected savings and goals for the NBC program. 
The expected savings of the program include reduced rental rates for GSA leases of private-sector space and lower administrative expenses as compared to the previous, mainly regionally based, contracts with private brokers. The goals that we identified for the program are for brokers to negotiate rental rates in line with overall GSA lease portfolio goals and a yearly goal for using the broker program. We also conducted interviews with GSA headquarters officials managing the NBC program concerning the expected savings and goals of the program. To examine the extent to which benefits and goals have been realized, we reviewed available GSA data and data analysis, GSA internal reports, GSA Office of Inspector General reports, and prior GAO work. We reviewed GSA broker data from the first year of the contract in 2005 through 2013. We analyzed GSA’s in-house and broker-negotiated rental rates and discussed with GSA headquarters officials the methodology used and the limitations of their analysis and findings. In consultation with an economist and a social science analyst at GAO, we determined that performing our own analysis would be limited by the same factors identified by GSA. We reviewed internal and external reports on the NBC program, which provided information about program outcomes. In addition, we obtained data from GSA on the NBC program, including information on commissions and credits and on use of brokers. To determine the reliability of these data, we reviewed relevant documentation, including internally published reports, and we interviewed agency officials about their processes for reviewing the data and ensuring their accuracy. We found the broker-related leasing data generally reliable for the purposes of this report. We interviewed officials from all 11 GSA regions and representatives from all four broker firms currently on contract about the use and perceived benefits of the program. 
We asked officials to identify challenges and suggestions that could help improve the program. Because each stakeholder raised issues independently, we could not always determine whether these challenges or suggestions were applicable in other regions or with other broker firms unless other officials brought them to our attention. To identify stakeholder suggestions for improving the NBC program, we conducted interviews with the major stakeholders of the program, including GSA headquarters officials, officials from all 11 GSA regional offices, and representatives from all four broker firms. We asked each of the stakeholders about the changes between the first and second broker contracts, whether they felt that benefits had resulted from these changes, and what further improvements, if any, they would suggest for the program. We reviewed modifications GSA made to the contract in response to the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 and to the Federal Acquisition Regulation. We compared the first and second signed national broker-contract documentation and analyzed the reasons for the changes between the two contracts. We reviewed the project plan for reviewing the NBC program and spoke with officials concerning their plans for making changes for the next generation of contracts. We also identified federal guidance on standards for internal control—the plans, methods, and procedures used to meet missions, goals, and objectives that are key to helping agencies better achieve their missions and desired program results—and assessed GSA’s efforts to set goals and objectives to achieve the desired results of the NBC program.
We conducted this performance audit from September 2012 to October 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from General Services Administration Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following individuals made important contributions to this report: David Sausville (Assistant Director), Kenneth Bombara, Lorraine Ettaro, Bert Japikse, Aaron Kaminsky, Josh Ormond, Amy Rosewarne, and Jade Winfree.
For fiscal year 2013, GSA expected to lease approximately 197 million square feet at a cost of about $5.2 billion. Since 2005, GSA has acquired some leased space using its NBC program to enlist commercial real estate brokerage firms to negotiate with building owners on behalf of the government. In 2012, GSA relied on this contract to replace approximately 33 percent of its expiring leases. GAO was asked to review GSA's NBC program. This report examines (1) NBC's program goals and the extent to which they are being realized and (2) the changes GSA has made to the program and what challenges and suggestions for improvement, if any, key stakeholders identified. GAO reviewed NBC contract documentation, agency documents, relevant legislation and regulations, and available data on leasing tasks assigned to brokers. GAO interviewed officials from GSA headquarters, all 11 GSA regions, and representatives of all 4 brokers currently participating in the program. While the General Services Administration (GSA) has used the National Broker Contract (NBC) program to assist with the agency's lease portfolio, it is unclear whether the program has resulted in the rental rate cost savings that GSA anticipated when proposing the program. GSA officials have stated that brokers should be able to obtain lower rental rates than in-house staff because brokers have greater market expertise and in addition are able to credit a portion of the broker's commission to the rental rate. In 2012, when GSA attempted to compare rental rates negotiated by brokers with those negotiated by in-house staff, the agency not only found little difference between the two, but also stated that the data were insufficient to conduct a meaningful comparison. In April 2013, GSA began requiring the use of a different market rent data report—"Bullseye"—which includes market information, analysis, and insight regarding the local submarket. 
Officials said this new data would improve their ability to compare rental rates negotiated by brokers to market rental rates, but will likely not allow officials to determine whether brokers are negotiating better deals than in-house staff. Beginning in fiscal year 2013, GSA also deemphasized previous annual goals for broker use and began identifying and assigning leases to brokers based on where agency officials believe the brokers provide the greatest value. GSA's goals and metrics for evaluating the NBC program have not been linked to the anticipated cost savings in rental rates. As a result, GSA has no means of evaluating and reporting on this aspect of the program, and the value of the NBC program in terms of cost savings continues to be unclear. Clarifying goals and linking them to cost savings would also serve as a way to be transparent with Congress and other stakeholders about the program's purpose and how GSA plans to monitor and achieve its objectives. Clear, linked goals are especially important given that GSA is planning to seek a third generation of broker contracts in 2015. Officials from all 11 GSA regional offices and representatives from all 4 broker firms identified challenges and made suggestions to improve the program. Two frequently cited areas are the broker evaluation process and greater flexibility in using brokers. Broker evaluation process: Both GSA regional officials and brokers expressed dissatisfaction with the current broker performance-evaluation system, and brokers suggested additional guidance and training would help. Both groups also suggested that GSA officials be allowed to use region-specific performance-evaluation data to award tasks. Greater flexibility using brokers: Regional officials suggested that greater flexibility in applying the NBC program would allow them to make better use of brokers. 
Regional officials described a preferred "menu of services" that would allow them to use the brokers more selectively for the specific tasks where GSA wants assistance as opposed to working with the broker throughout the entire lease procurement. GSA has begun identifying potential changes to the structure and administration of the NBC program as the agency develops a strategy for the third generation of NBCs. For example, officials told GAO they are examining alternatives to the current commission and rent structure.
Background Regulation and Structure of Banking Organizations The regulatory system for banks in the United States is known as the “dual banking system” because banks can be either federally or state-chartered. As of September 30, 2005, there were 1,846 federally chartered banks and 5,695 state-chartered banks. National banks are federally chartered under the National Bank Act. The act sets forth the types of activities permissible for national banks and, together with other federal law, provides OCC with supervisory and enforcement authority over those institutions. State banks receive their powers from their chartering states, subject to activities restrictions and other restrictions and requirements imposed by federal law. State banks are chartered and supervised by the individual states but also have a primary federal regulator (see fig. 1). FRB is the primary federal regulator of state banks that are members of the Federal Reserve System. FDIC is the primary federal regulator of state banks that are not members of the Federal Reserve System. OCC and state regulators collect assessments and other fees from banks to cover the costs of supervising these entities. OCC does not receive congressional appropriations. Banks chartered in the United States can exist independently or as part of a bank holding company. OCC, which administers the National Bank Act, permits national banks to conduct their activities through operating subsidiaries, which typically are state-chartered businesses. OCC has concluded that a national bank’s use of an operating subsidiary is a power permitted by the National Bank Act and that national banks’ exercise of their powers through operating subsidiaries is subject to the same laws that apply to the national banks directly. Because OCC supervises national banks, the agency also supervises national bank operating subsidiaries. Further, many federally and state-chartered banks exist as parts of bank holding companies. 
Bank holding companies may also include nonbank financial companies, such as finance and mortgage companies that are subsidiaries of the holding companies. These holding company subsidiaries are referred to as affiliates of the banks because of their common ownership or control by the holding company. Unlike national bank operating subsidiaries, nonbank subsidiaries of bank holding companies often are subject to regulation by states and their activities may be subject to federal supervision as well. OCC’s Mission and Regulatory Responsibilities OCC’s mission focuses on the chartering and oversight of national banks to assure their safety and soundness and on fair access to financial services and fair treatment of bank customers. OCC groups its regulatory responsibilities into three program areas: chartering, regulation, and supervision. Chartering activities include not only review and approval of charters but also review and approval of mergers, acquisitions, and reorganizations. Regulatory activities result in the establishment of regulations, policies, operating guidance, interpretations, and examination policies and handbooks. OCC’s supervisory activities encompass bank examinations and enforcement activities, dispute resolution, ongoing monitoring of banks, and analysis of systemic risk and market trends. As of March 2005, the assets of the banks that OCC supervises accounted for approximately 67 percent—about $5.8 trillion—of assets in the nation’s banks. Among the banks OCC supervises are 14 of the top 20 banks in asset size. OCC also supervises federal branches and agencies of foreign banks. As the supervisor of national banks, OCC has regulatory and enforcement authority to protect national bank consumers. In addition to exercising its supervisory responsibilities under the National Bank Act, which include consumer protection, OCC enforces other consumer protection laws. 
These include the Federal Trade Commission Act (FTC Act), which prohibits unfair and deceptive practices, and the Home Ownership and Equity Protection Act, which addresses predatory practices in residential mortgage lending. With respect to real estate lending, other consumer protection laws to which national banks and their operating subsidiaries are subject include, but are not limited to, the Truth in Lending Act, the Home Mortgage Disclosure Act, the Fair Housing Act, and the Equal Credit Opportunity Act. One of OCC’s strategic goals is to ensure that all customers of national banks have equal access to financial services and are treated fairly. The agency’s strategic plan lists objectives and strategies to achieve this goal, including fostering fair treatment through OCC guidance and supervisory enforcement actions, where appropriate, and providing an avenue for customers of national banks to resolve complaints. The main division within OCC tasked with handling consumer complaints is the Customer Assistance Group (CAG); its mission is to ensure that bank customers receive fair treatment in resolving their complaints with national banks. In our recent report on OCC consumer assistance efforts, we found that, in addition to resolving individual complaints, OCC uses consumer complaint data collected by CAG (1) to assess risks and identify potential safety, soundness, or compliance issues at banks; (2) to provide feedback to banks on complaint trends; and (3) to inform policy guidance for the banks it supervises. OCC’s bank examiners use consumer complaint information to focus examinations they are planning or to alter examinations in progress. OCC and Preemption Preemption of state law is rooted in the U.S. 
Constitution’s Supremacy Clause, which provides that federal law is the “supreme law of the land.” Because both the federal and state governments have roles in supervising financial institutions, questions can arise about whether federal law applicable to a depository institution preempts the application of a state’s law to the institution. Before promulgating the preemption rules in January 2004, OCC primarily addressed preemption issues through opinion letters issued in response to specific inquiries from banks or states. According to OCC, the preemption rules “codified” judicial decisions and OCC opinions on preemption of particular state laws by making those determinations generally applicable to state laws and clarifying certain related issues. However, the preemption rules were controversial. In commenting on the proposed rules, some opponents questioned whether OCC, in issuing the bank activities rule, interpreted the National Bank Act too broadly, particularly with respect to the act’s effect on the applicability of state law to national bank operating subsidiaries. Others opposed the rules because of what they viewed as potentially adverse effects on consumer protection and the dual banking system. For example, consumer groups and state legislators feared that the preemption of state law, particularly with respect to predatory lending practices, would weaken consumer protections. In comments on the proposed visitorial powers rule, most opponents questioned OCC’s assertion of exclusive visitorial authority with respect to national bank operating subsidiaries. Opponents expressed concern that the visitorial powers rule eliminates states’ ability to oversee and take enforcement actions against national bank operating subsidiaries, even though those entities may be state-licensed businesses. 
Supporters of the proposed bank activities rule, a group consisting largely of national banks, asserted that subjecting national banks to uniform regulation, rather than differing state regulatory regimes, was necessary to ensure efficient nationwide operation of national banks. According to these commenters, national banks operating under varied state laws would face increased costs, compliance burdens, and exposure to litigation based on differing, and sometimes conflicting, state laws. In comments on the proposed visitorial powers rule, most proponents suggested that OCC make technical clarifications to the rule, specifically related to the exclusivity of OCC’s visitorial powers with respect to national banks’ operating subsidiaries. OCC Described Types of State Laws That Would Be Preempted, but Questions Remain Regarding the Rules’ Scope and Effect In the bank activities rule, OCC attempted to clarify the types of state laws that would be preempted by relating them to certain categories, or subjects, of activity conducted by national banks and their operating subsidiaries. Specifically, OCC (1) listed subjects of national bank activity—for example, checking accounts, lending disclosure, and mortgage origination and mortgage-related activities such as processing, servicing, purchasing, and selling—to which state laws do not apply; (2) listed subjects to which state laws generally apply; and (3) described the federal standard for preemption under the National Bank Act that it would apply with respect to state laws that do not relate to the listed subjects. Although OCC’s purpose in proposing the regulations was “to add provisions clarifying the applicability of state law to national banks,” we found that grounds for uncertainty remain regarding the applicability of state consumer protection laws to national banks, particularly state statutes that generally prohibit unfair and deceptive practices by businesses. 
In addition, because they disagree with OCC’s legal analysis underlying the rules, some state officials we interviewed said they are unsure of how to proceed with legal measures, such as proposing, enacting, or enforcing laws or issuing and enforcing regulations that could relate to activities conducted by national banks and their operating subsidiaries. OCC Sought to Clarify the Applicability of State Laws to National Banks and Their Operating Subsidiaries by Interpreting Preemption and Visitorial Powers under the National Bank Act In the bank activities rulemaking, OCC amended or added rules in parts of its regulations applicable to four categories of national bank activity authorized by the National Bank Act: (1) real estate lending; (2) non-real estate lending; (3) deposit-taking; and (4) the general business of banking, which includes activities OCC determines to be incidental to the business of banking. For each of the first three categories, OCC listed subjects that it concluded are not subject to state law because state laws concerning those subjects already had been preempted under OCC interpretations of the National Bank Act, or by judicial decisions, or were found to be preempted by OTS for federal thrifts. For state laws relating to any of the three categories but not to subjects specified in the lists, OCC announced that it would apply the test for federal preemption established by Supreme Court precedents. According to OCC, that test calls for a determination of whether a state law “obstructs, impairs, or conditions” a national bank’s ability to perform a federally authorized activity. For the fourth category—a “catch all” provision for state laws that do not specifically relate to any of the other three categories—the rule states that OCC will apply its articulation of the test for preemption under the National Bank Act. Finally, for each of the four categories of banking activity, the rule lists subjects to which state laws generally apply. 
These include torts, contracts, the rights to collect debts, taxation, and zoning. The rules also provide that a state law applies to a national bank if OCC determines that the law has only “an incidental effect” on the bank’s activity or “is otherwise consistent with” powers authorized under the National Bank Act. Federal law and OCC regulations vest OCC with exclusive “visitorial” powers over national banks and their operating subsidiaries. (Citation omitted) Those powers include examining national banks, inspecting their books and records, regulating and supervising their activities pursuant to federal banking law, and enforcing compliance with federal or any applicable state law concerning those activities. (Citation omitted) Federal law thus limits the extent to which any other governmental entity may exercise visitorial powers over national banks and their operating subsidiaries. In the visitorial powers rulemaking, OCC sought to clarify the extent of its supervisory authority. The agency amended its rule setting forth OCC’s visitorial powers so that the rule: (1) expressly states that OCC has exclusive visitorial authority with respect to the content and conduct of activities authorized for national banks under federal law, unless otherwise provided by federal law; (2) recognizes the jurisdiction of functional regulators under the Gramm-Leach-Bliley Act (GLBA); and (3) clarifies OCC’s interpretation of the statute establishing its visitorial powers, 12 U.S.C. § 484. That provision makes national banks subject to the visitorial powers vested in courts of justice, such as a state court’s authority to issue orders or writs compelling the production of information or witnesses, but according to OCC, does not authorize states or other governmental entities to exercise visitorial powers over national banks. 
Questions Remain Concerning the Applicability of State Consumer Protection Laws Although OCC issued the bank activities rule to clarify the applicability of state laws to national banks and their operating subsidiaries, many of the state officials, consumer groups, and law professionals we interviewed said that the preemption rules did not resolve questions about the applicability of certain types of state law to national banks and their operating subsidiaries. One set of concerns, discussed in appendix II of this report, reflects differences about how the rules and OCC’s authority under the National Bank Act should be interpreted. In OCC’s view, the rules resolved many uncertainties that had previously existed but did not resolve all issues about the extent of preemption. A second set of concerns, regarding uncertainty over the applicability of state consumer protection laws, particularly those prohibiting unfair and deceptive acts and practices (UDAP), exists at least in part because of OCC’s statements that under the preemption rules such laws may apply to national banks. Statements by OCC Suggest That State Consumer Protection Laws Can Be Consistent with Federal Law In the bank activities rulemaking, OCC specified that a state law relating to a subject listed as preempted, as well as any other state law determined to be preempted by OCC or a court, does not apply to national banks and their operating subsidiaries regardless of how the law is characterized. Accordingly, a state law would not escape preemption simply because the state describes it as a consumer protection law. However, OCC has indicated that even under the standard for preemption set forth in the rules, state consumer protection laws can apply to national banks and their operating subsidiaries. 
Moreover, to the extent that a state’s consumer protection law might apply to a subject on one of the preemption lists, OCC has not specifically indicated what characteristics of the state law would cause it to be preempted. Some state officials and consumer groups we met with were unclear as to whether a state consumer protection law would apply to national banks because it is “otherwise consistent with” the National Bank Act, even if the law were to have more than an incidental effect on a national bank’s activity. Some referred to a long-standing decision by a federal court of appeals, discussed below, holding that a state’s law restricting discriminatory real estate lending practices applied to national banks. They said that the court’s reasoning could justify the application of other types of state laws, such as consumer protection laws, to national bank business practices. In addition, on several occasions, OCC has made statements that reasonably could be interpreted to indicate that state consumer protection laws can be consistent with federal law and, therefore, not preempted, even if they directly affect a national bank’s business activity. In National State Bank of Elizabeth, N.J. v. Long, the United States Court of Appeals for the Third Circuit ruled that a provision of a New Jersey law prohibiting redlining in mortgage lending applied to a national bank. Recognizing that prohibiting redlining was consistent with federal policy, the court ruled that the bank’s compliance with the New Jersey statute would not frustrate the “aims of the federal banking system” or “impair a national bank’s efficiency” in conducting activities permitted by federal law. This decision demonstrates that a state law determined to be consistent with federal policy can govern a national bank’s exercise of a federally granted power, even if the law directly affects the way in which the bank conducts its activity. 
Some of the individuals we interviewed asserted that the Long court’s analysis justifies the application of state consumer protection laws to national bank activities, such as real estate lending, at least to the extent that the laws are consistent with federal policy. They pointed out, moreover, that federal consumer protection laws applicable to banking activities (discussed later in this report) contain savings clauses preserving from preemption state laws that impose standards and requirements stricter than those contained in the federal laws themselves, provided the state laws are otherwise consistent with the federal law. However, courts have recognized that those savings clauses do not necessarily preserve such state laws from preemption by the National Bank Act. Several consumer groups and state officials also referred to OCC statements as indications of OCC’s recognition that, to some extent, the application of consumer protection laws to national banks is consistent with federal policy. For example, since the promulgation of the preemption rules, OCC has said that state consumer protection laws, and specifically fair lending laws, may apply to national banks and their operating subsidiaries. Also, while the bank activities rule specifies that national banks may engage in real estate lending without regard to state law limitations concerning the “terms of credit,” the Comptroller recently referred to the agency’s responsibility to enforce “applicable state consumer protection laws” and cited state fair lending laws as an example. A number of state laws prohibit unfair or deceptive acts or practices, and such laws may be applicable to insured depository institutions. See, e.g., Cal. Bus. Prof. Code 17200 et seq. 
and 17500 et seq. Operating subsidiaries, which operate effectively as divisions or departments of their parent national bank, also may be subject to such state laws. . . . Pursuant to 12 CFR 7.4006, state laws apply to national bank operating subsidiaries to the same extent that those laws apply to the parent national bank, unless otherwise provided by federal law or OCC regulation. Although OCC published this guidance before it issued the bank activities rule, at the time of the guidance OCC had been following the same preemption standard it applied in the rulemaking. Officials Expressed Differing Views on Applicability of State Consumer Protection Laws We found differing views among state officials with respect to the applicability of state consumer protection laws, particularly their UDAP laws, to national banks. Officials from some state attorney general offices said that their states’ UDAP laws probably are preempted by the bank activities rule, while officials in one state were unclear. State banking department officials we spoke with also had mixed views regarding the applicability of state UDAP laws. In one state, a banking department official said that the state’s UDAP statute would likely be preempted. In another state, an official said that state’s UDAP laws would not be preempted. Two other state banking department officials were unclear about the status of their states’ UDAP laws. Representatives of national banks also had mixed views about the applicability of state UDAP laws. Representatives of one national bank stated that state UDAP laws were preempted, whereas representatives of two other national banks stated that state UDAP laws were, in fact, applicable to national banks and their operating subsidiaries. 
The status of state UDAP laws is not clear because, some argued, those laws generally are consistent with federal laws and policies and, therefore, might not obstruct, impair, or condition the ability of national banks and operating subsidiaries to carry out activities authorized by the National Bank Act. In addition to uncertainty over the applicability of state consumer protection laws, state officials, consumer groups, and others asserted that the effects of the preemption rules will remain unclear until legal arguments are resolved. The legal disputes pertain to whether OCC correctly articulated and applied the federal preemption standard, OCC’s reasons for including certain subjects of state law in the preemption lists, and the application of state laws to national bank operating subsidiaries. These issues, which essentially concern OCC’s legal authority and rationale for the rules, are summarized in appendix II. According to Most State Officials We Contacted, the Preemption Rules Have Diminished State Consumer Protection Efforts According to most state officials we contacted, the preemption rules have limited the actions states can take to resolve consumer issues and negatively affected the way national banks respond to consumer complaints and inquiries from state officials. More specifically, the state officials asserted that, after the preemption rules went into effect, some national banks and operating subsidiaries became less responsive to actions by state officials to resolve consumer complaints. In addition, some state officials noted that they previously had been able to examine operating subsidiaries without challenge, but after the visitorial powers rule was issued some national bank operating subsidiaries declined to submit to state examinations or relinquished their state licenses. 
However, other state officials reported good working relationships with national banks and their operating subsidiaries, and some national bank officials said that cooperation with state attorneys general was good business practice. While we found some examples of operating subsidiaries that did not comply with state regulatory requirements after the preemption rules were issued, we note that others had not complied with state requirements before the rules were issued. Some state officials also believed that the preemption rules might prompt holding companies with national bank subsidiaries to move lines of business from a national bank’s holding company affiliate into an operating subsidiary to avoid state regulation. No data are available that would allow us to determine the extent of any such activity, and state officials did not provide conclusive documentary evidence to support their concerns. State Officials Expressed Differing Reactions to the Rules’ Effect on Relationships with National Banks While all state officials we interviewed agreed that the preemption rules have changed the environment in which they relate to national banks and their operating subsidiaries, they have responded differently to the changes. Officials from some state attorney general offices and state banking departments told us that the preemption rules have caused them to approach national banks and their operating subsidiaries about consumer protection issues differently than they did in the past. For example, one state banking department official said that, instead of approaching a national bank operating subsidiary from a regulatory posture, the department now will try to resolve a consumer complaint with an operating subsidiary only if the department has a contact at that particular subsidiary; such contacts, however, are the exception rather than the rule. 
Other state officials told us that the preemption rules have not caused them to change their practices, either because they continued to attempt resolution at the local level or because they continued to forward unresolved complaints to OCC as they had done prior to the issuance of the preemption rules. Both before and after issuing the preemption rules, OCC issued guidance that, among other things, addressed how banks should handle contacts from state officials. Specifically, OCC issued guidance in 2002—prior to the visitorial powers rule—encouraging national banks to consult the agency about information requests by state officials to determine whether the request constituted an attempt to exercise visitorial or enforcement power over the bank. The guidance also advised national banks that state officials were to contact OCC, rather than the bank itself, if they had information to indicate that the bank might be violating federal law or an applicable state law. In February 2004, approximately 1 month after the visitorial powers rule became effective, OCC updated its 2002 guidance to clarify how national banks should respond to consumer complaints referred directly to the bank by state officials. While the 2002 guidance was silent on consumer complaint handling specifically, the 2004 guidance stated that OCC does not regard referral of complaints by state officials as an exercise of supervisory powers by the states and that national banks should deal with the complaining customer directly. The 2004 guidance advised national banks to contact OCC if (1) the bank considers a referral to be a state effort to direct the bank’s conduct or otherwise to exercise visitorial authority over the national bank or (2) the state-referred complaint deals with the applicability of a state law or issues of preemption. 
Further, the guidance notifies national banks that state officials are encouraged to send individual consumer complaints to OCC’s Customer Assistance Group, and as outlined in the 2002 guidance, reiterates that state officials should communicate any information related to a national bank’s involvement in unfair or deceptive practices to OCC’s Office of Chief Counsel. Earlier, in July 2003—prior to the visitorial powers rule—OCC had suggested a “Memorandum of Understanding” (MOU) between itself and state attorneys general and other relevant state officials that could, in OCC’s words, “greatly facilitate” its ability to provide information on the status and resolution of specific consumer complaints and broader consumer protection matters state officials might refer to the agency. The MOU was sent to all state attorneys general as well as the National Association of Attorneys General (NAAG) and the Conference of State Bank Supervisors (CSBS). Some of the officials from banking departments and the offices of attorneys general that we interviewed, as well as representatives of CSBS, said they viewed OCC’s proposed MOU as unsatisfactory because, in their view, it essentially favored OCC. In addition, some of the state officials with whom we spoke believed that signing the proposed MOU would amount to a tacit agreement to the principles of the bank activities and visitorial powers rules. According to OCC, states’ attorneys general—in informal comments on the proposed MOU—felt that the proposal was unilateral, imposing certain conditions upon states that received information from OCC but not upon OCC when it received information from state officials. Also, OCC noted that the proposed MOU did not provide for referrals from OCC to state agencies of consumer complaints OCC received pertaining to state-regulated entities. Therefore, in 2004, OCC attempted to address these concerns in a revised MOU, which it provided to CSBS and the Chairman of the NAAG Consumer Protection Committee. 
According to OCC, the revised MOU expressly says that an exchange of information does not involve any concession of jurisdiction by either the states or by OCC to the other. Only one state official signed the original 2003 MOU, and according to OCC, to date, no additional state officials have signed the 2004 version. Some State Officials Believe That National Banks and Operating Subsidiaries Are Less Inclined to Cooperate Some state officials asserted that before the preemption rules they were able to deal with national banks on more than merely a complaint-referral basis. They said that, through their regular dealings with national banks and their operating subsidiaries, consumer complaints typically had been resolved effectively and expeditiously. Among the anecdotes they provided are the following: State officials in two states said that they treated national banks and their operating subsidiaries just like any other state-regulated business; they would simply approach the institution about consumer complaints and jointly work with the institution to resolve them. An official in one state attorney general’s office referred to an effort in which the attorney general’s office successfully resolved complaints about a national bank’s transmittal of customer account information to telemarketers. Officials from another attorney general’s office said that, before the visitorial powers rule was amended, they often were able to persuade national banks to change their business practices; for example, they said they were able to encourage a national bank to discontinue including solicitations that they viewed as deceptive in consumers’ credit card statements. These officials also stated that, in the past, they were able to speak informally with national banks to get them to alter the way certain products were advertised. 
As further examples of national banks cooperating with state officials prior to the preemption rules, state officials in two states cited voluntary settlements that national banks entered with states concerning telemarketing to bank credit card holders and the sharing of bank customer information with third parties. In each of two settlements, one in March 2002 and the other in January 2003, national banks entered an agreement with 29 states in connection with judicial proceedings brought by the states concerning telemarketing practices and the disclosure of cardholder information to third parties. Also, in an October 2000 settlement made in connection with judicial proceedings initiated by a state, a national bank agreed to follow certain practices concerning the sharing of customer information with third parties. Although these settlements were made voluntarily and do not represent a judicial determination that the states had authority to enforce laws against the national banks, state officials used them to illustrate that they had some influence over the banks prior to the preemption rules. In addition, some state officials said that prior to the visitorial powers rule, many operating subsidiaries submitted to state requirements regulating the conduct of their business, such as license requirements for mortgage brokering. Also, according to some state officials, prior to the issuance of the visitorial powers rule their states examined and took enforcement actions against operating subsidiaries because they were state-licensed and regulated, and OCC did not interfere. Some state authorities maintained, however, that by removing uncertainties about state jurisdiction and the applicability of state law that may have served as an incentive for cooperation, the preemption rules made it opportune for the institutions to be less cooperative. 
One official from an attorney general’s office, emphasizing the importance of consumer protection at the local level, stated that the preemption rules have in effect precluded the state from obtaining information from national banks that could assist the state in protecting consumers. The official pointed out that the state’s ability to obtain information from operating subsidiaries enhanced state consumer protection efforts because the institutions would refrain from abusive practices to avoid reputation risk associated with the disclosure of adverse information. Further, many state officials we spoke with expressed a concern that, because of the preemption rules, national bank operating subsidiaries that formerly submitted to state supervision no longer do so. According to some state officials, because of the preemption rules, operating subsidiaries either threatened to relinquish or actually relinquished their state licenses, or did not register for or renew their licenses. Specifically: State officials in two states provided copies of letters they received from operating subsidiaries, citing the visitorial powers rule as the basis for relinquishing their state licenses. An official in one state attorney general’s office provided a list of 27 national bank operating subsidiaries that notified the office that they would no longer maintain their state licenses. Banking department officials in one state estimated that 50-100 operating subsidiaries had not renewed their licenses. While the preemption rules may have prompted some national bank operating subsidiaries to relinquish their state licenses or otherwise choose not to comply with state licensing laws, we note that others did so before the preemption rules were issued. For example, in January 2003 an entity licensed by the State of Michigan that engaged in making first mortgage loans became a national bank operating subsidiary. 
In April 2003, the entity advised the state that it was surrendering its lending registration for Michigan. Some state officials said that because the visitorial powers rule precludes state banking departments from examining operating subsidiaries, the potential exists for a “gap” in the supervision of operating subsidiaries. According to them, without state examination, consumers may be harmed because unfair, deceptive, or abusive activities occurring within operating subsidiaries may not be identified. Although OCC’s procedures state that any risks posed by an operating subsidiary are considered in the conduct of bank examinations and other supervisory activities, state officials nonetheless doubted OCC’s willingness to monitor compliance with applicable state laws. Some questioned how examiners would know what state laws, if any, apply to national banks and how examiners would review compliance with such laws. While OCC examiners noted that they generally did not have procedures for examining compliance with state laws, OCC officials explained that if they identify a state law requirement that is applicable to national banks and operating subsidiaries, examiners are advised so that they can take the requirement into account as they determine the scope of their examinations. Other State Officials Reported Little Change in Relationships with National Banks The above-described concerns of state officials are not universal. In one state, officials with whom we spoke acknowledged that they still have good working relationships with national banks and their operating subsidiaries. Further, officials of some national banks with whom we spoke stated that they viewed compliance with state laws and cooperation with attorneys general as good business practice. For example, one national bank representative stated that knowing about problems that consumers were having helped to provide better services and reduce the potential for litigation. 
The individual added that the bank wants to maintain relationships with state attorneys general, and if they make an honest effort to engage the bank, then the bank also would engage the attorneys general. Another national bank representative stated that the bank typically tries to focus on resolving the concern rather than quibble about whether the issue falls under federal or state jurisdiction.

Some State Officials Were Concerned That the Preemption Rules Could Prompt the Creation of Operating Subsidiaries to Avoid State Regulation

Some state officials with whom we spoke expressed concern that the preemption rules might prompt national banks to bring into the bank lines of business traditionally regulated by the states. According to this view, nonsubsidiary affiliates of national banks, such as a mortgage broker controlled by a holding company that also controls a national bank, could be restructured as operating subsidiaries to avoid state supervision and licensing requirements. According to FRB officials, movements of bank holding company subsidiaries to national bank operating subsidiaries have occurred for some time, including before OCC issued the preemption rules. However, FRB does not collect data specifically on such movements. Many lines of business that constitute the business of banking under the National Bank Act, such as mortgage lending and brokering and various types of consumer lending, are conducted by nonbank entities. According to some individuals we spoke with, a bank holding company controlling both a national bank and such a nonbank entity might perceive some benefit in having the nonbank's business take place through the bank and, therefore, cause the bank to acquire the nonbank as an operating subsidiary.
Federal courts considering the status of national bank operating subsidiaries have upheld OCC's position that operating subsidiaries are a federally authorized means through which national banks exercise federally authorized powers, holding that operating subsidiaries are subject to the same regulatory regime that applies to national banks, unless a federal law specifically provides for state regulation. Under these precedents, converting a nonsubsidiary affiliate or unaffiliated entity into a national bank operating subsidiary would subject the entity to OCC's exclusive supervision. Moreover, state laws preempted from applying to national banks would be preempted with respect to the entity once it became an operating subsidiary. FRB officials with whom we spoke said that a national bank's cost of conducting a business activity in an operating subsidiary could be less than the cost of conducting that activity through a holding company affiliate. Therefore, a bank holding company could have an incentive to place a state-regulated activity in a national bank operating subsidiary. However, this would have been true both before and after the preemption rules, all else being equal. As discussed in the following section, the financial services industry has undergone many technological, structural, and regulatory changes during the past decade and longer. Determining how the preemption rules, in comparison with any number of other factors that might influence how banks or holding companies are structured, would factor into a national bank's decision to acquire a nonbank entity was beyond the scope of our work.

The Rules' Effect on Charter Choice Is Uncertain, but Some States Are Addressing Potential Charter Changes

Many factors affect charter choice, and we could not isolate the effect of the preemption rules, if any, on charter changes.
According to some state regulators and participants in the banking industry, federal bank regulation could be advantageous to banks when compliance with state laws would be more costly, thereby creating an incentive for banks to change charters. However, because the financial services industry has undergone significant changes—involving interstate banking, globalization, mergers, and consolidations—it is difficult to isolate the effects of regulation from other factors that could affect choice of charter. According to our analysis of FRB and OCC data from 1990 to 2004, the number of banks that changed between federal and state charters was relatively small compared with all banks. However, total bank assets under state supervision declined substantially in 2004 because two large state-chartered banks changed to the federal charter; further, such shifts in assets have budgetary implications for both state regulators and OCC. Based on our work, we could draw no conclusion about the extent to which OCC's preemption rules had any effect on those events or will have on future charter choices. Nevertheless, several state officials expressed the view that federal charters likely bestow competitive advantages in light of the preemption rules; in response, some states addressed potential charter changes by their state banks. For example, one state changed its method of collecting assessments.

Industry Changes and Other Factors May Affect Charter Choice

Any effect, or perceived effect, of the preemption rules on charter choices by state-chartered banks has to be viewed against the evolution of the financial services industry over roughly the past 20 years—changes that make it difficult to assess the impact of the preemption rules.
Some of the bank officials and other bank industry participants we interviewed noted these industry changes when discussing their views on the preemption rules and acknowledged that many factors may affect banks' choices between federal and state charters.

Banking Business Has Changed in Many Ways

Like other parts of the financial services industry, which includes the securities and insurance sectors, modern banking has undergone significant changes. Interstate banking and globalization have become characteristics of modern economic life. On both the national and international levels, banks have a greater capacity and increased regulatory freedom to cross borders, creating markets that either eliminate or substantially reduce the effect of national and state borders. Deregulation and technological changes have also facilitated globalization. Consolidation (merging of firms in the same sector) and conglomeration (merging of firms from different sectors) have increasingly come to characterize the large players in the financial services industry. The roles of banks and other financial institutions and the products and services they offer have converged so that these institutions often offer customers similar services. As a result, the financial services industry has become more complex and competition sharper. In our October 2004 report on changes in the financial services industry, we cited technological change and deregulation as important drivers of consolidation in the banking industry. For example, in the early 1980s, bank holding companies faced limitations on their ability to own banks located in different states. Some states did not allow banks to branch at all. With the advent of regional interstate compacts in the late 1980s, some banks began to merge regionally.
Additionally, the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994 removed restrictions on bank holding companies' ability to acquire banks located in different states and permitted banks in different states to merge, subject to a process that permitted states to opt out of that authority. While the U.S. banking industry is characterized by a large number of small banks, the larger banking organizations grew significantly through mergers after 1995. Convergence of products and services in the banking industry means that now many consumers can make deposits, obtain a mortgage or other loan, and purchase insurance or mutual funds at their bank. Other market factors have made some banks rely more on fee-based income from, among other services and products, servicing loans they sold to other institutions and fees on deposit and credit card activity (including account holder fees, late fees, and transaction fees). Thus, consumer protection issues have become increasingly important to the industry.

Many Factors May Affect Choice of Charters

Bank and banking industry association officials and state and federal regulators we interviewed told us that choice of charter is influenced by many factors. For example, the size and complexity of banking operations are important factors in determining which charter will serve an institution's business needs. Bank and other officials also cited the importance of supervisory and regulatory competence and expertise tailored to the scale of a bank's operations. For example, officials of some large national banks stated that they valued OCC's ability to effectively supervise and regulate large-scale banks with complex financial products and services. Officials from one large state bank said that they valued federal supervision by FDIC and, at the holding company level, FRB.
Some bank and state and federal regulatory officials said that smaller banks prefer the generally lower examination fees charged by state regulators and lower regulatory compliance costs associated with their state charters relative to the federal charter. For example, officials of one small state bank, which was previously federally chartered, said that they had to undertake more administrative tasks under the federal charter, such as greater reporting requirements needed to demonstrate compliance with federal laws, and that such tasks were relatively burdensome for a small bank. Some bank officials and state and federal regulators agreed that smaller banks with few or no operations in other states value accessibility to and convenient interaction with state regulators. Additionally, officials of smaller banks said they value the state regulators’ understanding of local market conditions and participants and the needs of small-scale banking. Officials of one small state bank said that, when their bank switched from the federal charter to a state charter, one important consideration was the state regulators’ frequent visits to the bank and their responsiveness and accessibility. Bank officials, industry representatives, and regulators also agreed that new banks tend to be state-chartered because state regulators tend to play an important role in fostering the development and growth of start-up banks. Bank and state regulatory officials noted that a pre-existing relationship between a bank’s senior management and a regulator or management’s knowledge about a particular regulator can play an important role in choosing or maintaining a charter. For example, officials noted that if management has already established a good, long-term relationship with a particular regulator, or if they were familiar with a regulator, they would likely remain with that regulator when considering charter options. 
Officials from two large, state-chartered banks operating in multiple states said that they valued their relationship with their home state regulators because they were very responsive and provided quality services. Officials from one of the banks stated that they knew the staff of their state banking department very well, and they respected the banking commissioner's "hands-on" approach to supervision. Mergers and acquisitions of banking institutions also influence charter choice. For example, officials from a large banking institution stated that, because their merger with another large banking institution combined federally and state-chartered entities, they decided to convert from a state to the federal charter to maintain only one charter type in the resulting company. As a result, they believed they would be able to simplify their operations, reduce inefficiencies, and lower risks to the financial safety and soundness of the merged company and have the advantages of the federal charter companywide. The history of acquisitions in a company also may affect charter choice. For instance, officials of one bank said they obtained their federal charter by acquiring a federally chartered bank and then continued to acquire more federally chartered banks. Similarly, according to officials of a state bank, they typically integrate banks they acquire into their existing state charter.

Few Banks Have Changed Charters, but Shifts in Bank Assets Have Budgetary Implications for State Regulators and OCC

Our analysis of data on charter changes among federally and state-chartered banks from 1990 through 2004 showed that few banks overall changed charters—either switching from federal to state or state to federal—during that period. About 2 percent or less of all banks in these years changed charters; 60 percent of all changes were to the federal charter. Most charter changes occurred in connection with mergers rather than conversions.
Appendix III provides details on charter changes. While the numerical shift in bank charters was not significant from 1990 through 2004, there was a major shift in the distribution of bank assets to the federal charter in 2004. As illustrated in figure 2, the division of assets between federally chartered and state-chartered banks remained relatively steady for a decade; between 1992 and 2003, national banks held an average of about 56 percent of all bank assets, and state banks held an average of about 44 percent. However, in 2004, federally chartered banks' share of bank assets increased to 67 percent, and state-chartered banks' share decreased to 33 percent. While part of this increase may be explained by the growth of federally chartered banks, two charter changes in 2004—JP Morgan Chase Bank and HSBC Bank—substantially increased the share of all bank assets under the federal charter. Changes in bank assets among state and federal regulators have budgetary implications because of the way the regulators are funded. Most state regulators are funded by assessments paid by the banks they oversee. The state banking departments collect assessments, often based on the supervised bank's asset size. As a result, a department's budget may be vulnerable if the department collects a significant portion of its revenue from a few large banks and one or more of them change to the federal charter. For instance, when two of the largest banks in one state changed to the federal charter, the state regulator lost about 30 percent of its revenue. We analyzed funding information in two states we visited to estimate how a change to the federal charter by the largest state bank in each state could affect those state regulators' budgets. In the first state, if the largest state bank were to change to the federal charter, the state regulator's assessment revenue would decrease by 43 percent.
In the second state, the charter change of the largest state bank would decrease assessment revenue by 39 percent. Some state banking department officials told us that loss of revenue has caused or may cause them to adjust their assessment formulas and find other sources of revenue. Others suggested that budget volatility also might make it difficult to hire and retain the expert staff they need. OCC is funded primarily from assessments it charges the banks it supervises; it does not receive any appropriations from Congress. (See app. IV for details on OCC's assessment formula and app. V for information on how some other federal regulators are funded.) Between 1999 and 2004, the assessments collected from national banks funded an average of 96 percent of OCC's budget. Thus, its budget also could be affected by charter changes. OCC derives much of its assessments from a relatively small number of institutions. Although OCC oversees about 1,900 national banks, the 20 largest banks accounted for approximately 57 percent of OCC's assessments in December 2004. Since 1999, the percentage of OCC's budget paid by its largest banks has been increasing (see fig. 3). The potential exists for OCC to experience budget repercussions if large national banks decide to change to a state charter, resulting in fewer assets under OCC's supervision. However, before December 2004, conversions generally affected less than 1 percent of OCC's assessment revenue. Figure 4 shows gains and losses in assessments paid to OCC relative to the total amount collected in assessment payments. OCC's assessment revenue from conversions to the federal charter jumped by about 8 percent, or about $23 million, as of December 31, 2004. The increase is largely attributable to the conversion of one of the two large state banks mentioned previously, which accounted for about 98 percent of the increase in OCC's assessment revenue from charter conversions.
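Because both state regulators and OCC rely on asset-based assessments, the budget impact of a single large bank's conversion is a straightforward matter of revenue concentration. The short Python sketch below illustrates the arithmetic; the bank names, asset sizes, and budget figures are invented for illustration and are not drawn from our analysis.

```python
# Illustrative only: names and figures are invented assumptions.
# Assessments are assumed to be spread in proportion to bank assets.

def assessments(budget, assets):
    """Spread a department's budget across banks in proportion to their assets."""
    total = sum(assets.values())
    return {bank: budget * size / total for bank, size in assets.items()}

# A hypothetical state regulator funded by four banks (assets in $ billions).
assets = {"Bank A": 43.0, "Bank B": 25.0, "Bank C": 20.0, "Bank D": 12.0}
before = assessments(10.0, assets)  # $10 million departmental budget

# If Bank A converts to the federal charter, its share of assessment
# revenue leaves with it.
lost_share = assets["Bank A"] / sum(assets.values())
print(f"revenue share lost: {lost_share:.0%}")  # prints "revenue share lost: 43%"

# Under an expense-proportionate formula, the (smaller) budget for
# supervising the remaining banks is simply re-spread among them.
remaining = {b: a for b, a in assets.items() if b != "Bank A"}
after = assessments(8.0, remaining)  # supervision costs fall too
```

The sketch also shows why an expense-proportionate formula cushions, but does not eliminate, the impact of a conversion: the departing bank's assessment does not vanish from a fixed budget; instead, the department's reduced expenses are reallocated across the banks that remain.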
According to OCC officials, the agency is in a position to withstand any serious decrease in its revenue stream. OCC's financial strategy includes establishing reserves to address unexpected fluctuations in assessment revenue and increases in demand on resources. According to OCC, its "contingency reserve" would be used to counter any adverse budgetary effects of a large national bank changing charters. OCC's policy is to maintain the contingency reserve at between 40 and 60 percent of its budget; at the beginning of fiscal year 2006, the reserve was 49 percent of OCC's budget. According to OCC, having a reserve allows the agency to handle any change in revenue in a controlled way and reduce the impact of budget volatility and any need to suddenly increase assessments charged to the banks it supervises.

Some Officials Perceive Competitive Advantages in the Federal Charter and Have Taken Actions to Address Potential Charter Changes by State Banks

According to some state officials, because of federal preemption, national banks do not have to comply with state laws that apply to banking activities and, to the extent that compliance with federal law is less costly or burdensome than state regulation, the federal charter provides for lower regulatory costs and easier access to markets. Therefore, some state regulators and banking industry officials expressed concern that the federal charter, and particularly the preemption of state laws, will result in competitive advantages for federally chartered banks over state-chartered banks. According to some state officials, state-chartered banks that operate in multiple states could be at the greatest competitive disadvantage. In contrast, according to OCC and many banking industry participants, from a legal perspective the preemption rules did not change anything; state laws always have been subject to preemption under the National Bank Act and the Supremacy Clause of the U.S.
Constitution and, therefore, state banks historically have faced the possibility that a federal charter could be more beneficial than a state charter. As noted above, many factors may influence banks' choice of a state or federal charter. Substantiating claims about any competitive advantage for federally chartered banks would involve, among other things, comparing states' laws and regulations with federal laws and regulations, determining which set overall would be less burdensome or costly for a particular bank, and obtaining and analyzing data and individual opinions about whether differences in burden and compliance costs, if any, would be significant enough to limit a state bank's ability to compete with national banks. A study of this magnitude was beyond the scope of this report and, during our work, we did not learn of any study demonstrating that the preemption-related aspects of a national bank charter generally give national banks a competitive advantage over state-chartered institutions.

Officials Perceived Some Competitive Advantages for the Federal Charter

Many officials that we spoke with said that preemption of state law could make the federal bank charter attractive for some state banks. Representatives of a number of federally and state-chartered banking institutions and industry associations stressed the value of not having to comply with different state laws and of having more regulatory uniformity throughout the country under a federal charter. They stated that large banks and banks with operations in multiple states prefer the federal charter because it makes it easier and less costly to do business. Some bank officials stated that OCC preemption also makes the federal charter attractive because the rules clarify supervisory and regulatory authority for national banks and their operating subsidiaries and also encourage more standardized banking practices.
For example, officials from one national bank said that changes in the banking industry, such as interstate banking, make it more important for banks with multistate operations to have uniform federal regulations to operate across states and to achieve economies of scale. Similarly, an official from another large national bank noted that a consistent set of national regulations also helps banks offer increasingly mobile consumers financial products and services across the country. Officials from one large federally chartered bank said that a state charter would be impractical for them because it would be expensive to develop and maintain different operational systems for different state laws. Officials from one large, state-chartered bank operating in multiple states said that they need to tailor their products, fees, forms, disclosures, and staff training to the requirements of each state and that the requirements could be conflicting. In contrast, they said, national banks face fewer legal discrepancies when operating under the federal charter. Officials from one state-chartered bank with multistate operations also said that they invest significant resources to keep abreast of and monitor state regulatory matters in the various states where they operate. Furthermore, some bank officials noted that mistakes are more likely to occur when business operations must be tailored to the multiple and differing requirements of different states. Despite these challenges, officials from these state-chartered banks believed the benefits of being state-chartered outweighed the challenges.

Some States Have Made Efforts to Address the Potential Impact of Charter Changes

State officials noted two efforts to address potential charter changes by their state banks: strengthening their state parity laws, which generally confer on state banks the same powers given to national banks, and changing their funding sources.
Some individuals we interviewed suggested that the use of state "parity" statutes could help level the playing field between federal and state regulation. Parity statutes generally grant state-chartered banks the same powers given to national banks and treat state banks like national banks in other ways. According to data from CSBS, prior to the rules, 46 states had parity statutes granting state-chartered banks parity with national banks. Of those, 11 states had parity statutes that were triggered automatically; that is, when national banks were granted certain powers, state banks in those states were automatically granted the same powers. According to CSBS, the remaining 35 states' parity laws require the state bank regulator's permission before a state bank is allowed to operate under the parity law's provisions. Representatives of one state bankers association told us that in their state—where regulator approval for parity is required—there are often long delays, with some state banks' parity applications pending for 2 to 3 years. Further, according to state regulators, many states' parity laws include other restrictions that some say may make it difficult for a state bank to be competitive with national banks. After the preemption rules were promulgated, one state bank regulator proposed enhancing the state's current parity statute to include an automatic trigger for state-chartered banks when national banks are given certain powers by OCC. Because of such delays and restrictions, in the view of many industry participants and observers we spoke with, state parity laws are not an ideal solution for leveling any competitive advantage federally chartered banks might have over state-chartered banks. However, an effective parity law could provide an incentive for existing state-chartered banks to maintain their state charters.
We note that views about the rationale for parity laws generally did not address other possible explanations for those laws, such as a belief that federal regulation is an appropriate model for regulating and supervising state banks. Regardless of a state's reasons for having a parity law, many participants and regulators in the banking industry maintain that without regulatory parity the dual banking system will suffer because banks will migrate to the regulatory regime they consider to be most advantageous. In recent testimony before FDIC, some regulatory officials and banking industry representatives testified that unless efforts were made to restore parity between federal and state bank regulation, the dual banking system, which they described as having encouraged economic development especially at the community level, would be adversely affected, as would healthy competition, regulatory innovation, and checks and balances among state and federal regulators. Some state regulatory officials with whom we spoke recognized the budgetary consequences associated with state banks changing to the federal charter. To reduce the impact on their budgets if one of their largest state-chartered banks changed charters, some state regulators we spoke with have taken steps to limit the potential for instability. For example, one state banking department has changed its method of collecting assessments. Prior to 2005, this department collected assessments only from approximately 300 depository institutions, a small portion of the roughly 3,400 bank and nonbank institutions, such as check cashers and money transmitters, that it oversaw. Now this state regulator collects assessments from all of its regulated entities. Other officials said they were considering alternative methods of determining assessments to decrease their banking departments' reliance on one or a few banks to sustain their budgets.
Although state banking departments would have smaller budgets if assessments were lost, the decrease in assessments would be somewhat offset by the decreased costs of supervising a smaller group of banks and other financial entities. Other factors could also mitigate the consequences of any loss of revenue to a state regulator, such as funding formulas that cushion the impact of charter conversions. For instance, in one state that we visited, each bank effectively paid a proportionate amount of the banking department's expenses as its assessment, with consideration given for asset size. As a result, assessments varied directly with changes in the department's spending and with the number of state-chartered banks. Banking department officials in this state believed that it would take about 100 state-to-federal charter conversions to affect funding significantly. (See app. VI for information on how state bank regulators are funded.)

Suggested Measures for Addressing State Consumer Protection Concerns Include Shared Regulation, Which Raises Complex Policy Issues, and Greater Coordination between OCC and States

Some state officials and consumer groups identified three general measures that they believed could help address their concerns about protecting consumers of national banks and operating subsidiaries: (1) providing for some state jurisdiction over operating subsidiaries; (2) establishing a consensus-based national consumer protection lending standard; and (3) working more closely with OCC, in part to clarify the applicability of state consumer protection laws to national banks and their operating subsidiaries. The first measure would most likely involve amending the National Bank Act and, along with the second measure, raises a number of legal and policy issues. The third measure would involve OCC's clarification of the effect of the bank activities rule on state consumer protection laws.
Shared Supervisory Authority over Operating Subsidiaries Would Assist State Officials with Consumer Protection Efforts, but the Concept Raises Questions about the Supervision of National Bank Activities

Some state officials we interviewed suggested that states should have a direct monitoring or supervisory role over operating subsidiaries, particularly with respect to consumer protection matters, because the subsidiaries are state-chartered. Providing such a role would likely require amending the National Bank Act to specify either that (1) the states and OCC share jurisdiction over operating subsidiaries or (2) operating subsidiaries are to be treated as national bank affiliates. However, providing for state involvement in the supervision of operating subsidiaries, even if only for consumer protection purposes, raises difficult questions. Some individuals we interviewed said that doing so would significantly interfere with Congress' objectives in establishing a national banking system. Others maintained that state and federal supervisory interests could be balanced without undermining national banks' ability to conduct business. Some supporters of state supervision maintain that the National Bank Act currently does not preempt the application of state laws to operating subsidiaries. Those subscribing to this view maintain that OCC's interpretation of the act is wrong because the preemption standards and visitorial powers limitations under the act pertain specifically to national banks, not to their operating subsidiaries. According to this position, under the National Bank Act and other federal banking laws, operating subsidiaries should be treated as "affiliates" of national banks, and federal law recognizes the authority of states to regulate affiliates. However, several recent federal court decisions have held that OCC has reasonably interpreted the National Bank Act to permit a national bank's use of operating subsidiaries to conduct its business.
Some authorities assert that Congress agrees with OCC's regulatory scheme for operating subsidiaries, pointing out that Congress has let the interpretation stand for more than 30 years. They also refer to a provision of GLBA, in which Congress specifically recognized the existence of operating subsidiaries by using OCC's interpretation to describe them. Further, they maintain that, in GLBA, Congress implicitly recognized that OCC's authority over operating subsidiaries is exclusive unless Congress specifically says otherwise. Because operating subsidiaries are, by definition, a part of a national bank's business activity, amending the National Bank Act to provide states with authority to regulate them concurrently with OCC would set the stage for state regulation of a national bank in exercising its federally granted powers. One possible effect of this approach is that, even if a state's authority over an operating subsidiary were limited to consumer protection, it would be difficult to keep state supervision from reaching the bank itself. Assuming that a regulatory line could be drawn to separate the activities of the operating subsidiary from those of the bank, states would need to monitor, if not supervise, the activities that trigger consumer protection concerns. To the extent that these activities reflect business decisions, policies, or practices by the national bank, an opportunity would exist for state intrusion into the bank itself. This could lead to, among other things, regulatory disputes over jurisdiction, differing views about the safety and soundness of the bank, or other points of contention arising from regulatory policies and objectives of OCC and the states. Similarly, amending the National Bank Act to specify that operating subsidiaries are affiliates of national banks could have unintended consequences.
Assuming that the activities of an operating subsidiary continued to be limited to activities permissible for its parent bank, the bank simply could move those activities into the bank, in which case the efficiencies gained from conducting those activities through a separate unit, if any, would be lost. Alternatively, those activities could be shifted to an affiliate of the bank. The potential impact on the national bank's delivery of products and services, its costs, and its safety and soundness could be significant. Some state officials noted that states already work effectively with other federal regulators to monitor and enforce compliance with consumer protection laws. They described efforts their offices took with federal regulators, such as FTC, to identify and take enforcement actions against unlawful practices. One official said FTC works with state officials by meeting periodically with state regulators to discuss issues of mutual concern and, when appropriate, to divide investigative responsibilities. In one instance, the state attorney general coordinated efforts with FTC to investigate and reach settlement with certain entities that had engaged in deceptive practices. Some state officials said that their relationships with federal regulators were based on shared regulatory authority over state-chartered entities and that a similar relationship with OCC should exist with respect to national bank operating subsidiaries. We were told of similar federal-state arrangements with respect to state-chartered depository institutions that are subject to both federal and state supervision. State officials said that they work with FDIC and FRB regularly to conduct bank examinations and identify and stop practices that violate applicable laws.
As an example, one official said that state regulators, FDIC, and FRB have entered into cooperative regulatory agreements to supervise interstate operations of state-chartered banks that conduct activities in other (host) states. Some state officials said that having the same kind of relationship with OCC concerning national bank operating subsidiaries would enhance consumer protection in their states. All of the above examples involve state-chartered entities that are outside of a national bank and are subject to supervision by both federal and state authorities. Although those entities are subject to some federal laws that preempt or can have a preemptive effect on state laws, state officials generally believed that states have enough supervisory authority over the institutions to ensure their conformity with state policies as expressed in state laws. The extent to which those examples should serve as models for national bank regulation depends on several considerations, not the least of which would involve policy judgments about the autonomy, if not the purpose, of the national bank charter.

A Consensus-Based National Consumer Protection Lending Standard Applicable to All Lending Institutions Would Provide Uniformity but Limit State Autonomy

During our work, we asked state officials and others for their opinions on whether it is desirable to have a federal lending law ensuring the same level of consumer protection to customers of all lending institutions, including banks, regardless of charter. Officials from state bankers' associations asserted that a national standard may already exist, for example, in the FTC Act, which among other things prohibits businesses from engaging in unfair and deceptive acts and practices. In addition, officials referred to other federal consumer protection laws that apply to national banks and national bank operating subsidiaries.
Others stated that existing consumer protection standards in federal lending laws are weak and suggested a stricter, consensus-based national consumer protection standard applicable to lending activities by all state-chartered and federally chartered financial institutions. Assuming such a standard could be set, lenders and consumers could rely upon protections that would not change based upon the lending institution’s charter. A consensus-based national consumer protection lending standard, however, would appear to limit states’ abilities to enact standards of their own. The rationale for a consensus-based national consumer protection lending standard generally is that (1) developing such a standard would protect consumers more than existing laws do and (2) having the right type of standard would help reduce concerns about the preemptive effect of federal laws on state consumer protection programs. However, some of the individuals we interviewed agreed that adopting a consensus-based national consumer protection lending standard, even if sound policy, would be difficult to accomplish. Among other things, defining the conduct subject to a standard could be difficult. While some individuals referred to certain antipredatory lending bills pending in Congress as appropriate models, others stated that it would be difficult to find a uniform solution to practices that are viewed as predatory. We also found mixed views on whether a consensus-based national consumer protection lending standard should serve as a ceiling (which would not allow state authorities to impose more stringent standards) or a floor (which would so allow). Some officials stated they would prefer a floor so that states could go farther to address the particular needs of their states. One state attorney general official stated that the benefits of having a floor would be realized when there were more specific practices that needed to be addressed, such as predatory lending. 
On the other hand, another attorney general official said that floor-type standards such as those contained in federal laws, such as the Truth in Lending Act and the FTC Act, do not themselves impose adequate protections and often have not led to more protective state laws. Under either approach, valid regulatory objectives could be compromised. A federal "ceiling" could deprive states of the ability to address practices and implement policies unique to local conditions. State officials and consumer groups maintain that the states serve as laboratories for regulatory innovation necessary for adequately policing financial industry products and practices. A uniform national "ceiling" could deprive states of the ability to act independently. Conversely, a limitation of the "floor" approach is that states could impose differing standards that would defeat the objective of uniformity.

An OCC Initiative to Clarify Preemption With Respect to State Consumer Protection Laws Could Assist in Achieving Consumer Protection Goals

As discussed earlier in this report, many state officials and consumer groups have expressed uncertainty over the extent to which state consumer protection laws apply to national banks and their operating subsidiaries. At the same time, OCC stated that the agency would like to work cooperatively with the states to further the goal of protecting consumers. Based on our work, it appears that OCC's clarification of the effect of the preemption rules on state consumer protection laws would assist states in their consumer protection efforts and could provide an opportunity for the agency to work with states more broadly on consumer protection concerns. OCC informed us of its efforts to work with states on preemption issues. For example, an OCC representative stated that OCC hoped to harmonize OCC's and the states' authorities to provide effective and efficient protections for consumers.
Also, in 2004 testimony before the Senate, the Comptroller described OCC's commitment to protect consumers and welcomed opportunities to share information and cooperate and coordinate with states to address customer complaints and consumer protection issues. However, OCC has no formal initiative specifically addressing the applicability of state consumer protection laws. State officials and others suggested that OCC undertake an initiative to work with the states in clarifying the scope of preemption with respect to state consumer protection laws and to coordinate OCC and state consumer protection objectives. Clarifying the applicability of state consumer protection laws would be consistent with a strategy for achieving one of OCC's strategic goals, which is to enhance communication with state officials to facilitate better coordination on state law issues affecting national banks. Further, unlike the two measures discussed previously, such an OCC initiative would not involve statutory amendments. One state official cited an example of cooperation between OCC and the state to protect consumers: a case in which OCC and the State of California coordinated their efforts to initiate proceedings against a national bank and some of its state-chartered affiliates. The actions were based on alleged unfair and deceptive practices that violated the FTC Act, the California Business and Professions Code, the Fair Credit Reporting Act, and other applicable laws. OCC instituted an enforcement action against the national bank, while the state filed a civil judicial action against the national bank's state-chartered parent—a financial corporation—and two other state-chartered affiliates. In June 2000, both actions were settled. The defendants did not admit or deny the allegations against them, but in both proceedings they agreed to payment of a $300 million "restitution floor" as seed money for a restitution account.
Under the settlement, any payment made by a defendant in one proceeding would discharge any identical payment obligation by the other defendants in the other proceedings. Even though OCC and the state initiated separate proceedings against separately supervised institutions, they worked together to treat the restitution floor obligation as a joint settlement. According to some state officials and others, an OCC initiative to clarify the applicability of state consumer protection laws could assist both OCC and the states in their consumer protection efforts. It could also facilitate the sharing of information between the states and OCC on conditions in a state or locality that might be conducive to predatory lending or other abuses. State officials told us that states have knowledge of local conditions that allow them to identify abusive practices within their jurisdictions. A means for the states to systematically share this kind of information with OCC could help the agency in its supervision of national banks and operating subsidiaries.

Conclusions

Although the preemption rules were intended to provide a clear statement of OCC's standard for preemption and its exclusive visitorial powers authority, the bank activities rule does not fully resolve uncertainties about the applicability of state consumer protection laws to national banks and their operating subsidiaries. Based on OCC's own statements, the scope and the effects of the rules are not entirely clear. It is, therefore, not surprising that some state officials said they are uncertain as to what state consumer protection laws apply to national banks and their operating subsidiaries. Many state officials we spoke with maintain that their ability to protect consumers by directly contacting national banks and their operating subsidiaries has been diminished by the preemption rules.
However, to date, courts have upheld OCC’s view that it has exclusive authority to supervise national bank operating subsidiaries. State officials reported that they have maintained cooperative relationships with national banks and/or operating subsidiaries since OCC issued the preemption rules. While state officials expressed particular concerns that the rules could prompt national banks, or their holding companies, to move activities into operating subsidiaries in order to avoid state regulation, such movements occurred prior to the rules and can result from many factors. OCC has issued guidance to national banks designed to facilitate the resolution of individual consumer complaints and address broader consumer protection issues that state officials believe warrant attention. Changes in charter type—federal or state—are influenced by many factors including whether or not a bank has operations in multiple states. Consistent federal laws throughout the country are an attraction to banks with a presence in more than one state and especially banks with a national presence. Preemption of state law is part of that attraction for such banks but cannot be attributed as the sole reason some banks choose the federal charter. While the number of charter changes has been relatively small during the period we reviewed (1990 through 2004), the amount of the corresponding bank assets that moved from state bank regulators’ supervision to that of OCC as a result of charter changes did increase noticeably in 2004, albeit largely because of the charter conversion of one large bank. Because both OCC and state banking departments are funded by the entities they regulate and their formulas for the assessments charged are based partially, if not totally, on the assets of the banks and other entities they regulate, their budgets and workloads can be affected by changes in bank charters. 
OCC has a reserve fund to protect itself from any dramatic shifts away from the federal charter, but some state banking departments' budgets and workloads could face reductions if large state banks changed to the federal charter. Our work identified three general measures that, while not necessarily exhaustive of all potential measures, could help address state officials' concerns about protecting consumers of national banks and operating subsidiaries. Two of these—providing for some state jurisdiction over operating subsidiaries and establishing a consensus-based national consumer protection lending standard—raise a number of complex legal and policy issues of their own and could be difficult to achieve. The third measure, in contrast to the first two, would not raise complex issues such as the potential need to amend the National Bank Act. Rather, it would require OCC to clarify the characteristics of state consumer protection laws that would make them subject to federal preemption. We recognize the impracticality of specifying precisely which provisions of state laws are, or are not, preempted, and acknowledge that some uncertainty may always exist. Nevertheless, we believe that an OCC outreach effort to describe in more detail which characteristics of state consumer protection laws would make them subject to preemption could help state officials better understand the effect of the rules and help allay their concerns. OCC has expressed a willingness to reach out to states regarding consumer protection issues. Further, such efforts would be consistent with OCC's strategic goal of enhancing communication with state officials to facilitate better coordination on state law issues affecting national banks.

Recommendation for Executive Action

We recommend that the Comptroller of the Currency undertake an initiative to clarify the characteristics of state consumer protection laws that would make them subject to federal preemption.
Such an initiative could serve as an opportunity for dialogue between OCC and the states on consumer protection matters. For example, OCC could hold forums where consumer protection issues related to federal and state laws could be discussed with state officials and consumer advocates. This could improve communication and coordination between OCC and state officials with respect to the impact of the preemption rules on the applicability of state consumer protection laws and could also assist both OCC and the states in their consumer protection efforts.

Agency Comments and Our Evaluation

We provided a draft of this report to OCC for review and comment. In written comments (see app. VII), the Comptroller of the Currency generally concurred with the report and agreed with the recommendation. Specifically, the Comptroller stated that the report contained a number of observations that were consistent with OCC's views on the relationship between the preemption rules and a bank's choice between the federal and state charters. OCC commented that the preemption rules provided clarification regarding the types of state laws listed in the regulations, and noted that recent court decisions reflect a growing judicial consensus about uniform federal standards that form the core of the national banking system. OCC agreed with our observation that it may be impractical to specify precisely which provisions of state laws are, or are not, preempted. However, OCC recognized that it should find more opportunities to work cooperatively with the states to address issues that affect the institutions it regulates, enhance existing information concerning the principles that guide its preemption analysis, and look for opportunities to generally address the preemption status of state laws. Accordingly, OCC described one new initiative intended to enhance federal and state dialogue and coordination on consumer issues.
OCC stated that the Consumer Financial Protection Forum, chaired by the U.S. Department of the Treasury, was established to bring federal and state regulators together to focus exclusively on consumer protection issues and to provide a permanent forum for communication on those issues. OCC also provided technical comments which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Comptroller of the Currency and interested congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or woodd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are acknowledged in appendix VIII.

Objectives, Scope, and Methodology

On January 13, 2004, the Treasury Department's Office of the Comptroller of the Currency (OCC), which supervises federally chartered "national" banks, issued two sets of final rules covering the preemption of state laws relating to the banking activities of national banks and their operating subsidiaries ("bank activities rule") and OCC's exclusive supervisory authority over those institutions ("visitorial powers rule"). The rules drew strong opposition from a number of state legislators, attorneys general, consumer group representatives, and Members of Congress, who opposed the rules because of what they viewed as potentially adverse effects on consumer protection and the dual banking system.
In this report, we examine (1) how the preemption rules clarify the applicability of state laws to national banks; (2) how the rules have affected state-level consumer protection efforts; (3) the rules' potential effects on banks' decisions to seek the federal, versus state, charters; and (4) measures that could address states' concerns regarding consumer protection. Additionally, this report provides information on how OCC and other federal regulators, as well as state bank regulators, are funded.

Identification of Key Issues and Legal Review of Preemption Standard

Many of the arguments supporting and opposing OCC's preemption rules related to legal opinions and policy objectives. Therefore, to identify key concerns and questions about the preemption rules, we conducted a content analysis of comment letters that OCC received in response to its rulemaking. In addition to the analysis we conducted for our previous report on OCC's rulemaking process, we reviewed the 55 comment letters OCC received on its visitorial powers proposal and conducted a content analysis on 30 of the letters. To analyze the comments, we first separated the 30 letters into two categories: letters that supported the visitorial powers rule (17) and letters that opposed the rule (13). We then randomly selected a test set of letters from each category and established an initial set of codes that would further characterize comments within each category. We applied these codes to the test set of letters and made refinements to establish the final codes for each category. A pair of trained coders independently coded the remaining sets of letters and resolved discrepancies to 100 percent agreement. The coders regularly performed reliability checks throughout the coding process and recorded results in an electronic data file, in which the data were verified for accuracy.
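The intercoder-reliability check described above can be sketched as a simple percent-agreement calculation between two independent coders. The letters, code names, and values below are hypothetical illustrations, not GAO's actual coding data.

```python
def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same set of letters")
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Hypothetical codes independently assigned by two coders to ten letters;
# discrepancies (here, letter 3) would be discussed and resolved to
# 100 percent agreement, as the methodology describes.
coder_a = ["support", "oppose", "oppose", "support", "support",
           "oppose", "support", "oppose", "support", "support"]
coder_b = ["support", "oppose", "support", "support", "support",
           "oppose", "support", "oppose", "support", "support"]

agreement = percent_agreement(coder_a, coder_b)
print(f"agreement: {agreement:.0%}")  # prints "agreement: 90%"
```

In practice such checks are run periodically during coding, and any items below an acceptable agreement threshold are recoded after the coders reconcile their code definitions.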
Descriptive statistics for the codes were computed by an analyst using SPSS statistical software and a second, independent analyst reviewed the data analysis. To further identify stakeholder concerns, we also reviewed three congressional hearings on the preemption rules. We grouped statements from these hearings under the following categories: issue, implication, or suggestion. The “issue” category included statements that described the nature of the commenters’ concerns. The “implication” category included statements that explained the perceived effect the rules would have relative to a specific issue or concern. The “suggestion” category included statements that described ways certain issues or concerns could be resolved, as well as measures that could facilitate state and federal authorities working together to protect consumers. We also categorized the statements by source and whether the statements were made in support of or opposition to the rules. Finally, to obtain more in-depth information on the issues identified by stakeholders, we conducted site visits or phone interviews with officials and representatives of state attorneys general offices, state banking departments, consumer groups, state bankers associations, and national and state banks in six states (California, Georgia, New York, North Carolina, Idaho, and Iowa). We judgmentally selected the states based on the following characteristics: state officials’ interest in the issue; location of noteworthy federally or state-chartered banks (that is, large banks based on asset size or banks that experienced a recent charter conversion); notable consumer group presence or consumer protection laws; and geographic dispersion. We gauged state officials’ interest in the issue and identified state contacts by reviewing congressional hearings and reviewing comment letters on the proposed rules. 
We also solicited appropriate state contacts from officials we interviewed at organizations such as the National Association of Attorneys General (NAAG), the Conference of State Bank Supervisors (CSBS), and several consumer groups. In Washington, D.C., we interviewed the national associations comprising state attorneys general and state bank regulators; representatives of national consumer groups; and officials at OCC and other federal bank regulatory agencies, including the Board of Governors of the Federal Reserve System (FRB) and the Federal Deposit Insurance Corporation (FDIC). With the information obtained from the content analysis, review of congressional hearings, and the site visits and interviews, we conducted a legal review of the preemption rules, past OCC preemption determinations, relevant case law, and relevant federal and state regulations to determine how OCC clarified the applicability of state laws to national banks.

Effects of Preemption Rules on State Consumer Protection Efforts and the Dual Banking System

In the discussions with officials noted above, we solicited their views on the effects and potential effects of the rules on consumer protection and asked them how the preemption rules have affected the dual banking system (for example, charter choice and the distribution of assets among the national and state banks). To obtain an industrywide view of how bank assets are divided among state-chartered and federally chartered banks, we obtained data from FDIC's online database, Statistics on Depository Institutions, and its online version of Historical Statistics on Banking. We extracted data on the number and asset sizes of all banks from 1990 through 2004. To assess the extent to which banks have changed between the federal charter and state charters from 1990 through 2004, we collected and analyzed data from OCC and FRB on the annual number of charter changes and asset sizes of banks experiencing charter changes during this period.
According to agency officials, OCC data came from its Corporate Applications Information System and FRB data came from the National Information Center database. To determine the total number of conversions to the federal charter each year, we used OCC data to sum the total number of conversions that occurred in each year from 1990 through 2004 for each type of financial institution as listed in the data. We also summed the corresponding assets for each type of entity. In order to find the total number of conversions out of the federal charter, we separated charter terminations resulting from conversions from charter terminations listed as occurring for other reasons. We then summed the total number of all charter terminations due to conversions out of the federal charter and their total corresponding assets each year. For charter additions to the federal charter resulting from mergers, we used OCC data to sum the total number of additions that occurred in each year from 1990 through 2004 for each type of financial institution (as listed in the data) involved in the merger. Using FRB data, we summed the total assets of banks that changed to the federal charter from state charters as a result of mergers in each year. For deletions from the federal charter resulting from mergers, we first separated charter terminations that OCC categorized as resulting from mergers from charter terminations listed as occurring for other reasons. We then, using OCC data, summed the total number of all charter terminations attributed to mergers. Using FRB data, we summed the total assets of banks that changed from the federal charter to state charters as a result of mergers in each year. To determine how bank chartering decisions affected OCC’s budget, we summarized data provided by OCC on assessments paid by institutions that converted charters between 1999 and 2004 to determine how choice of charter and fees assessed from each type of charter affected OCC’s total revenue. 
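The yearly aggregation just described—summing conversions into and out of the federal charter, and their corresponding assets, while excluding terminations that occurred for other reasons—can be sketched as follows. The record layout and dollar figures are hypothetical assumptions for illustration; GAO's actual OCC and FRB data differ.

```python
from collections import defaultdict

# Hypothetical charter-change records:
# (year, change type, detail, assets in $ millions)
# For "conversion_in" the detail is the converting institution type;
# for "termination" it is the reason the federal charter ended.
records = [
    (2003, "conversion_in", "state commercial bank", 1200),
    (2003, "conversion_in", "state savings bank",     300),
    (2003, "termination",   "conversion_out",         450),
    (2004, "conversion_in", "state commercial bank", 9800),
    (2004, "termination",   "other",                  120),  # e.g., liquidation
]

counts = defaultdict(int)
assets = defaultdict(int)
for year, change, detail, amount in records:
    # Separate conversions out of the federal charter from terminations
    # listed as occurring for other reasons, per the methodology above.
    if change == "conversion_in":
        key = (year, "into federal charter")
    elif change == "termination" and detail == "conversion_out":
        key = (year, "out of federal charter")
    else:
        continue  # terminations for other reasons are excluded
    counts[key] += 1
    assets[key] += amount

# 2003: two conversions in, totaling $1,500 million
print(counts[(2003, "into federal charter")],
      assets[(2003, "into federal charter")])
```

The same pattern, keyed additionally by institution type, yields the per-type totals the report describes.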
We also collected data from certain states and applied their respective assessment formulas to analyze the effects of chartering decisions on the state regulators' budgets. To describe how OCC and state banking departments are funded, we interviewed OCC officials and reviewed agency annual reports, past GAO reports, and CSBS' Profile of State-Chartered Banking. We also interviewed federal and state regulators to understand their funding mechanisms.

Measures to Address States' Concerns Regarding Consumer Protection

As noted previously, we conducted site visits and reviews of congressional hearings to obtain information on ways state and federal authorities could work together to protect consumers. During our site visits, we asked officials and representatives of state attorneys general offices, state banking departments, consumer groups, and state bankers associations to identify measures that would facilitate state and federal authorities working together to protect consumers. When measures were identified, we asked follow-up questions to determine perceived advantages and disadvantages of the measure and challenges to implementing the measure. We then obtained and reviewed relevant information, such as statutes, judicial opinions, and related documents.

Overall Data Reliability

We assessed the reliability of all data used in this report in conformance with generally accepted government auditing standards. To assess the reliability of the data on bank charters, assets, and assessments, we (1) interviewed OCC, FRB, and FDIC agency officials who are knowledgeable about the data; (2) reviewed information about the data and the systems that produced them; and (3) for certain data, reviewed documentation provided by agency officials on the electronic criteria used to extract data used in this report.
For OCC data on the yearly number and assets of banks experiencing charter changes between the federal and state charters, we performed some basic reasonableness checks of the data against FRB data and data reported in a research study by an economist at OCC. We found that the data differed among these three data sources. We identified discrepancies and discussed these with agency officials. We also found that OCC data on assets of banks that changed charters as a result of mergers were very different from both FRB data and data in the research study. Furthermore, according to OCC officials, asset data based on call report information were considered more reasonable for the purposes of our report. Therefore, we did not use OCC data on assets of banks that changed charters as a result of mergers. Instead, we decided to use FRB data because they were more reasonable in comparison to those in the research study and because they were based entirely on call report information. Although OCC data on assets of banks that changed charters as a result of conversions were not based on call report information, we decided to use those data because they were reasonable in comparison with those reported in the research study. After reviewing possible limitations in OCC's Corporate Applications Information System, we determined that all data provided, with the exception of OCC assets data noted above, were sufficiently reliable for the purposes of this report. We conducted our work in California, Georgia, New York, North Carolina, Idaho, Iowa, and Washington, D.C., from August 2004 through March 2006 in accordance with generally accepted government auditing standards.
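A basic reasonableness check of the kind described above can be sketched as flagging the years in which two sources' totals diverge beyond a tolerance, so the discrepancies can be raised with agency officials. The yearly figures and the 10 percent tolerance are hypothetical assumptions; GAO compared actual OCC, FRB, and research-study data.

```python
def flag_discrepancies(source_a, source_b, tolerance=0.10):
    """Return years where the two sources' asset totals differ by more
    than `tolerance`, expressed as a fraction of source_a's figure."""
    flagged = []
    for year in sorted(set(source_a) & set(source_b)):
        a, b = source_a[year], source_b[year]
        if a and abs(a - b) / a > tolerance:
            flagged.append(year)
    return flagged

# Hypothetical merger-related asset totals ($ millions) by year.
occ_data = {2001: 5200, 2002: 4100, 2003: 6000}
frb_data = {2001: 5100, 2002: 7900, 2003: 6050}

print(flag_discrepancies(occ_data, frb_data))  # → [2002]
```

Years flagged by such a check are the ones warranting follow-up with the agencies, as described in the paragraph above.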
Legal Arguments Regarding the Preemption Rules

In addition to expressing uncertainty about the applicability of state consumer protection laws to national banks, some opponents of the preemption rules disagreed with the Office of the Comptroller of the Currency's (OCC) legal interpretations of the National Bank Act in support of the rules. They asserted that the effects of the preemption rules will remain unclear until these legal arguments are resolved. As discussed below, legal challenges to the preemption rules consistently have been rejected by federal courts.

OCC's Interpretation of the Preemption Standard

Many critics of the bank activities rule disagreed with OCC's articulation of the standard for federal preemption, asserting that the agency misinterpreted controlling Supreme Court precedents as well as the regulatory scheme Congress has established for national banks. The regulations contained in the bank activities rule provide that, except where made applicable by federal law, state laws that "obstruct, impair or condition" a national bank's ability to fully exercise its federally authorized powers do not apply to national banks. Opponents of the bank activities rule asserted that this standard misstates the test for preemption under the National Bank Act. Preemption analysis begins with the Supremacy Clause of the U.S. Constitution, which provides:

This Constitution, and the laws of the United States which shall be made in pursuance thereof; and all treaties made, or which shall be made, under the authority of the United States, shall be the supreme law of the land; and the judges in every state shall be bound thereby, anything in the Constitution or laws of any State to the contrary notwithstanding.

Under the Supremacy Clause, state law is preempted by federal law when Congress intends preemption to occur. Preemption may be either express—where Congress specifically states in a statute that the statute preempts state law—or implied in a statute's structure and purpose.
Implied preemption occurs through either “field preemption” or “conflict preemption.” Field preemption occurs when Congress (1) has established a scheme of federal regulation so pervasive that there is no room left for states to supplement it or (2) has enacted a statute that touches a field in which the federal interest is so dominant that the federal system will be assumed to preclude enforcement of state laws on the same subject. In contrast, conflict preemption occurs when a state law actually conflicts with federal law. To determine whether a conflict exists, courts consider whether compliance with both federal and state law is a physical impossibility or whether the state law stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress. Despite these separate analytical approaches, the two categories are not always distinct in practice; there can be substantial overlap between them, and courts often use a similar analysis to address field and conflict preemption. Even though field preemption and conflict preemption are not mutually exclusive concepts, the Supreme Court and federal courts traditionally have applied the conflict analysis to determine preemption questions arising under the National Bank Act. Supreme Court and other federal court cases addressing preemption under the act have been decided on the basis of whether a conflict exists between the federal law and a state law. It is well settled that, with respect to national banks, the National Bank Act preempts a state law that stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress. Critics of the bank activities rule asserted that (1) the controlling Supreme Court precedent for finding this type of conflict preemption under the National Bank Act is set forth in the Supreme Court’s opinion in Barnett Bank of Marion County v. 
Nelson and (2) the Barnett Bank decision sets a standard for preemption that is stricter than the one applied by OCC, so that under that standard fewer state laws would be preempted. In Barnett, the Supreme Court stated:

In defining the pre-emptive scope of statutes and regulations granting a power to national banks, these cases take the view that normally Congress would not want States to forbid, or to impair significantly, the exercise of a power that Congress explicitly granted. To say this is not to deprive States of the power to regulate national banks, where (unlike here) doing so does not prevent or significantly interfere with the national bank’s exercise of its powers.

Several critics of the bank activities rule interpret this passage to mean that state law applies to national banks if the law does not “prevent or significantly interfere with” the banks’ ability to engage in activities authorized by the National Bank Act. They said that, in the Barnett decision, the Supreme Court clarified its earlier articulations of conflict preemption under the National Bank Act and that this standard tolerates state regulation of national banks to a greater extent than OCC’s “obstruct, impair or condition” test. According to this argument, state law governs a national bank’s exercise of its federally granted powers unless applying the law would at least significantly interfere with the bank’s ability to engage in banking. OCC interprets the Barnett language to be one of many ways in which the Supreme Court has articulated the standard for preemption under the National Bank Act. In the preamble accompanying publication of the final bank activities rule, OCC explained that its articulation of the standard does not differ in substance from the language used in Barnett or any other Supreme Court test for preemption under the National Bank Act, stating that “[t]he variety of formulations quoted by the Court, . . . 
defeats any suggestion that any one phrase constitutes the exclusive standard for preemption.” According to some of the sources we consulted, primarily state regulators and consumer groups, under the Barnett decision the application of state laws to national bank activities can be consistent with the National Bank Act. Referring to the rule that state law is preempted when applying it would create “an obstacle to the accomplishment of the full purposes and objectives of Congress,” one legal commentator asserted that since at least the early twentieth century it has been an objective of Congress to provide for state regulation of banking activities regardless of whether a bank has a federal or state charter. The commentator described this objective as a congressionally established “competitive equilibrium” within the U.S. banking system. According to this perspective, allowing a state law to govern a national bank’s exercise of its federally granted powers would be consistent with the purposes and objectives of Congress. In support of this position, several state officials and consumer groups we interviewed referred to a provision of the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994 (Interstate Banking Act). In that legislation, Congress specified that host state laws regarding community reinvestment, consumer protection, fair lending, and the establishment of intrastate branches apply to branches of out-of-state national banks except when, among other things, federal law preempts their application to a national bank. In the conference report accompanying the legislation, the conferees stated that preemption determinations made by the federal banking agencies through opinion letters and interpretive regulations “play an important role in maintaining the balance of Federal and State law under the dual banking system.” The conference report did not discuss how preemption determinations affect the dual banking system. 
Instead, the discussion referred to state interests in protecting individuals, businesses and communities in their dealings with depository institutions. However, some parties we interviewed maintained that the Interstate Banking Act and its legislative history show that Congress intends states to have a role in regulating national bank activities and that this relationship is a feature of the dual banking system. Several individuals and industry participants we interviewed said that, while Congress may not have prohibited some state regulation of national banks, Congress consistently has endorsed the concept that state laws do not apply to national banking when those laws are inconsistent with federal law. This, they said, is because Congress established the national bank system to be separate from state banking systems. According to this view, the concept of a dual banking system does not contemplate state regulation of national banks, but recognizes that states have authority to regulate the banks they charter. Those sharing this perspective, including OCC representatives, referred to various authorities to demonstrate that Congress established the national bank system to be independent of state regulation, except to the extent provided by federal law. They relied primarily on Supreme Court decisions that referred to the National Bank Act and its legislative history as an expression of Congress’ intent to have a national charter separate from state regulation. For example, in one of those decisions the Supreme Court indicated that the National Bank Act does not contemplate state regulation as a component of the national bank regulatory system. Describing national banks as “federal instrumentalities,” the Supreme Court concluded that a state law was preempted under the National Bank Act because, among other things, Congress had not expressed its intent that a federally authorized national bank activity be subject to local restrictions. 
In that decision, the Supreme Court held that the National Bank Act preempted a state law forbidding national banks from using “saving” or “savings” in their names or advertising. The Supreme Court said that it found “no indication that Congress intended to make this phase of national banking subject to local restrictions, as it has done by express language in several other instances.” Several individuals, including OCC representatives, also said that even if Congress, in the 1994 Interstate Banking Act, contemplated the potential application of state laws to national bank activities, Congress clearly did not intend state laws to apply if they conflict with federal law. They said that, although the Interstate Banking Act demonstrates Congress’ belief that states have an interest in how national banks conduct their activities in the four areas specified in the act, neither the act nor its legislative history suggests that state laws in those four areas override federal preemption. In their view, Congress’ recognition that state laws are subject to preemption signifies that Congress did not intend the application of state laws covering those four areas to be a purpose or objective of national bank regulation when such state laws conflict with federal law. The general rule, as courts have stated it, is that state law applies to national banks unless that law conflicts with federal law, unduly burdens the operations of national banks, or interferes with the objectives of the national banking system. In addition to disagreement over the regulatory objectives of the National Bank Act, we also encountered differences of opinion over the scope of the preemption rules. Some individuals we interviewed asserted that, despite OCC’s invocation of the conflict preemption standard, in fact OCC has preempted the field of national bank regulation. 
That is, under OCC’s test, there is no room for state law in the regulation of the business activities of national banks because those activities are solely a matter of federal law. OCC, on the other hand, maintains that it applies the conflict preemption standard with the objective of enabling national banks to operate to the full extent of their powers under federal law “without interference from inconsistent state laws.” As discussed previously, courts sometimes apply field and conflict preemption analyses interchangeably. To date, however, federal courts have recognized that OCC preemption determinations are based on an analysis of whether a conflict exists between federal and state law. Courts addressing the preemption regulations have not questioned OCC’s determination that conflict between state and federal laws is the predicate for the preemption rules.

Breadth of the Preemption Lists in the Bank Activities Rule

In the bank activities rule, OCC explained that state laws concerning the subjects listed as preempted already had been preempted, either by OCC administrative determinations or by federal court decisions or precedents applicable to federal thrifts. However, some of the subjects listed as preempted have not been specifically addressed in precedents applying the National Bank Act. Rather, information from OCC shows that some subjects of state law were included on the preemption lists because they had been preempted by the Office of Thrift Supervision (OTS). In issuing the bank activities rule, OCC concluded that, with respect to the applicability of state law, Congress used the same scheme for both national banks and federally chartered thrifts under the Home Owners’ Loan Act (HOLA). Opponents of the bank activities rule criticized this approach. They questioned the applicability of OTS precedents to preemption under the National Bank Act, asserting that Congress did not intend HOLA and the National Bank Act to have identical preemptive effects. 
In addition, several state officials and consumer groups said that the terms OCC used to describe preempted subjects of state law are too broad. Those questioning OCC’s reliance on OTS regulations asserted that preemption under the National Bank Act is not as expansive as it is under HOLA, and thus OCC wrongly concluded that state laws preempted under HOLA also are preempted under the National Bank Act. They maintained that the Supreme Court’s description of OTS’ preemptive authority recognizes the broad preemptive impact of HOLA, which some federal courts have characterized as field preemption. Critics of the bank activities rule said that, because preemption under the National Bank Act is conflict-based, it calls for an analysis of whether a state law covering an OTS-preempted subject conflicts with the National Bank Act. They maintained that OTS, using a field preemption analysis, would not have considered whether a conflict exists. They asserted that the National Bank Act, unlike HOLA, contemplates that states have a role in regulating activities of national banks, and particularly those of national bank operating subsidiaries, which typically are formed under state laws governing the establishment of business entities. According to OCC, for purposes of the bank activities rule, labeling preemption as either “field preemption” or “conflict based” is “largely immaterial to whether a state law is preempted under the National Bank Act.”

Other Aspects of the Preemption Rulemaking

State representatives, consumer groups, and others challenged the bank activities rule with respect to real estate lending. Referring to the grant of real estate lending powers in the National Bank Act and past versions of OCC’s real estate lending rule, these individuals asserted that in the bank activities rule OCC broadened the scope of preemption beyond what Congress intended. 
One argument was based on the provision in the National Bank Act that authorizes national banks to make real estate loans. The provision permits national banks to conduct real estate lending “subject to section 1828(o) of this title (12 U.S. Code) and such restrictions and requirements as the Comptroller of the Currency may prescribe by regulation or order.” Section 1828(o) requires the federal banking agencies to have uniform regulations prescribing standards for real estate loans. The standards include a requirement that lenders comply with “all real estate related laws and regulations.” Some individuals we interviewed said that this standard means that the same real estate lending laws must apply to all federally insured depository institutions and that, because state laws apply to one set of institutions—specifically state banks—those same laws apply to national banks. Opponents also argued that OCC, by broadening the scope of preemption for real estate lending, acted contrary to its previous determinations of limited preemption for this activity. Before the bank activities rule was issued, OCC’s regulations specifically preempted state law with respect to only five aspects of real estate lending. In interpretive letters describing the preemptive effect of the rule, OCC officials sometimes stated that its purpose was to preempt only five categories of state law restrictions on national bank real estate lending, and that “[i]t was not the intention of [the rule], however, to preempt all state regulation of real estate lending.” OCC stated that the rule clarified the “limited scope” of preemption with respect to real estate lending, thus “any state regulations outside of the five areas cited continue to apply to national banks, unless preempted by other regulation.” OCC maintained this position while applying the same preemption standard it applied in the bank activities rule. 
Critics of the bank activities rule asserted that OCC’s past statements were correctly based on the conclusion that the National Bank Act has a limited preemptive effect with respect to real estate lending and that OCC has not adequately justified its new, contrary interpretation. According to OCC, the substance of the bank activities rule is not new; it reiterates preemption determinations that had been made before the rule was promulgated. Therefore, the rule does not represent a change in OCC’s application of preemption principles with respect to real estate lending. Moreover, OCC revised the rule in 1995 to say that OCC would apply principles of federal preemption to state laws concerning aspects of national bank real estate lending not listed in the regulation. OCC had been following this approach in its interpretive letters on preemption since at least 1985. Some critics of the bank activities rule also challenged preemption with respect to deposit-taking, which is one of the four categories of bank activity set forth in the bank activities rule. They asserted that (1) deposits are personal property and deposit accounts are contracts between the depositor and the bank and (2) Congress did not intend the National Bank Act to supersede state property and contract laws. The bank activities rule provides that state laws on the subjects of contracts and the acquisition and transfer of property are not inconsistent with national bank powers and apply to national banks “to the extent that they only incidentally affect the exercise” of the bank’s powers. The disagreement with OCC’s preemption concerning deposit-taking focuses on whether a particular state law that could be preempted (because it relates to deposit-taking) might not be preempted (because it is a state contract or property law). 
We found one case in which a California state court held that a state law relating to deposit-taking was not preempted because, among other things, the court concluded it was a law governing contracts having only an incidental effect on the bank’s deposit-taking. However, in that decision, the court did not question OCC’s authority to preempt state laws applicable to bank deposits.

Applicability of Rules to National Bank Operating Subsidiaries

State officials and their representative groups disputed OCC’s assertion that the preemptive effects of the National Bank Act extend to national bank operating subsidiaries. They also disagreed with OCC’s assertion of exclusive supervisory and enforcement jurisdiction over national bank operating subsidiaries. These disagreements arise mainly from the contention that OCC has improperly interpreted the status of operating subsidiaries under the National Bank Act. Although a national bank operating subsidiary typically is formed under state business association laws, under OCC’s interpretation of the National Bank Act the entity exists only as a means through which national banks may conduct federally authorized banking activities. This is because OCC permits national banks to have operating subsidiaries on the theory that conducting business through an operating subsidiary is an activity permitted by the National Bank Act. According to OCC, because operating subsidiaries exist and are utilized as a national bank activity, they may not be used by national banks to engage in activities not authorized by the National Bank Act and, correspondingly, are subject to the same laws, terms, and conditions that govern national banks. 
In several recent cases, federal courts have upheld OCC’s rationale for permitting national banks to use operating subsidiaries and, consequently, have held that operating subsidiaries are subject to the same laws and restrictions that apply to national banks; one of those decisions is under review by the Supreme Court. See, e.g., Wells Fargo Bank, N.A. v. Boutris, 419 F.3d 949 (9th Cir. 2005); Wachovia Bank v. Watters, 432 F.3d 556 (6th Cir. 2005); OCC v. Spitzer, 396 F. Supp. 2d 383 (S.D.N.Y. 2005); National City Bank v. Turnbaugh, 367 F. Supp. 2d 805 (D. Md. 2005). Opponents of OCC’s position believe that the National Bank Act does not treat national bank operating subsidiaries the same as national banks, regardless of OCC’s rationale for their existence. They maintain that national bank operating subsidiaries are legally independent entities, not banks, and as such they are subject to state laws and supervision by state agencies. OCC has permitted national bank operating subsidiaries since at least 1966, during which time Congress has not enacted legislation to override OCC’s position.

Disagreement with OCC’s Interpretation of Its Visitorial Powers

In the visitorial powers rulemaking, OCC clarified its position regarding its supervisory authority over national banks and their operating subsidiaries. As discussed in the body of this report, the agency amended its visitorial powers rule to clarify the terms of its exclusive visitorial power over national banks and their operating subsidiaries with respect to the content and conduct of their federally authorized activities. OCC also amended the rule to recognize the jurisdiction of functional regulators and articulate OCC’s interpretation of a part of the visitorial powers provision, 12 U.S.C. § 484, that makes national banks subject to the visitorial powers vested in courts of justice. 
During our work, we encountered disagreements with OCC’s assertion of exclusive supervisory authority over national bank operating subsidiaries and the agency’s view of the nature of the visitorial powers vested in courts of justice. Those disagreeing with the rule described it as an attempt by OCC to limit both state supervision of activities conducted by state-chartered entities and the ways in which states can rely on their courts to take legal action against operating subsidiaries. These disagreements raise complicated legal and policy questions, but based on our interviews and research, there does not appear to be significant uncertainty over OCC’s view of its visitorial powers as expressed in the visitorial powers regulation. As discussed above, in several recent cases, federal courts have upheld OCC’s conclusion that its visitorial powers confer exclusive supervisory jurisdiction with respect to the banking activities of national banks and their operating subsidiaries.

Bank Charter Changes from 1990 to 2004

From 1990 to 2004, More Banks Changed to the Federal Charter, but Most Changes Resulted from Mergers

From 1990 to 2004, changes to the federal charter outnumbered changes to a state charter. Figure 5 shows the total annual changes resulting from conversions and mergers between federal and state bank charters according to data from the Office of the Comptroller of the Currency (OCC). Of 3,163 charter changes for that period, 1,884 involved moving from state charters to the federal charter, and 1,279 involved moving from the federal to a state charter, a net increase of 605 to the federal charter. Annual changes between the two types of charters tended to be similar in number, with the exception of 1994–1999, when noticeably more state banks changed to the federal charter. 
According to industry observers and academics we interviewed, the greater number of changes to the federal charter in 1997 could be attributed to the easing of individual state restrictions on interstate banking and the passage of the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994, which removed remaining state restrictions on interstate banking. The majority of changes between the federal and state charters during this period resulted from mergers rather than conversions. Of the 3,163 charter changes in that period, 2,353 (or 74 percent) involved mergers. Further, 1,545 (82 percent) of the 1,884 changes from a state to the federal charter involved mergers. Changes from the federal to a state charter involved somewhat fewer mergers: 808 (63 percent) of 1,279 changes. Focusing only on conversions, we found that, over the entire period, there was a net increase in state-chartered banks. Figure 6 shows the number of annual changes resulting from conversions between the two types of charters from 1990 through 2004. There were a total of 339 conversions to the federal charter and a total of 471 conversions to state charters. Thus, there were 132 more conversions to state charters than to the federal charter. Looking only at mergers, we found the opposite—a net increase in federal charters. Figure 7 shows the number of annual changes resulting from mergers between the federal and state bank charters from 1990 through 2004. Over the entire period, there were a total of 1,545 mergers into the federal charter and 808 mergers into state charters. Thus, there were 737 more mergers into the federal charter than into state charters.

Recent Charter Changes Substantially Increased the National Bank Share of All Bank Assets

When banks change charters, the share of bank assets under the supervision of OCC and different state bank regulators also changes. 
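As a quick consistency check, the conversion and merger counts reported for 1990 through 2004 can be combined in a few lines of Python. This is only an illustrative sketch using the figures quoted in the text:

```python
# Charter-change counts for 1990-2004, as quoted in the text
to_federal = {"conversions": 339, "mergers": 1545}
to_state = {"conversions": 471, "mergers": 808}

# Net flow toward the federal charter, by type of change
net = {kind: to_federal[kind] - to_state[kind] for kind in to_federal}

total_to_federal = sum(to_federal.values())  # 1,884 changes to the federal charter
total_to_state = sum(to_state.values())      # 1,279 changes to state charters

print(net)                                # {'conversions': -132, 'mergers': 737}
print(total_to_federal - total_to_state)  # 605 net changes to the federal charter
```

The negative conversions figure and the positive mergers figure reproduce the report's observation that conversions alone favored state charters while mergers drove the overall net gain for the federal charter.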
Figure 8 shows the total assets of banks that changed charters annually from 1990 through 2004 according to data from OCC and the Board of Governors of the Federal Reserve System (FRB). From 1990 through 2003, the total assets of all banks that changed charters were less than $200 billion annually. However, in 2004 the total assets of all state-chartered banks that changed to the federal charter increased to about $789 billion, largely due to the charter changes of JP Morgan Chase Bank and HSBC Bank. Together, the assets of these two banks constituted about 96 percent (about $759 billion) of the total assets of banks that changed to the federal charter that year: about 82 percent for JP Morgan Chase Bank and 14 percent for HSBC Bank. Over the entire period, total assets that shifted to the federal charter amounted to about $1,574 billion, and total assets that shifted to state charters amounted to about $687 billion. Thus, about $887 billion more in assets shifted to the federal charter than to state charters. We also looked at the movement of assets depending on whether charter changes resulted from conversions or mergers. Figure 9 shows the assets of banks that converted annually between the federal and state charters from 1990 through 2004, according to data from OCC. From 1990 through 2003, about $55 billion more in assets shifted to state charters than to the federal charter. During that period, total assets of banks that converted charters remained below $100 billion annually. However, when 2004 figures are included, about $590 billion more in assets shifted to the federal charter than to state charters. This shift is largely due to the conversion of one formerly state-chartered bank (JP Morgan Chase Bank), which alone contributed 99 percent (about $649 billion) of all assets in 2004 of state banks that converted to the federal charter. The assets of JP Morgan Chase Bank represented almost 8 percent of all bank assets in that year. 
Similarly, figure 10 shows the assets of banks that experienced mergers between the federal and state charters annually from 1990 through 2004, according to data from FRB. Over that period, about $296 billion more in assets shifted to the federal charter than to state charters as a result of mergers.

The Annual Number and Assets of Banks That Changed between Federal and State Charters Were Small Relative to All Banks and All Bank Assets

From 1990 through 2004, the annual number and assets of banks that changed between the federal and state charters constituted a small percentage of all banks and all bank assets in those years. During that period, the annual number of changes between the federal and state bank charters was about 2 percent or less of all banks in those years. For example, the number of changes to the federal charter as a percentage of all banks was 2.4 percent in 1997, when there were 223 changes. The number of changes to the state charter as a percentage of all banks was 1.3 percent in 1993, when there were 139 changes. Figure 11 shows the total annual changes between the federal and state bank charters as a percentage of all banks in each year from 1990 through 2004. We found that the percentage of assets involved in charter changes was also small relative to all bank assets. Figure 12 shows the annual assets of banks from 1990 to 2004 that converted between the federal and state charters as a percentage of all bank assets. From 1990 to 2004, total assets of banks that converted from the federal charter to state charters were about 1.5 percent or less of all bank assets annually. For example, assets were highest in 1994 at about $59.6 billion, which was 1.49 percent of all bank assets that year. Similarly, from 1990 through 2003, the total annual assets of banks that converted from state charters to the federal charter were about 1 percent or less of all bank assets each year. 
For example, during this period assets were highest in 1997 at about $54 billion, which was about 1.07 percent of all bank assets that year. In 2004, however, assets for state-to-federal conversions reached their highest level since 1990 at about $653 billion, which was about 7.8 percent of all bank assets that year. Similarly, the annual charter changes and assets of banks involved in mergers were also a small percentage of all banks and all bank assets for those years. During the period, the number of mergers between the two types of charters was less than 2 percent of all banks per year. For example, as shown in figure 7, the highest number of mergers into state charters from the federal charter was 80 in 1998, which was 0.91 percent of all banks that year. The highest number of mergers into the federal charter from state charters was 158 in 1997, which was 1.73 percent of all banks that year. The annual assets of banks experiencing mergers between the two types of charters were less than 3 percent of all bank assets each year. For example, as shown in figure 10, assets for mergers into state charters from the federal charter were highest in 1996 at about $136.6 billion, which was about 2.98 percent of all bank assets that year. Assets for mergers into the federal charter from state charters were highest in 2004 at about $135.9 billion, which was about 1.62 percent of all bank assets that year.

How OCC is Funded

The Office of the Comptroller of the Currency (OCC) is funded primarily by the assessments and fees that it collects from the institutions it oversees. The amounts assessed for OCC oversight are based primarily on a bank’s asset size, but other factors are included in OCC’s assessment formula. Under the formula, the marginal assessment rate decreases as asset size increases. As a result, mergers and consolidations among banks result in a smaller assessment paid to OCC by the resulting bank. 
OCC Is Funded Primarily by the Assessments It Charges National Banks

As of fiscal year 2004, assessments made up almost all of OCC’s revenue—about 97 percent. As shown in figure 13, since 1999 assessments have constituted no less than 94 percent of OCC’s revenue. OCC also receives revenue from other sources: corporate fees banks pay primarily for licensing, investment income from gains on U.S. Treasury securities, income from the sale of OCC publications, and income from miscellaneous internal operations such as parking fees paid by OCC employees.

OCC’s Assessment Formula Is Based on Asset Size but Includes Other Factors

The assessment formula, changed in the mid-1970s from a flat rate per dollar of assets to its current regressive structure, determines how much each national bank must pay for OCC supervision. The relationships between bank size (assets) and assessments are shown in table 1. Every national bank falls into one of the 10 asset-size brackets denoted by columns A and B. The semiannual assessment is composed of two parts. The first part is a base amount, calculated by OCC in column C, which is computed on the assets of the bank, as reported on the bank’s Consolidated Report of Condition (or call report), up to the lower end point (column A) of the bracket in which the bank falls. The second part is calculated by the bank on its remaining assets in excess of column E; the excess is assessed at the marginal rate shown in column D. The total semiannual assessment is the amount in column C, plus the amount of the bank’s assets in excess of column E multiplied by the marginal rate in column D:

Assessment = C + [(Assets − E) × D]

OCC also levies a surcharge for banks that require increased supervisory resources, as reflected in the bank’s last OCC-assigned CAMELS rating. 
The CAMELS score is a numerical rating assigned by supervisors to reflect their assessment of the overall financial condition of a bank. The score takes on integer values ranging from 1 (best) to 5 (worst). Surcharges are calculated by multiplying the assessment, based on the institution's reported assets up to $20 billion, by 50 percent for a CAMELS 3-rated institution and 100 percent for 4- and 5-rated institutions. For example, a national bank with a 4 supervisory rating, $15 billion in assets, and no independent trust or credit card operations would be charged a standard assessment of $968,900 plus a 100 percent surcharge of $968,900, for a total assessment of $1,937,800. Since January 1, 2003, OCC special examinations and investigations have been subject to an additional charge of $110 per hour. Each year OCC issues a notice with updates on changes and adjustments, if any, to the assessment formula. It may adjust the marginal rates in column D and the amounts in column C; most adjustments are based on the percentage change in the level of prices, as measured by changes in the Gross Domestic Product Implicit Price Deflator (GDPIPD). GDPIPD is sensitive to changes in inflation, and OCC has discretion to adjust marginal rates by amounts less than the percentage change in GDPIPD for that time period. For example, the GDPIPD adjustment was 1.5 percent in 2004 and 1.1 percent in 2003. OCC also has the authority to reduce the semiannual assessment for banks other than the largest national bank controlled by a company; these nonlead banks may receive a lesser assessment. For example, in the 2004 Notice of Comptroller of the Currency Fees, OCC reduced the assessment of nonlead national banks by 12 percent. 
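The surcharge arithmetic in the example above can be reproduced directly. The 50 and 100 percent multipliers and the $968,900 standard assessment come from the text; the function name and structure are ours.

```python
def total_with_surcharge(base_assessment, camels_rating):
    """Apply the CAMELS surcharge to a standard assessment:
    50 percent for a 3-rated institution, 100 percent for a
    4- or 5-rated institution, and no surcharge for 1 or 2.
    (base_assessment is computed on assets up to $20 billion.)"""
    surcharge_pct = {3: 0.50, 4: 1.00, 5: 1.00}.get(camels_rating, 0.0)
    return base_assessment * (1 + surcharge_pct)

# The $15 billion, 4-rated national bank from the text:
print(total_with_surcharge(968_900, 4))  # 1937800.0
```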
The Price of OCC Supervision Decreases with Asset Size Because the multipliers used to compute assessments beyond the base assessment decrease as asset size increases (see column D in table 1), the price of supervision is less per million dollars in assets for larger banks than for smaller banks. To illustrate this point, we calculated the price per million dollars in assets for the largest possible total asset size within each assessment range (see table 2). For example, table 2 shows that a national bank with about $2 million in total assets would pay about $2,500 per million dollars of assets for supervision, while the price of supervision for a national bank of about $2 billion is less than $100 per million dollars of assets. Mergers and Consolidations Result in Less Revenue for OCC OCC's assessment formula prices supervision for merged national banks at a rate less than that of individual national banks with equivalent total assets. In a merger between two national banks, bank A and bank B, the merged bank C may have total assets equal to those of bank A plus bank B, but the assessment for bank C could be less than the sum of the assessments of bank A and bank B. To illustrate this point, we selected 10 merger transactions and applied OCC's assessment formula. In all cases, OCC received less revenue in assessments after the merger occurred, compared with the sum of the individual assessments prior to the merger. Table 3 shows the asset amounts of the banks in one of our examples and the effect of OCC's formula. In this example, the merger decreases OCC's assessment revenue by about $76,500. An OCC official acknowledged that the regressive nature of its assessment formula could reduce the assessment paid by merged banks compared with individual bank assessments prior to a merger. 
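Why a regressive marginal-rate schedule makes a merged bank cheaper to assess can be shown with a toy two-bracket schedule. The rates and asset figures below are illustrative only, not OCC's actual schedule or the banks in table 3.

```python
# Toy regressive schedule: the first $1 billion of assets is assessed
# at $100 per $1 million, and assets above $1 billion at $40 per
# $1 million (rates are illustrative, not OCC's).
def assessment(assets_millions):
    if assets_millions <= 1_000:
        return assets_millions * 100
    return 1_000 * 100 + (assets_millions - 1_000) * 40

bank_a, bank_b = 800, 900          # assets in millions of dollars
separate = assessment(bank_a) + assessment(bank_b)
merged = assessment(bank_a + bank_b)
print(separate, merged)  # 170000 128000
```

Separately, both banks' assets fall entirely in the high-rate bracket; merged, $700 million of the combined assets is assessed at the lower marginal rate, so the regulator collects less for the same total assets.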
However, the official stated that costs associated with supervising merged banks were dependent on specific characteristics of the merged bank. For example, if a national bank located in California merged with a national bank located in New York, OCC might need to continue to maintain bicoastal bank examination teams. In this case, assessments would decrease, but costs would remain the same. In most cases, however, mergers of roughly equal-sized banks would, over time, realize savings and other synergies that do not require extra resources. For example, certain fixed costs could be spread across the merged bank; thus, as the bank's assets grow larger, average costs generally decrease. How Selected Federal Financial Industry Regulators Are Funded FDIC is to contribute to the stability of and public confidence in the nation's financial system by insuring deposits, examining and supervising financial institutions, and managing receiverships. In cooperation with state bank regulators, FDIC regulates federally insured, state-chartered banks that are not members of the Federal Reserve and federally insured state savings banks. FDIC funds its operations by premiums that banks and thrifts pay for deposit insurance and earnings on its investments in U.S. Treasury securities. FDIC has permanent budget authority and, therefore, is not subject to the congressional appropriations process. As the nation's independent, decentralized central bank, the FRB is responsible for conducting monetary policy, maintaining the stability of the financial markets, and supporting a stable economy. The FRB supervises and regulates bank holding companies and, in cooperation with state bank regulators, examines and supervises state-chartered banks that are FRB members. FRB funds its operations primarily from the earnings on its investments in Treasury securities. FRB has permanent budget authority and, therefore, is not subject to the congressional appropriations process. 
OTS’s mission is to effectively and efficiently supervise thrift institutions to maintain their safety and soundness in a manner that encourages a competitive industry. OTS examines and supervises all federally chartered and insured thrifts and thrift holding companies. In cooperation with state regulators, OTS examines and supervises all state-chartered, federally insured thrifts. OTS funds its operations primarily from assessments on the federal financial institutions it regulates. It has permanent budget authority and, therefore, is not subject to the congressional appropriations process. NCUA’s mission is to foster the safety and soundness of federally insured credit unions and to better enable the credit union community to extend credit. It charters, regulates, and insures federally chartered credit unions. It also insures the majority of state-chartered credit unions. In cooperation with state regulators, it supervises federally insured, state- chartered credit unions. NCUA funds its operations primarily from assessments on the federal credit unions it regulates. It has permanent budget authority and, therefore, is not subject to the congressional appropriations process. SEC’s mission is to (1) promote full and fair disclosure; (2) prevent and suppress fraud; (3) supervise and regulate the securities markets; and (4) regulate and oversee investment companies, investment advisers, and public utility holding companies. SEC is funded by the fees it collects from the entities it regulates subject to limits set by the congressional authorizations and appropriations processes. Excess fees are put into an offset fund and may be administered by Congress for other purposes. SEC is subject to the Office of Management and Budget process. 
OFHEO is to ensure the capital adequacy and financial safety and soundness of Fannie Mae and Freddie Mac, two government-sponsored enterprises, privately owned and operated corporations established by Congress to enhance the availability of mortgage credit. OFHEO examines and regulates the two enterprises. OFHEO is funded through assessments paid by Fannie Mae and Freddie Mac and subject to limits set by the congressional authorizations and appropriations process. OFHEO must deposit collected assessments into the Oversight Fund, an account held in the Treasury. FHFB is to ensure the safety and soundness of the Federal Home Loan Bank System, a government-sponsored enterprise whose mission is to support housing finance, and ensure that the system carries out its housing finance mission. FHFB examines and regulates the 12 Federal Home Loan Banks. FHFB is supported by assessments from the 12 Federal Home Loan Banks. No tax dollars or other appropriations support the operations of the FHFB or the Federal Home Loan Bank System. FCA is an independent federal regulatory agency responsible for supervising, regulating, and examining institutions operating under the Farm Credit Act of 1971; the institutions that it regulates make up a system that is designed to provide a dependable and affordable source of credit and related services to the agriculture industry. FCA’s expenses are paid through assessments on the institutions it examines and regulates. No federally appropriated funds are involved. Information on Funding of States’ Bank Regulators Information gathered by the Conference of State Bank Supervisors (CSBS) indicates that most state bank regulators levy assessments to fund their operations. Forty-three states used some type of asset-based assessment formula to collect funds from banks and/or other entities they regulated, according to the CSBS data for 2004-2005. 
Of the other seven states, two based their assessments on department costs, and two levied assessments only for shortfalls in the departments' budgets. The remaining three states did not report information. Most state bank regulators (40 of 49 that reported such information) indicated that their legislatures determined how those funds would be allocated, appropriated, or spent. Table 4 provides more detailed information on the funding arrangements for six state bank regulators that we interviewed. Comments from the Office of the Comptroller of the Currency GAO Contact and Staff Acknowledgments In addition to the individual named above, Katie Harris, Assistant Director; Nancy Eibeck; Nicole Gore; Jamila Jones; Landis Lindsey; Alison Martin; James McDermott; Kristeen McLain; Suen-Yi Meng; Marc Molino; Barbara Roesmann; Paul Thompson; James Vitarello; and Mijo Vodopic made key contributions to this report.
In January 2004, the Office of the Comptroller of the Currency (OCC)--the federal supervisor of federally chartered or "national" banks--issued two final rules referred to jointly as the preemption rules. The "bank activities" rule addressed the applicability of state laws to national banking activities, while the "visitorial powers" rule set forth OCC's view of its authority to inspect, examine, supervise, and regulate national banks and their operating subsidiaries. The rules raised concerns among some state officials and consumer advocates. GAO examined (1) how the rules clarify the applicability of state laws to national banks, (2) how the rules have affected state-level consumer protection efforts, (3) the rules' potential effects on banks' choices of a federal or state charter, and (4) measures that could address states' concerns regarding consumer protection. In the bank activities rule, OCC sought to clarify the applicability of state laws by relating them to certain categories, or subjects, of activity conducted by national banks and their operating subsidiaries. However, the rule does not fully resolve uncertainties about the applicability of state consumer protection laws, particularly those aimed at preventing unfair and deceptive acts and practices. OCC has indicated that, even under the standard for preemption set forth in the rules, state consumer protection laws can apply; for example, OCC has said that state consumer protection laws, and specifically fair lending laws, may apply to national banks and their operating subsidiaries. State officials reacted differently to the rules' effect on relationships with national banks. In the views of most officials GAO contacted, the preemption rules have had the effects of limiting the actions states can take to resolve consumer issues, as well as adversely changing the way national banks respond to consumer complaints and inquiries from state officials. 
OCC has issued guidance to national banks and proposed an agreement with the states designed to facilitate the resolution of, and sharing information about, individual consumer complaints. Other state officials said that they still have good working relationships with national banks and their operating subsidiaries, and some national bank officials stated that they view cooperation with state attorneys general as good business practice. Because many factors, including the size and complexity of banking operations and an institution's business needs, can affect a bank's choice of a federal or state charter, it is difficult to isolate the effects, if any, of the preemption rules. GAO's analysis of OCC and other data shows that, from 1990 to 2004, less than 2 percent of the nation's thousands of banks changed between the federal and state charters. Because OCC and state regulators are funded by fees paid by entities they supervise, however, the shift of a large bank can affect their budgets. In response to the perceived disadvantages of the state charter, some states have reported actions to address potential charter changes by their state banks. Measures that could address states' concerns about protecting consumers include providing for some state jurisdiction over operating subsidiaries, establishing a consensus-based national consumer protection lending standard, and further clarifying the applicability of state consumer protection laws. The first two measures present complex legal and policy issues, as well as implementation challenges. However, an OCC initiative to clarify the rules' applicability would be consistent with one of OCC's strategic goals and could assist both the states and the OCC in their consumer protection efforts--for example, by providing a means to systematically share relevant information on local conditions.
Some Instances of Noncompliance with Medical Care Standards Occurred At the time of our visits, we observed instances of noncompliance with ICE's medical care standards at 3 of the 23 facilities we visited. However, these instances did not show a pervasive or persistent pattern of noncompliance across the facilities like the one we identified with the telephone system. Detention facilities that we visited ranged from those with small clinics staffed by contractors to facilities with on-site medical staff, diagnostic equipment such as X-ray machines, and dental equipment. Medical service providers include general medical, dental, and mental health care providers that are licensed by state and local authorities. Some medical services are provided by the U.S. Public Health Service (PHS), while other medical service providers may work on a contractual basis. At the San Diego Correctional Facility in California, an adult detention facility, ICE reviewers that we accompanied cited PHS staff for failing to administer the mandatory 14-day physical exam to approximately 260 detainees. PHS staff said the problem at San Diego was due to inadequate training on the medical records system and technical errors in the records system. At the Casa de San Juan Family Shelter in California, we found that the facility staff did not administer medical screenings immediately upon admission, as required by ICE medical care standards. At the Cowlitz County Juvenile Detention Center in Washington state, we found that no medical screening was performed at admission and first aid kits were not available, as required. Officials at some facilities told us that meeting the specialized medical and mental health needs of detainees can be challenging. Some also cited difficulties they had experienced in obtaining ICE approval for outside nonroutine medical and mental health care as presenting problems in caring for detainees. 
On the other hand, we observed instances where detainees were receiving specialized medical care at the facilities we visited. For example, at the Krome facility in Florida we observed one detainee sleeping with the assistance of special breathing equipment (a C-PAP machine) to address what we were told was a sleep apnea condition. At the Hampton Roads Regional Jail in Virginia we observed a detainee receiving treatment from a kidney dialysis machine. Again, assessing the quality of care and ICE's decision-making process for approval of nonroutine medical procedures was outside the scope of our review. ICE Compliance Inspections Also Show Some Instances of Noncompliance With Medical Standards We reviewed the most recently available ICE annual inspection reports for 20 of the 23 detention facilities that we visited. With the exception of the San Diego facility in California, the reports covered a different time period than that of our review. The 20 inspection reports showed that ICE reviewers had identified a total of 59 instances of noncompliance, 4 of which involved medical care. According to ICE policy, all adult, juvenile, and family detention facilities are required to be inspected at 12-month intervals to determine that they are in compliance with detention standards and to take corrective actions if necessary. As of November 30, 2006, according to ICE data, ICE had reviewed approximately 90 percent of detention facilities within the prescribed 12-month interval. Subsequent to each annual inspection, a compliance rating report is to be prepared and sent to the Director of the Office of Detention and Removal or his representative within 14 days. The Director of the Office of Detention and Removal has 21 days to transmit the report to the field office directors and affected suboffices. Facilities receive one of five final ratings in their compliance report: superior, good, acceptable, deficient, or at risk. 
ICE officials reported that as of June 1, 2007, 16 facilities were rated “superior,” 60 facilities were rated “good,” 190 facilities were rated “acceptable,” 4 facilities were rated “deficient,” and no facilities were rated “at risk.” ICE officials stated that this information reflects completed reviews; some reviews were still in process and pending completion. Therefore, ICE could not provide information on the most current ratings for some facilities. Four inspection reports disclosed instances of noncompliance with medical care standards. The Wakulla County Sheriff’s Office in Florida had sick call request forms that were available only in English, whereas the population was largely Spanish speaking. The Cowlitz County Juvenile Detention Facility in Washington state did not maintain the alien juvenile medical records on-site. The San Diego Correctional Facility staff, in addition to the deficiencies noted earlier in this statement, failed to obtain informed consent from the detainee when prescribing psychiatric medication. Finally, the Broward Transitional Center in Florida did not have medical staff on-site to screen detainees arriving after 5 p.m. and did not have a properly locked medical cabinet. We did not determine whether these deficiencies were subsequently addressed as required. Alien Detainee Complaints Included Concerns About Medical Care Our review of available grievance data obtained from facilities and discussions with facility management showed that the types of grievances at the facilities we visited typically included the lack of timely response to requests for medical treatment, missing property, high commissary prices, poor quality or insufficient quantity of food, high telephone costs, problems with telephones, and questions concerning detention case management issues. ICE’s detainee grievance standard states that facilities shall establish and implement procedures for informal and formal resolution of detainee grievances. 
Four of the 23 facilities we visited did not comply with all aspects of ICE’s detainee grievance standards. Specifically, Casa de San Juan Family Shelter in San Diego did not provide a handbook to those aliens in its facility, the Cowlitz County Juvenile Detention Center in Washington state did not include grievance procedures in its handbook, Wakulla County Sheriff’s Office in Florida did not have a log, and the Elizabeth Detention Center in New Jersey did not record all grievances that we observed in their facility files. The primary mechanism for detainees to file external complaints is directly with the OIG, either in writing or by phone using the DHS OIG complaint hotline. Detainees may also file complaints with the DHS Office for Civil Rights and Civil Liberties (CRCL), which has statutory responsibility for investigating complaints alleging violations of civil rights and civil liberties. In addition, detainees may file complaints through the Joint Intake Center (JIC), which is operated continuously by both ICE and U.S. Customs and Border Protection (CBP) personnel, and is responsible for receiving, classifying, and routing all misconduct allegations involving ICE and CBP employees, including those pertaining to detainee treatment. ICE officials told us that if the JIC were to receive an allegation from a detainee, it would be referred to the OIG. OIG may investigate the complaint or refer it to CRCL or DHS components such as the ICE Office of Professional Responsibility (OPR) for review and possible action. In turn, CRCL or OPR may retain the complaint or refer it to other DHS offices, including ICE Office of Detention and Removal (DRO), for possible action. Further, detainees may also file complaints with nongovernmental organizations such as ABA and UNHCR. These external organizations said they generally forward detainee complaints to DHS components for review and possible action. 
The following discussion highlights the detainee complaints related to medical care issues where such information is available. We did not independently assess the merits of detainee complaints. Of the approximately 1,700 detainee complaints in the OIG database that were filed in fiscal years 2003 through 2006, OIG investigated 173 and referred the others to other DHS components. Our review of approximately 750 detainee complaints in the OIG database from fiscal years 2005 through 2006 showed that about 11 percent involved issues relating to medical treatment, such as detainees alleging that they were denied access to specialized medical care. OPR stated that in fiscal years 2003 through 2006, it had received 409 allegations concerning the treatment of detainees. Seven of these allegations were found to be substantiated, 26 unfounded, and 65 unsubstantiated. Four of the seven substantiated cases involved employee misconduct, resulting in four terminations. According to OPR officials, three cases were still being adjudicated, and the nature of the allegations was not provided. Additionally, 200 of the allegations were classified by OPR either as information only to facility management, requiring no further action, or were referred to facility management for action, requiring a response. CRCL also receives complaints referred from the OIG, nongovernmental organizations, and members of the public. Officials stated that from March 2003 to August 2006 they received 46 complaints related to the treatment of detainees, although the nature of the complaints was not identified. Of these 46 complaints, 14 were closed, 11 were referred to ICE OPR, 12 were retained for investigation, and 9 were pending a decision about disposition. We could not determine the number of cases referred to DRO or their disposition. 
On the basis of a limited review of DRO’s complaints database and discussions with ICE officials knowledgeable about the database, we concluded that DRO’s complaint database was not sufficiently reliable for audit purposes. We recommended that ICE develop a formal tracking system to ensure that all detainee complaints referred to DRO are reviewed and the disposition, including any corrective action, is recorded for later examination. We reviewed 37 detention monitoring reports compiled by UNHCR from the period 1993 to 2006. These reports were based on UNHCR’s site visits and its discussions with ICE officials, facility staff, and detainee interviews, especially with asylum seekers. Eighteen of the 37 UNHCR reports cited concerns related to medical care, such as detainee allegations that jail staff were unresponsive to requests for medical assistance and UNHCR’s concern about the shortage of mental health staff. While American Bar Association officials informed us that they do not keep statistics regarding complaints, they compiled a list for us of common detainee complaints received through correspondence. This list indicated that of the 1,032 complaints it received from January 2003 to February 2007, 39 involved medical access issues such as a detainee alleging denial of necessary medication and regular visits with a psychiatrist, allegations of delays in processing sick call requests, and allegations of a facility not providing prescribed medications. Madam Chairman, this concludes my prepared remarks. I would be happy to answer any questions you or the members of the subcommittee have. Contacts and Acknowledgments For further information on this testimony, please contact Richard M. Stana at (202) 512-8777 or by e-mail at stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
In addition to the contact named above, William Crocker III, Assistant Director; Minty Abraham; Frances Cook; Robert Lowthian; and Vickie Miller made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2007, Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement (ICE) detained over 311,000 aliens, with an average daily population of over 30,000 and an average length of stay of about 37 days in one of approximately 300 facilities. The care and treatment of aliens while in detention is a significant challenge to ICE, as concerns continue to be raised by members of Congress and advocacy groups about the treatment of the growing number of aliens while in ICE's custody. This testimony focuses on (1) the extent to which 23 facilities complied with medical care standards, (2) deficiencies found during ICE's annual compliance inspection reviews, and (3) the types of complaints filed by alien detainees about detention conditions. This testimony is based on GAO's July 2007 report evaluating, among other things, the extent to which 23 facilities complied with aspects of eight of ICE's 38 National Detention Standards. This report did not address quality of care issues. At the time of its visits, GAO observed instances of noncompliance with ICE's medical care standards at 3 of the 23 facilities visited. These instances related to staff not administering a mandatory 14-day physical exam to approximately 260 detainees, not administering medical screenings immediately upon admission, and first aid kits not being available as required. However, these instances did not show a pervasive or persistent pattern of noncompliance across all 23 facilities. Officials at some facilities told GAO that meeting the specialized medical and mental health needs of detainees had been challenging, citing difficulties they had experienced in obtaining ICE approval for outside nonroutine medical and mental health care. On the other hand, GAO observed instances where detainees were receiving specialized care at the facilities visited. 
At the time of its study, GAO reviewed the most recently available ICE annual inspection reports for 20 of the 23 detention facilities that it visited; these reports showed that ICE reviewers had identified a total of 59 instances of noncompliance with National Detention Standards, 4 of which involved medical care. One facility had sick call request forms that were available only in English whereas the population was largely Spanish speaking. Another did not maintain alien medical records on-site. One facility's staff failed to obtain informed consent from the detainee when prescribing psychiatric medication. Finally, another facility did not have medical staff on-site to screen detainees arriving after 5 p.m. and did not have a properly locked medical cabinet. GAO did not determine whether these instances of noncompliance were subsequently corrected as required. The types of grievances at the facilities GAO visited typically included the lack of timely response to requests for medical treatment, missing property, high commissary prices, poor food quality and insufficient food quantity, high telephone costs, problems with telephones, and questions concerning detention case management issues. ICE's detainee grievance standard states that facilities shall establish and implement procedures for informal and formal resolution of detainee grievances. Four of the 23 facilities GAO visited did not comply with all aspects of ICE's detainee grievance standards. For example, one facility did not properly log all grievances that GAO found in their facility files. Detainee complaints may also be filed with several governmental and nongovernmental organizations. The primary way for detainees to file complaints is to contact the DHS Office of Inspector General (OIG). About 11 percent of detainee complaints to the OIG between 2005 and 2006 involved medical treatment issues. 
However, GAO found that the OIG complaint hotline 1-800 number was blocked or otherwise restricted at 12 of the facilities it tested. OIG investigates the most serious complaints and refers the remainder to other DHS components. GAO could not determine the number of cases referred to ICE's Office of Detention and Removal and concluded that ICE's detainee complaint database was not sufficiently reliable.
Background GSA estimated that federal agencies spent about $1.6 billion during fiscal year 2009 purchasing office supplies from more than 239,000 vendors. Federal agencies can use a variety of different approaches to purchase office supplies. For relatively small purchases, generally up to $3,000, authorized personnel can use their government purchase cards. For larger purchases, agencies may use other procedures under the Federal Acquisition Regulation, such as awarding a contract or establishing blanket purchase agreements. Alternatively, agencies can use the Federal Supply Schedule program (schedules program), a simplified process for procuring office supplies in which GSA awards contracts to multiple vendors for a wide range of commercially available goods and services to take advantage of price discounts equal to those that vendors offer their “most favored customers.” The schedules program can leverage the government’s significant aggregate buying power. In addition, agencies can make office supply purchases under GSA’s new initiative, the OS II program. The OS II program is an outgrowth of an earlier attempt by GSA to offer agencies a simplified process for fulfilling their repetitive supply needs while obtaining prices that are lower than vendors’ schedule prices. By July 2010, GSA had awarded 15 blanket purchase agreements competitively to support the OS II initiative, 13 of which went to small businesses. For its study, GSA reviewed office supply purchases in 14 categories of mostly consumable office supplies, ranging from paper and writing instruments to calendars and filing supplies. The report did not include non-consumable items such as office furniture and computers because they are not part of the standard industry definition of office supplies. 
The GSA report estimated that during fiscal year 2009, the 10 agencies with the highest spending on office supplies accounted for about $1.3 billion, or about 81 percent, of the total $1.6 billion spent governmentwide in the 14 categories of office supplies. These agencies were the Departments of the Army, Air Force, Navy, Homeland Security, Veterans Affairs, State, Health and Human Services, Justice, Commerce, and Agriculture. Further, the report stated that about 58 percent of office supply purchases were made outside of the GSA schedules program, mostly at retail stores. Additionally, GSA reported that agencies paid an average of 75 percent more (a price premium) than schedule prices, and 86 percent more than OS II prices, for their retail purchases.

GSA Report Had Data and Other Limitations

While the GSA report acknowledged some limitations with the data, we identified additional data and other limitations that lead us to question the magnitude of some of GSA’s reported price premiums. We were not able to fully quantify the impact of these limitations. Additionally, other agencies questioned the study’s specific findings related to price premiums, but their own studies of price premiums support GSA’s conclusion that better prices can be obtained through consolidated, leveraged purchasing. Because purchasing of office supplies is highly decentralized, GSA obtained data for its study from multiple disparate sources, such as the Federal Procurement Data System-Next Generation, the Department of Defense (DOD) electronic mall, and purchase card data from commercial banks. To determine the amount of funds spent on office supplies and to conduct related analyses, GSA had to sort through about 7 million purchase transactions involving over 12 million items. The agency took steps to clean the data prior to using them. For example, it removed duplicate purchases and items that did not meet its definition of office supplies. 
The GSA study noted that the estimated amount of funds and related calculations were to be considered sound and reliable estimates derived from rigorous data analysis techniques. We nonetheless identified additional data and other limitations in GSA’s study, including the following:

GSA may not have been able to properly control for purchases of different quantities of the same item. Because there is no consistency in how part numbers are assigned, manufacturers may in some cases assign the same part number both to individual items and to packages of items. GSA tried to control for these occurrences by excluding transactions that had large variations in retail prices for apparently identical items. However, when we reviewed data for 10 items within the writing instruments category, we found that retail prices for 6 of the 10 items varied by more than 300 percent; rollerball pens, for example, ranged from $9.96 to $44.96.

GSA used two different formulas for calculating price premium estimates but described only one of them in the study. The use of the unreported formula did not have a substantial impact on the retail price premium calculations for most categories of office supplies or on the overall conclusions of the study, but the GSA report would have been more complete had it fully disclosed all the formulas used for all categories of office supplies.

GSA did not identify or collect any data about price comparisons conducted by the purchase cardholders. GSA concluded, based on its interviews with senior-level acquisition officials, that purchase cardholders compared costs at some level prior to making a purchase. While these officials may have had a broad understanding of agency procurement policies and practices, they were not representative of the approximately 270,000 credit cardholders making purchasing decisions. 
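GSA's screen for apparently identical items with large retail price variation can be illustrated with a simple check. A sketch, assuming "variation" means the spread between the lowest and highest observed prices, using the rollerball pen figures cited above and GSA's 300 percent threshold:

```python
def price_variation_pct(prices: list[float]) -> float:
    """Spread between the lowest and highest observed prices, as a percent of the lowest."""
    lo, hi = min(prices), max(prices)
    return (hi - lo) / lo * 100

# Rollerball pen prices cited in the report: $9.96 to $44.96.
variation = price_variation_pct([9.96, 44.96])
exceeds_screen = variation > 300  # GSA's exclusion threshold for apparently identical items
print(f"{variation:.0f} percent variation; exceeds screen: {exceeds_screen}")
```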
GSA officials said that given the reporting time frame for the study, they did not have the resources or time needed to survey a representative sample of the 270,000 purchase cardholders. Additionally, officials from the Departments of the Air Force, Army, and Navy and the Department of Homeland Security believed, based upon their own studies, that the price premiums GSA reported for buying outside the GSA schedule were overstated. For example, in a study of the 125 most commonly purchased items, the Air Force determined that the OS II blanket purchase agreements could save about 7 percent. However, these agencies agreed with GSA’s overall conclusion that better prices can be obtained through leveraged buys and that prices available through the new OS II blanket purchase agreements were better than the prices available from their existing agency blanket purchase agreements.

New Strategic Sourcing Initiative for Office Supplies Shows Potential for Generating Savings

According to initial available data, GSA’s OS II blanket purchase agreements have produced savings. The OS II initiative, more so than past efforts, is demonstrating that leveraged buying can produce greater savings, and it has provided improvements for managing ongoing and future strategic sourcing initiatives. GSA is using a combination of agency and vendor involvement to identify key requirements and cost drivers, increase the ease of use, and obtain the data necessary to manage the program.

GSA’s Analysis of OS II Data Shows Savings Are Being Achieved

On the basis of the sales data provided by OS II vendors, GSA estimates the federal government saved $39.2 million between June 2010 and March 2012 by using the 15 blanket purchase agreements established for this program. These savings were estimated by comparing the lowest prices of a set—or market basket—of over 400 items available on GSA’s schedules program contracts before OS II with the prices and discounts being paid for the same items on the OS II blanket purchase agreements. 
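The market-basket estimate described above can be sketched as a quantity-weighted price comparison. The item rows below are illustrative only (the actual basket covered over 400 items), and the sketch assumes savings are simply the best pre-OS II schedule price minus the OS II price, times quantity:

```python
def estimated_savings(basket: list[tuple[float, float, int]]) -> float:
    """Sum of (best pre-OS II schedule price - OS II price) * quantity over the basket."""
    return sum((schedule - os2) * qty for schedule, os2, qty in basket)

# Illustrative basket rows: (best schedule price, OS II price, quantity purchased).
basket = [
    (2.00, 1.84, 10_000),  # e.g., a ream of paper, 8 percent lower under OS II
    (5.50, 5.06, 4_000),   # e.g., a box of pens, also 8 percent lower
]
print(f"${estimated_savings(basket):,.2f} estimated savings")
```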
Importantly, and unlike the earlier GSA report, GSA’s conclusions about savings realized under OS II are based on data from vendors—data the vendors are required to collect and provide in the normal course of business—and not on data collected after the fact from sources not designed to produce the information needed to estimate savings. GSA’s comparison of the market basket of best schedule prices against the OS II blanket purchase agreement vendors’ prices found that prices offered by OS II vendors were an average of 8 percent lower. The average savings, however, is expected to fluctuate somewhat as the OS II initiative continues to be implemented and the mix of vendors, products, and agencies changes. For example, GSA found that savings, as a percentage, declined slightly as agencies with historically strong office supplies management programs increased their use of OS II. Conversely, GSA expects the savings percentage to increase as agencies without strong office supplies management programs increase their use. In addition to the savings from the blanket purchase agreements, GSA representatives told us that they are also seeing prices decrease on schedules program contracts as vendors that were not selected for the OS II program react to the additional price competition created by the OS II initiative. The agency decided to extend the OS II blanket purchase agreements for an additional year after negotiating additional price discounts averaging about 3.9 percent with 13 of the 15 vendors in the program. The blanket purchase agreements also include tiered discounts, which apply when specific sales volume thresholds are met. Sales realized by 5 of the vendors had reached the first-tier discount level as of April 2012, and those vendors have since adjusted their prices to provide the corresponding discounts. 
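The tiered discounts mentioned above behave like volume pricing: once cumulative sales under a blanket purchase agreement cross a threshold, a deeper discount applies. A hedged sketch (the thresholds and rates below are hypothetical; the actual tiers are set in each agreement):

```python
def tier_discount(cumulative_sales: float, tiers: list[tuple[float, float]]) -> float:
    """Discount rate earned at a given cumulative sales volume.

    tiers: (sales threshold, discount rate) pairs in ascending threshold order.
    """
    rate = 0.0
    for threshold, discount in tiers:
        if cumulative_sales >= threshold:
            rate = discount
    return rate

# Hypothetical tiers: 1 percent off past $5M in sales, 2 percent past $10M.
tiers = [(5_000_000, 0.01), (10_000_000, 0.02)]
print(tier_discount(7_500_000, tiers))   # first tier reached
print(tier_discount(12_000_000, tiers))  # second tier reached
```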
GSA anticipates that additional vendors will reach sales volumes that exceed the first-tier discount threshold in the first option year, which will trigger additional discounts. An additional benefit of OS II may be lower contract management costs, as agencies can rely on GSA to administer the program instead of their own staffs. While this may create some additional burden for GSA, officials believe the overall government cost to administer office supply purchases should decrease.

OS II Includes Key Management Goals and Practices to Enhance Oversight and Manage Suppliers

GSA has incorporated a range of activities representative of a strategic procurement approach into the OS II initiative. These activities range from obtaining a better picture of spending, to taking an enterprisewide approach, to developing new ways of doing business. They also involve supply chain management activities. All of these activities involve some level of centralized oversight and management. GSA is capturing lessons learned from OS II and is attempting to incorporate these lessons into other strategic sourcing initiatives. GSA obtained commitments from agencies and helped set goals for discounts to let businesses know that the agencies were serious in their commitment to the blanket purchase agreements. This also helped GSA determine the number of blanket purchase agreements that would be awarded. As part of the overall strategy, a GSA commodity council identified five overarching goals, in addition to savings, for the OS II initiative. These goals and the methods used to address them are shown in table 1. Several new business practices have been incorporated into the OS II program to meet these goals. For example, to meet the capture-data goal, GSA is collecting data on purchases and vendor performance that are assimilated and tracked through dashboards, which are high-level indicators of overall program performance. 
The dashboard information is used by the GSA team members responsible for oversight to ensure that the vendors are meeting the terms and conditions of the blanket purchase agreements and that the program is meeting overall goals. The information is also shared with agencies using OS II. Our review of GSA’s OS II vendor files found that GSA has taken a more active role in oversight and is holding the vendors accountable for performance. For example, GSA has issued Letters of Concern to four vendors and has issued one Cure Notice to a vendor. These letters and notices are used to inform vendors that the agency has identified a problem with the vendor’s compliance. To support the OS II management responsibilities, GSA charges a 2 percent management fee, which is incorporated into the vendors’ prices. This fee, which is higher than the 0.75 percent fee normally charged on GSA schedules program sales, covers the additional program costs, such as the cost of the six officials responsible for administering the 15 blanket purchase agreements, as well as their contractor support. In addition, to increase savings and ease of use, OS II includes a point-of-sale discount, under which blanket purchase agreement prices are automatically charged whenever a government purchase card is used for an item covered by the blanket purchase agreement, rather than having buyers ask for a discount. Additionally, purchases are automatically tax exempt if they are made using a government purchase card. GSA’s report identified state sales taxes as costing federal agencies at least $7 million in fiscal year 2009. GSA’s experience with OS II is being applied to other strategic sourcing initiatives. For example, GSA set up a commodity council for the Federal Strategic Sourcing Initiative Second Generation Domestic Delivery Services II program. The council helped identify program requirements and provide input on how the program operates. 
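The fee arithmetic described above (a 2 percent OS II management fee versus the standard 0.75 percent schedules fee) can be made concrete. A sketch, under the simplifying assumption that the fee is applied as a flat markup on the vendor's base price; the report does not detail the exact mechanics:

```python
OS2_FEE = 0.02         # OS II management fee, per the report
SCHEDULE_FEE = 0.0075  # standard fee on GSA schedules program sales

def price_with_fee(base_price: float, fee_rate: float) -> float:
    """Vendor price with the management fee built in (simplifying assumption)."""
    return base_price * (1 + fee_rate)

base = 100.00  # hypothetical base price
print(f"OS II: ${price_with_fee(base, OS2_FEE):.2f}; schedules: ${price_with_fee(base, SCHEDULE_FEE):.2f}")
```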
Concluding Observations

GSA’s office supplies report contained some data and other limitations, but it showed that federal agencies were not using a consistent approach to where and how they bought office supplies and often paid a price premium as a result. The magnitude of the price premium may be debatable, but other agencies that have conducted studies came to the same basic conclusion about the savings potential from leveraged buying. The GSA study helped set the course for a more strategic approach to buying office supplies—an approach that provides data to oversee the performance of vendors, monitor prices, and estimate savings. Additional savings are expected as more government agencies participate in the OS II initiative and further leverage the government’s buying power. Chairman Mulvaney, Ranking Member Chu, and Members of the Subcommittee on Contracting and Workforce, this completes my prepared statement. I am happy to answer any questions you have.
GSA estimated that federal agencies spent about $1.6 billion during fiscal year 2009 purchasing office supplies from more than 239,000 vendors. Concerned that federal agencies may not be getting the best prices available, Congress directed GSA to study office supply purchases by the 10 largest federal agencies. GSA delivered the results of its study in November 2010. The study also discussed GSA’s efforts to implement an initiative focused on leveraging the government’s buying power to realize savings when buying office supplies, known as OS II. Congress directed GAO to assess the GSA study, with particular attention to the potential for savings. This testimony is based on the findings and conclusions of GAO’s December 2011 report, GAO-12-178, and focuses on (1) the support for the findings and conclusions in GSA’s study, and (2) how GSA's new office supply contracts support the goal of leveraging the government’s buying power to achieve savings. In 2010, the General Services Administration’s (GSA) pricing study found that during fiscal year 2009, the 10 largest federal agencies accounted for about $1.3 billion, or about 81 percent, of the total $1.6 billion spent governmentwide in 14 categories of office supplies. About 58 percent of their office supply purchases were made outside of the GSA schedules program—a simplified process to take advantage of price discounts equal to those that vendors offer “most favored customers.” Most of these purchases were made at retail stores. GSA also reported that agencies paid an average of 75 percent more (a price premium) than schedule prices for their retail purchases and 86 percent more compared to Office Supplies II (OS II) prices. While GSA acknowledged some limitations with the study data, we identified additional data and other limitations that lead us to question the magnitude of some of GSA’s reported price premiums and assertions. 
More specifically, we determined that the study may not have properly controlled for quantities, used two different formulas to calculate price premium estimates, and relied on interviews with senior-level acquisition officials instead of purchasers to determine whether buyers compared prices before making purchases. We were not able to fully quantify the impact of these limitations. Additionally, other agencies questioned the study’s specific findings related to price premiums, but their own studies of price premiums support GSA’s conclusion that better prices can be obtained through consolidated, leveraged purchasing. Available data show that the OS II initiative produced savings of $39.2 million from June 2010 through March 2012. According to GSA, the OS II initiative is demonstrating that leveraged buying can produce greater savings and has provided improvements for managing ongoing and future strategic sourcing initiatives. For example, GSA reports that OS II allowed it to negotiate discounts with vendors who were selected for the initiative. As governmentwide sales surpass certain targets, additional discounts are applied to purchase prices. Further, OS II has spurred competition among schedule vendors that were not selected for OS II, resulting in decreased schedule prices. The initiative is also expected to lower governmentwide supply costs through more centralized contract management. Another key aspect of the initiative is that participating vendors provide sales and other information to GSA to help monitor prices, savings, and vendor performance. Finally, GSA is capturing lessons learned from OS II and is attempting to incorporate these lessons into other strategic sourcing initiatives.
Background

The Federal Reserve Act of 1913 established the Federal Reserve System as the country’s central bank. The act made the Federal Reserve an independent, decentralized bank to better ensure that monetary policy would be based on a broad economic perspective from all regions of the country. The Federal Reserve System consists of the Board of Governors located in Washington, D.C., and 12 Reserve Banks, with 25 branches, located throughout the nation. Each Reserve Bank is a federally chartered corporation with a Board of Directors representing the public and member banks in its district. Under the Federal Reserve Act, Reserve Banks are subject to the general supervision of the Board. The Board is a federal agency, responsible for maintaining the stability of financial markets, supervising financial and bank holding companies and state-chartered banks that are members of the Federal Reserve and the U.S. operations of foreign banking organizations, and overseeing the operations of the Reserve Banks. The Board has delegated some of these responsibilities, including bank examinations, to the Reserve Banks, which also provide payment services, such as check clearing and wire transfers, to depository institutions and government agencies. In 2001, there were approximately 25,000 staff in the Federal Reserve, with about 93 percent of these employees working at the Reserve Banks. From 1995 to 2001, Federal Reserve employment decreased by 482 employees. Employment for 2002 is projected to grow by 314, largely because of plans to increase security staff. Figure 1 shows Federal Reserve employment from 1995 to 2001. In the 1996 report, we noted that the Federal Reserve is self-financed, and that the income it collects but does not use to fund its operations is turned over to the U.S. Treasury. In 2001, the Federal Reserve had a total income of $31.9 billion and expenses of approximately $2.1 billion; it subsequently transferred $27.1 billion to the U.S. Treasury. 
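The self-financing arithmetic above can be laid out explicitly. Note that the Treasury transfer is smaller than income minus expenses because other distributions (such as statutory dividends to member banks and additions to surplus) also come out of income; the report does not itemize them, so the residual below is simply labeled as such:

```python
income = 31.9    # 2001 total income, in billions of dollars
expenses = 2.1   # 2001 expenses, in billions of dollars
remitted = 27.1  # 2001 transfer to the U.S. Treasury, in billions of dollars

# Residual covering dividends, surplus additions, and other items not itemized here.
other_distributions = income - expenses - remitted
print(f"${other_distributions:.1f} billion in other distributions")
```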
Since 1993, the operating expenses of the Federal Reserve have increased an average of 4.2 percent per year (2.2 percent when adjusted for inflation). According to its 2002 Budget Review, the Federal Reserve’s operating expenses are budgeted at $2.8 billion, an increase of 4.5 percent from the estimated 2001 expenses. Figure 2 shows the Federal Reserve’s operating expenses from 1993 to 2001. Our 1996 report identified several inefficiencies in the Federal Reserve’s policies and practices that had increased the cost of providing its services, including its costs for travel, personnel benefits, and contracting and procurement. Many of these inefficiencies related to the decentralized nature of the Federal Reserve, which allowed each Reserve Bank to set many of its own policies, and to the absence of traditional cost-minimizing forces, such as competition or appropriations, that are commonplace in entities that are either purely private or public sector in nature. With this in mind, we suggested that the Federal Reserve could do more to increase its cost consciousness and ensure that it is operating as efficiently as possible. The 1996 report concluded that major cost reductions ultimately depended on the Federal Reserve’s carefully reexamining its mission, structure, and work processes. The report identified areas that had potential for reducing the Federal Reserve’s costs. The recommendations from the report fall into four broad categories: systemwide mission and management issues, control and oversight mechanisms, cost of specific Federal Reserve administrative functions, and charging banks for the costs associated with bank examinations.

The Federal Reserve Has Taken Steps to Address Systemwide Mission and Management Issues

In 1996, we noted that the Federal Reserve faces major challenges in its mission and lines of business, particularly in services to depository institutions and government agencies and in bank supervision. 
These challenges included (1) increased competition from the private sector and increasing difficulties in recovering costs in priced services; (2) increasingly widespread use of electronic transactions in the financial services industry; and (3) continuing rapid consolidation of the banking industry, which could affect both the need for and the distribution of bank examination staff. Because these areas accounted for the largest part of the Federal Reserve’s expenses and staffing, we believed that addressing these challenges effectively would likely result in major changes in how the Federal Reserve operated. The Federal Reserve’s strategic plans and programs under development at the time of the 1996 report generally focused on individual divisions, Reserve Banks, or functions. While these plans served an important purpose in defining the direction of these Federal Reserve entities, we also believed that the emerging issues and challenges facing the Federal Reserve would necessitate strategic planning focused on the system as a whole. We also found that each of the Reserve Banks administered various functions independently, rather than as a single entity that could operate more efficiently or possibly command more advantageous prices. These findings led to the recommendations in figure 3. Since 1996, the Federal Reserve has consolidated the management of the services provided by the individual Reserve Banks, particularly payment services. For example, payment system products and new technologies are now managed on a systemwide basis. The Federal Reserve also has undertaken an assessment of its role in providing payment services. In January 1998, the Committee on the Federal Reserve in the Payments Mechanism, chaired by the Vice Chair of the Board of Governors, issued its report entitled The Federal Reserve in the Payments Mechanism. 
The study examined the payment services provided by the Federal Reserve in light of the rapid changes occurring in the financial services and technology sectors. These services include check clearing and automated clearinghouse services such as direct deposits. The committee undertook a fundamental review of the Federal Reserve’s role in the payments system and considered how alternative roles for the Federal Reserve might enhance or undermine the integrity, efficiency, and accessibility of the payments system. It concluded that the Federal Reserve’s current role, or even a slightly enhanced role, in fostering technical change was preferred by most payment system participants. In July 1999, the Federal Reserve formed the Payments System Development Committee (PSDC) to advise the Board and system officials on medium- and long-term public policy issues surrounding developments in the retail payments system. This committee, which includes two Federal Reserve Board Governors, a Reserve Bank President, and a Reserve Bank First Vice President, was intended to follow up on the work of the Committee on the Federal Reserve in the Payments Mechanism. PSDC’s work has included the Check Truncation Act, proposed legislation designed to remove certain legal impediments to check truncation and enhance the overall efficiency of the nation’s payments system. It has also worked with payments industry officials to develop standards to facilitate increased use of electronic check processing. The Federal Reserve has introduced new payment products, such as its imaging service, to recognize the increasing role that image-based services are playing in the evolution of the U.S. payments system and the migration toward more electronic payments. The Federal Reserve has also undertaken numerous initiatives to streamline management structures, consolidate operations, and apply emerging technologies to the Reserve Banks’ business processes in order to improve quality and reduce costs. 
Federal Reserve officials explained that many of their initiatives have the effect of consolidating functions of the Federal Reserve without consolidating the 12 Reserve Banks. For example, in 1999, the Treasury Direct customer support function was consolidated so that only 3 Reserve Banks are providing customer service support to individuals who purchase Treasury securities directly from the Treasury. From 1999 to 2002, the Federal Reserve consolidated several aspects of Fedwire funds and securities transfer operations. Similarly, in 2000, the Treasury Investment Program was implemented, centralizing services that the Federal Reserve provides to the Treasury Tax and Loan Program. The Federal Reserve continues to standardize and centralize the management of computer applications used for common business needs. It has selected a central site to develop and implement a centralized application for the Reserve Bank Planning and Control System. The Federal Reserve estimates that the centralization of applications such as the budget application will result in systemwide savings of $2.6 million over a 5-year period.

The Federal Reserve Has Strengthened Its Control and Oversight Mechanisms

In 1996, we noted a number of weaknesses in the Federal Reserve’s budgeting and internal oversight processes. In reviewing the budgeting process for both the Board and Reserve Banks, we found that it was based on a current services approach that assumed both that existing functions would be retained and that the budget would continue to grow incrementally. We concluded that such an approach did not adequately support top management in controlling costs and imposing the internal self-discipline necessary for the Federal Reserve to respond effectively to future priorities. 
We found that internal oversight processes, such as performance measurement, internal audit, and financial audits, either did not support performance evaluation from a systemwide perspective or were becoming increasingly inappropriate in the changing environment. We also noted that the Office of Inspector General was authorized to review the activities of the Board, but not the Reserve Banks. As a result, we concluded that the Federal Reserve might not be making the best use of the resources devoted to Federal Reserve oversight. These findings led to the recommendation in figure 4. In addition to incorporating systemwide business strategies and resource needs into the Federal Reserve Bank budget planning process, the Federal Reserve Banks have changed their cost accounting system and have altered their internal and external auditing practices. The Federal Reserve recognized that its budget process required too much effort and yielded an unrealistic spending outlook. Therefore, the Federal Reserve’s Financial Support Office proposed changes to the budget process in which system budget targets would be based on systemwide guidance rather than on Reserve Bank projections of their expenses. Banks would provide their expense information later in the process, when their budget development would be further along—an approach that also could help align budget planning with goal-setting for business strategies. In 2001, the Federal Reserve’s Conference of First Vice Presidents recommended incorporating this guidance to align the Reserve Bank budget projections more closely with business strategies. The conference identified “national business leaders,” that is, Federal Reserve staff with responsibility for most Reserve Bank functions. These leaders provide business guidance as input to the Reserve Banks as they prepare their Reserve Bank Budget Outlook. They are responsible for almost 90 percent of the total Reserve Bank spending. 
(The remaining 10 percent of spending is largely in support of Federal Reserve monetary policy formulation and is not covered by systemwide budget goals.) The Board of Governors has also redefined its strategic plan and has now implemented a more rigorous 4-year planning process and a 2-year budget process. With this new plan in place, the Board has consolidated overhead and several support functions to reduce costs. In March 2000, the Conference of First Vice Presidents approved recommendations designed to improve the cost-accounting practices of the Federal Reserve Banks. Board staff told us that Reserve Bank staffs have implemented these recommendations. Changes, according to Board staff, include the following:

Simplifying expense allocation by tying expenses to departments and organizational units in Reserve Banks rather than to specific activities. This change will eliminate a major portion of the manual process currently in place and, in turn, will reduce the opportunity for erroneous activity charges.

Shifting some expenses from overhead to the service line that they support to reflect expenses more accurately.

Eliminating sharing of costs among Reserve Banks for all services and operations that are provided centrally. Instead, the Reserve Bank that provides the service will now report associated expenses in an effort to enhance accountability.

The Board adopted several new policies aimed at safeguarding the independence of its external auditor. An external auditor, under a contract administered by Board staff, reviews each Reserve Bank’s financial statements. To enhance independence, in May 2002, the Board placed restrictions on Reserve Banks’ ability to contract with the Board’s external audit firm or to hire an auditor that has worked on the audit of a Reserve Bank. 
The Board currently requires that the external auditor of the Reserve Banks remain independent of Reserve Bank management, and that it provide a written statement to the Board delineating all relationships between the external auditor and the Reserve Banks. In 2001, the Board revised its policy on Reserve Bank audit committee duties and responsibilities, requiring that (1) audit committees adopt formal written charters, (2) audit committee members be independent and financially literate, and (3) audit committees meet with external auditors to discuss the Reserve Bank’s financial statements and issues arising from the annual external audit. The audit authority of the Inspector General remains unchanged.

The Federal Reserve Has Consolidated Some of Its Administrative Functions

In 1996, we concluded that opportunities existed to reduce the Federal Reserve’s spending in a number of different administrative areas. We found in 1996 that Federal Reserve personnel compensation (pay and benefits) varied within the Federal Reserve and included benefits that were relatively generous compared with those of government agencies with similar responsibilities. We also found that the Federal Reserve’s health care benefits were managed on a decentralized basis, with each Bank negotiating its own health care coverage. We noted that although the Reserve Banks had individually made efforts to reduce health care costs, the Reserve Banks had not worked together to determine whether their combined bargaining powers would further reduce these expenses. We found in 1996 that travel policies differed between the Board and the Reserve Banks and among the Reserve Banks. Therefore, the same trip could present different costs to different Reserve Banks. The differing travel policies made it necessary for each Reserve Bank to manage its own travel costs rather than allowing the Federal Reserve to manage travel costs on a centralized basis. 
We also found in 1996 that the Board and the Reserve Banks used different procurement guidelines. The Board, while not specifically directed to do so by the Federal Reserve Act, followed the spirit of the federal government contracting rules. The Reserve Banks were required to follow Uniform Acquisition Guidelines, which were adopted by the Reserve Banks in 1985. These guidelines were designed to provide minimum requirements for Reserve Bank procurement activities. By providing opportunities for all interested bidders to become a selected source, the guidelines attempted to ensure that Reserve Banks treated sources fairly and impartially. By fostering competition in the procurement process, Reserve Banks would have greater opportunity to realize cost savings through lower competitive pricing. Despite the Uniform Acquisition Guidelines, we observed instances in which practices at individual Reserve Banks differed significantly, some practices favored certain sources over others, and proper controls over conflicts of interest were not followed at certain Reserve Banks. Practices at certain Reserve Banks lacked independent checks and reconciliations, and best practices used by certain Reserve Banks were not disseminated among the Reserve Banks. These findings led to the recommendations in figure 5. The Federal Reserve has taken or begun a number of actions in response to the findings and recommendations in our 1996 report. These actions include reassessing the compensation approach for the Federal Reserve, consolidating health insurance for the Reserve Banks, and changing its travel and acquisition practices. A work group of senior Reserve Bank and Board officials was established to reassess the compensation philosophy within the Federal Reserve System. The Board approved a new Reserve Bank total compensation philosophy on June 18, 1997. 
The philosophy provided broad principles for the design of benefit plans that were intended to be competitive within relevant labor markets and sufficiently flexible to attract, retain, and motivate the staff and officers required to fulfill the mission of the Federal Reserve. The policy indicates the purpose and objectives of Reserve Bank compensation and benefit programs as well as relevant labor markets and competitive position. In 1999, the Board approved in concept a Reserve Bank strategic benefits plan, which was developed to be a more specific plan for ensuring that benefits will be appropriately competitive into the future. The Federal Reserve is in the process of consolidating the administration and selection of health benefits so that all Reserve Banks have similar plans that are administered uniformly. Initiatives in managing benefits also have led to the consolidation of the administration and record keeping of several other benefits, including the Thrift Plan, retirement plans, retiree prescription plans, and workers' compensation plans. According to Federal Reserve staff, the thrift and retirement plans were consolidated a number of years ago and are fully outsourced to a single vendor. Moreover, the Board's Office of Employee Benefits projects that it will save $4 million from implementing the consolidated health care plans. In 2001, the Reserve Banks began a plan to reduce travel costs by upgrading videoconferencing capabilities. A vendor for this system was selected in the first quarter of 2002, and Federal Reserve officials said installation of new facilities has been completed in offices that previously had videoconferencing. The next phase of this effort will include installing videoconferencing facilities in offices that did not previously have them. The Board has also encouraged travel savings through pursuing government discounts and traveling at nonpeak hours. 
In March 1997, the Federal Reserve completed a fundamental review of its Uniform Acquisition Guidance. As part of this effort, it reviewed benchmarking and best practices efforts to determine if any changes were necessary. In July 1998, a new Model Acquisition Guideline was approved, replacing the 1985 Uniform Acquisition Guidelines. The Federal Reserve has continued to engage in its benchmarking process. Federal Reserve staff said that this process has revealed continued declines in the cost of providing procurement services through the use of streamlined purchasing procedures. Benchmarking studies have concluded that the Federal Reserve’s enhanced use of the System Purchasing Service to gain economies of scale has resulted in significant savings. Board staff explained that, where it makes sense, the Board uses procurement resources available to government agencies, such as the General Services Administration. However, they said that in some cases, such as in their procurement of telecommunication services, the Board and the Reserve Banks might negotiate together to enhance their bargaining position. The Federal Reserve Has Continued Its Policy of Not Charging for Bank Examinations Our 1996 report noted that the Federal Reserve’s revenues, and hence its return to the taxpayers, would be enhanced by charging fees for bank examinations. Federal bank regulators differ in their policies regarding the assessment of fees for bank examinations. The Office of the Comptroller of the Currency (OCC) charges national banks for examinations that it conducts. In contrast, state-chartered banks, which are supervised by either the Federal Reserve or the Federal Deposit Insurance Corporation (FDIC) in conjunction with state banking agencies, are charged fees by those state banking agencies but not by their federal regulator. 
Thus, the costs of the Federal Reserve's federal bank examinations are borne by the taxpayers, while for national banks, the costs of examination are borne by the banks that are examined. The Federal Reserve Act authorizes the Federal Reserve to charge fees for bank examinations, but the Federal Reserve has not done so for the state member banks it examines. In addition, the Federal Reserve inspects bank holding companies but does not charge the institutions for those inspections. Similarly, FDIC is authorized to charge for bank examinations but it does not do so. These findings led to the recommendation in figure 6. The Federal Reserve continues to believe that it should not charge for bank examinations. Federal Reserve officials told us that, since state member banks already pay state banking commissions for examinations, an additional charge for a Federal Reserve examination would increase the cost and lessen the value of a state banking charter, thus compromising the nation's dual banking system. Banks pay an array of annual charges and assessments associated with their charters, as table 1 indicates. All three federal bank regulators are self-funded. Differences in their funding mechanisms, however, may lead to differences in who ultimately pays the costs of supervision and regulation, even if the supervisory and regulatory actions serve the common purposes of ensuring that banks are operated in a safe and sound manner. The Federal Reserve funds its operations from the earnings on its portfolio of Treasury securities, as previously noted. Since the Federal Reserve's transfers to the Treasury are reduced by the expenses of its bank supervisory and regulatory activities, the taxpayer ultimately pays for Federal Reserve activities. FDIC may fund its operations from the premiums that banks pay for deposit insurance. 
However, because the Bank Insurance Fund has been at 1.25 percent of bank deposits since 1995, FDIC generally does not charge banks premiums for deposit insurance. If FDIC were to begin charging insurance premiums, then either the bank's owners or customers, including depositors, would be paying for FDIC examinations. OCC is funded by assessments on the assets held by national banks and fees for services. Under this arrangement, the owners or customers of national banks pay for OCC operations. The differences among the funding approaches of the federal bank regulators continue to raise questions about whether these impose unequal burdens on banks—varying with their charter—and their customers. “The dual banking system not only fosters and preserves innovation but also constitutes our main protection against overly zealous and rigid federal regulation and supervision. A bank must have a choice of more than one federal regulator, and must be permitted to change charters, to protect itself against arbitrary and capricious regulatory behavior. Naturally, some observers are concerned that two or more federal agencies will engage in a ‘competition in laxity,’ and we must guard against that; but the greater danger, I believe, is that a single federal regulator would become rigid and insensitive to the needs of the marketplace.” Further, Federal Reserve officials note that the roughly even shares of banks across the charters, and consistent shares of deposits among the charters, suggest that the relative costs and benefits of the charters balance. National banks, they believe, see a value in their charter that at least offsets any additional costs. “Healthy competition in the quality of supervision and innovation in meeting the needs of banks and their customers should lie at the heart of our dual banking system. Unfortunately, today a primary focus of this competition is on price. 
Because state banks receive a federal subsidy for the predominant part of their supervision, there is a cost incentive for banks to avoid or depart from the national charter in favor of the heavily subsidized state charter. This inevitably tends to undermine a vigorous and healthy dual banking system.” OCC has proposed that the costs of supervising national banks (which OCC performs) and state supervision of state-chartered banks be paid from FDIC insurance funds. This approach would attempt to provide consistency at the federal regulator level by having the costs of regulation borne by taxpayers or depositors. We have generally favored an approach in which regulated entities pay for their own federal regulation. Since the Federal Reserve continues to strongly disagree with our recommendation regarding charging for bank examinations, actions to implement the recommendation are unlikely. While we continue to believe that a strong argument exists for industry funding of federal supervision and regulation, we also recognize the benefits of the dual banking system. Ultimately, however, it is up to Congress to decide how to fund federal regulation and to balance the differences among the different bank regulators. Scope and Methodology To review the Federal Reserve's actions in response to our recommendations, we interviewed staff from the Federal Reserve Board's Division of Reserve Bank Operations, Division of Banking Supervision and Regulation, and Office of Inspector General, as well as the Board's Staff Director for Management. We reviewed relevant policies and other Board actions and documents. We did not visit any Reserve Banks to verify or review implementation of these new policies. We conducted our work in Washington, D.C., between April and August 2002, in accordance with generally accepted government auditing standards. Agency Comments We requested comments on a draft of this report from the Board. 
In these comments, the Director of the Division of Reserve Bank Operations and Payment Systems agreed with the information presented in this report; the comments are reprinted in appendix I. We incorporated the Board's technical comments where appropriate. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs, and the House Committee on Financial Services. We will also send copies to the Chairman of the Board of Governors of the Federal Reserve System, and we will make copies available to others on request. In addition, this report is available at no charge on our Web site at http://www.gao.gov. Please contact me or James McDermott, Assistant Director, at (202) 512-8678 if you or your staff have any questions concerning this report. Other key contributors to this report were Thomas Conahan and Josie Sigl. Comments from the Federal Reserve System
In a 1996 report, GAO made a number of recommendations to the Board of Governors of the Federal Reserve System for reducing spending and improving the operations of the Federal Reserve System (Federal Reserve). The Federal Reserve has taken actions responsive to most of the 1996 report's recommendations. The Federal Reserve has retained its structure but has sought to consolidate operations and bring common management practices to the 12 Federal Reserve District Banks. In particular, the Federal Reserve now manages the payment services it provides to banks on a systemwide basis. The Federal Reserve has also changed its budgeting, internal oversight, and cost accounting processes in an effort to increase accountability. It has taken other steps to decrease costs in areas identified by the 1996 report. Specifically, the Reserve Banks have consolidated their purchase of some services, such as prescription drug coverage, to take advantage of volume discounts, rather than continuing with the former practice of each individual Reserve Bank purchasing services separately. The Federal Reserve, however, continues not to charge for bank examinations. Federal Reserve officials explained that they continue to believe that charging for bank examinations would tip the current balance against state-chartered banks, and thus be inconsistent with maintaining the dual banking system of state and nationally chartered banks. Although a strong argument exists for industry funding of federal supervision and regulation, GAO recognizes the benefits of the dual banking system. Ultimately, it is up to Congress to decide how to fund federal regulation and to balance the differences among the different bank regulators.
Background In Afghanistan, the use of local vendors by U.S. and international forces as part of an effort to create economic development is considered to be one of the key supporting elements of the U.S. COIN strategy. For example, guidance issued in August 2010 and amplified in September 2010 by the ISAF/USFOR-A Commander emphasizes the role of contracting in the implementation of the COIN strategy. In Afghanistan, local personnel make up a significant portion of DOD’s contractor workforce. According to CENTCOM’s quarterly census data, in the first quarter of fiscal year 2011, there were more than 87,000 DOD contractor personnel in Afghanistan. Of those personnel, Afghan nationals made up approximately 53 percent of the contracted workforce. According to DOD, recent initiatives that have a direct influence on the hiring of local nationals in Afghanistan include developing a more skilled workforce; increasing business opportunities; increasing community cash flow; improving public infrastructure, such as roads and utilities; and enhancing community organizational capacity. In addition to its importance to DOD, local contracting is integral to the efforts of other U.S. government agencies, such as USAID, to rebuild and expand infrastructure and economic capacity in Afghanistan. Assisting in this effort by providing contracting support are numerous agencies, commands, and offices. For U.S. forces, the two primary DOD contracting entities in Afghanistan based on fiscal year 2010 obligations are CENTCOM Contracting Command and USACE. CENTCOM Contracting Command—whose structure includes the Senior Contracting Officer-Afghanistan and the regional contracting centers—obligated over $2.7 billion in contracts in fiscal year 2010. Also in fiscal year 2010, USACE obligated more than $1.8 billion, and it is expected to undertake approximately $3.7 billion in projects in Afghanistan in fiscal year 2011. 
Many of these reconstruction and infrastructure projects are expected to be built by vendors, including the extensive use of subcontractors. Further, in fiscal year 2010, USAID obligated over $2.7 billion in program funds for projects in Afghanistan. According to USAID officials, the agency is actively involved in using local vendors to provide goods and services. Additionally, contracts that support forces in Afghanistan may be awarded in the United States by contracting offices and commands, such as the Army Materiel Command’s Rock Island Contracting Center and U.S. Transportation Command. According to State officials, most of the agency’s contracts in Afghanistan are awarded by contracting officials in the United States. Further, given the NATO environment in Afghanistan, contracts that directly or indirectly support U.S. forces may also be awarded by the contracting offices of coalition partners, such as the United Kingdom and Germany, and by NATO contracting entities, such as the NATO Maintenance and Supply Agency. While the use of local vendors in Afghanistan is a key element of the COIN strategy, it also brings about challenges. For example, the ISAF/USFOR-A Commander’s September 2010 guidance cautions that if large quantities of international contracting funds are spent quickly and with insufficient oversight, it is likely that some of those funds will unintentionally fuel corruption, finance insurgent organizations, strengthen criminal patronage networks, and undermine efforts in Afghanistan. Further, the guidance suggests that extensive use of subcontractors in Afghanistan, as well as the lack of visibility of subcontractors by contracting personnel, could increase the risk of corruption. 
The September 2010 guidance directs commanders and contracting officials to gain and maintain visibility of the subcontractor network, and it warns that excessive subcontracting tiers provide opportunities for criminal networks and insurgents to divert contract money from its intended purpose. Additionally, USAID's Mission Order for Afghanistan 201.03 seeks to prevent USAID programs and funds from benefiting terrorists. To prevent resources from being used to support terrorist activities or organizations, steps have been taken, such as the issuance of Executive Order 13224 in September 2001, which blocks the property of individuals and entities designated as terrorists and prohibits the support of these listed individuals or entities through dealing in blocked property. Additionally, various implementing regulations, found in the Federal Acquisition Regulation, prevent government agencies from contracting with designated individuals and entities, or require contracting officers to check potential contract awardees against lists such as the Excluded Parties List System. As part of the acquisition process, the Federal Acquisition Regulation indicates that contracts are to be awarded only to responsible prospective vendors. A contracting officer must make an affirmative determination of responsibility prior to awarding a contract. Guidance found in the CENTCOM Contracting Command Acquisition Instruction, which is intended to implement and supplement, among other regulations, the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement and to establish general contracting procedures, states that its contracting officers “shall take all practicable steps to ensure the award of all contracts to responsible contractors.” Both the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement provide a number of elements to be considered in the determination of responsibility. 
Several of these are elaborated upon in the CENTCOM Contracting Command Acquisition Instruction, including adequate financial resources to perform the contract, the ability to comply with delivery or performance schedules, a satisfactory past performance record (when part of the evaluation), and integrity and business ethics. The integrity and business ethics element requires the contracting officer to verify that a prospective awardee is not included in the Excluded Parties List System. In response to continued congressional attention and concerns from DOD, USAID, and other agencies about actual and perceived corruption in Afghanistan and its impact on U.S. and ISAF activities, several DOD and interagency (including State and USAID) efforts have been established in Afghanistan to identify malign actors, encourage transparency, and prevent corruption. These efforts include the establishment of several interagency task forces, such as Task Force 2010, an interagency anticorruption task force that aims to provide commanders and civilian acquisition officials with an understanding of the flow of contract funds in Afghanistan in order to limit illicit and fraudulent access to those funds by criminal and insurgent groups, and the Afghan Threat Finance Cell, an interagency organization that aims to identify and disrupt funding of criminal and insurgent organizations. Additionally, ISAF and U.S. agencies have established several other joint task forces, including the Combined Joint Interagency Task Force Shafafiyat. Task Force Shafafiyat works to integrate ISAF and U.S. anticorruption efforts, such as Task Force 2010 and Task Force Spotlight, which focuses on private security contracting, with those of key Afghan government and civil society partners to foster a common understanding of the corruption problem in Afghanistan. DOD Has Recently Begun to Vet Non-U.S. 
Vendors in Afghanistan, but Its Efforts Could Be Strengthened by a Risk-Based Approach CENTCOM Contracting Command Has Recently Begun to Vet Vendors in Afghanistan In 2010, DOD began to vet non-U.S. vendors in Afghanistan by establishing at CENTCOM headquarters in Tampa, Florida, a vetting cell called the Vendor Vetting Reachback Cell (vetting cell). The purpose of this vetting process—which includes the examination of available background and intelligence information—is to reduce the possibility that insurgents or criminal groups could use U.S. contracting funds to finance their operations. The vetting cell is staffed by 18 contractor employees operating from CENTCOM headquarters and is supervised by DOD officials. The contract used to establish the vetting cell for Afghanistan was awarded in June 2010, and in August 2010 the cell began vetting non-U.S. vendors. According to the CENTCOM Contracting Command Acquisition Instruction, all contract awards or options equal to or above $100,000 to all non-U.S. vendors in Iraq and Afghanistan are subject to vetting by the vetting cell. Additionally, all information technology contracts in Afghanistan, regardless of dollar value, are subject to vetting. The Acquisition Instruction suggests that although not required, all vendors should be submitted for vetting—which would include those with contracts below $100,000. According to the Acquisition Instruction, to vet a vendor, a contracting officer, generally located in Afghanistan, submits a request using a Web-based database system known as the Joint Contingency Contracting System. These requests are ultimately directed to the vetting cell located at CENTCOM headquarters in Florida for vetting. The cell vets the vendor and provides a recommendation either to approve or disapprove it, which first goes to a DOD official in Tampa for review and then is forwarded to a DOD entity in Afghanistan, which makes the final determination. 
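The vetting thresholds described in the Acquisition Instruction can be summarized in a short sketch. The function name and parameters below are our own illustration for clarity, not part of any DOD system:

```python
def vetting_required(award_value: float, is_it_contract: bool, is_us_vendor: bool) -> bool:
    """Sketch of the vetting rule described in the CENTCOM Contracting
    Command Acquisition Instruction: non-U.S. vendors are subject to
    vetting for contract awards or options equal to or above $100,000,
    and for information technology contracts at any dollar value.
    Vetting of other vendors is encouraged but not required."""
    if is_us_vendor:
        return False  # the requirement applies only to non-U.S. vendors
    return is_it_contract or award_value >= 100_000
```

Under this reading, for example, a $60,000 information technology award to an Afghan vendor would require vetting, while a $60,000 construction award would merely be encouraged for submission.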
If the final determination calls for not contracting with the vendor, the customer (e.g., the battlespace owner) can request an exception to the policy prohibiting DOD entities from awarding contracts to rejected vendors. According to the Acquisition Instruction, contracting officers should plan for the standard vetting process to take at least 14 calendar days. However, urgent vetting requests can be accomplished in 5 days. A request is considered urgent when the customer informs the contracting officer in writing that a delay will cause an operational crisis outweighing the risk of awarding to a potentially rejected contractor. After the final determination is made, the approval or disapproval status of the vendor is entered and maintained within the Joint Contingency Contracting System database. According to CENTCOM officials, the cell is currently conducting periodic re-vettings of previously vetted vendors that are under contract, which the cell will continue to do as part of its duties. Additionally, while the vetting cell is structured to be able to vet any non-U.S. vendors, the current vetting emphasis is on Afghan vendors and those from neighboring countries. The CENTCOM Vetting Cell Has Recently Begun Vetting Vendors Used by USACE USACE obligated over $1.8 billion in Afghanistan in fiscal year 2010, but until recently it did not have a process in place to routinely vet non-U.S. vendors. According to USACE officials, in fiscal year 2010, well over half of USACE contract awards and more than half of the dollars obligated went to non-U.S. vendors. USACE officials told us that recognizing the potential for overlap among vendors with which USACE and CENTCOM Contracting Command are contracting in Afghanistan, CENTCOM Contracting Command requested that USACE send a list of its most frequently used prime vendors to be vetted, beginning in January 2011. 
USACE officials told us that while CENTCOM Contracting Command has asked for a list of the most frequently used prime vendors as well as major subcontractors, it specifically asked USACE to stagger the submission of vendor names so as not to overwhelm the vetting cell. While USACE officials told us that some prime contractor names have been submitted, it is unclear when any subcontractor vendor names will be submitted for vetting. According to USACE officials, CENTCOM Contracting Command made this request, in part, because at the time USACE did not use the Joint Contingency Contracting System database, and as such CENTCOM Contracting Command personnel bore the burden of entering all USACE vendor data into the database. USACE officials told us that although USACE has not previously used the Joint Contingency Contracting System to track contracts and vendors, it has begun to train personnel, both in Afghanistan and in the United States, to use the database. Once this training is complete, USACE expects to have approximately 50 personnel available who could enter vendor information into the database, which USACE officials expect will relieve the burden of data entry on CENTCOM Contracting Command personnel. Vendor Vetting Process Faces Limitations Vetting Cell Does Not Routinely Vet Vendors below $100,000 Threshold The CENTCOM Acquisition Instruction requires that non-U.S. vendors competing for awards equal to or above $100,000 be vetted by the vetting cell. The Acquisition Instruction also encourages the vetting of prospective vendors competing for contracts below $100,000, but these contracts are not routinely vetted, and CENTCOM could not provide us with the specific number of vendors below the threshold that have been vetted to date. In Afghanistan, a significant portion of the new contracts awarded and options exercised by CENTCOM Contracting Command in fiscal year 2010 were below the $100,000 threshold. 
According to CENTCOM Contracting Command officials, with the increased focus on local contracting, the number of contracts below the threshold is expected to grow. See table 1 for a breakdown of the number and total obligated value of new contracts and blanket purchase agreements awarded and options exercised in fiscal year 2010, where the vendor was a non-U.S. vendor, at or above and below the $100,000 threshold. This table shows that although more money is obligated to contracts and options at or above $100,000, there may be many more contracts awarded and options exercised below the $100,000 threshold. Additionally, USFOR-A and CENTCOM officials told us it is possible that the same contractor may have multiple contracts with them that, taken individually, fall below the $100,000 mark but, when viewed collectively, could meet or exceed the $100,000 threshold. FPDS-NG is the federal government's primary data system for tracking information on contracting actions. While FPDS-NG is known to have some limitations, we have tried to mitigate any potential issues by relying on more recent data and by using more than one data element in our analysis. For further information on FPDS-NG, please see GAO, Defense Contracting: Enhanced Training Could Strengthen DOD's Best Value Tradeoff Decisions, GAO-11-8 (Washington, D.C.: Oct. 28, 2010), and Federal Contracting: Observations on the Government's Contracting Data Systems, GAO-09-1032T (Washington, D.C.: Sept. 29, 2009). According to DOD contracting officials and supervisors of the vetting cell, the contract terms do not specifically exclude vendors below the dollar threshold from what the cell can vet. Further, CENTCOM Contracting Command officials stated that if a contracting officer or his or her representative knows of a specific prospective vendor holding or competing for numerous contracts below the threshold, officials are free to recommend that the vendor be vetted. 
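The officials' concern about vendors holding multiple sub-threshold contracts can be illustrated by aggregating contract values per vendor. The vendor names and dollar figures below are hypothetical, not drawn from the report:

```python
from collections import defaultdict

THRESHOLD = 100_000  # the vetting threshold from the Acquisition Instruction

# Hypothetical contract records: (vendor name, obligated value in dollars)
contracts = [
    ("Vendor A", 60_000),
    ("Vendor A", 55_000),   # individually below, collectively above the threshold
    ("Vendor B", 40_000),
    ("Vendor C", 120_000),  # above the threshold on its own
]

# Sum obligations per vendor
totals = defaultdict(int)
for vendor, value in contracts:
    totals[vendor] += value

# Vendors whose combined awards meet or exceed the threshold
flagged = sorted(v for v, t in totals.items() if t >= THRESHOLD)
print(flagged)  # ['Vendor A', 'Vendor C']
```

In this sketch, Vendor A would never be caught by a per-contract dollar test, even though its combined awards ($115,000) exceed the threshold, which is the gap the USFOR-A and CENTCOM officials described.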
Officials also stated that they are currently considering the vetting of non–information technology vendors that fall below the dollar threshold. However, there is no policy or guidance for this; any such vetting would be conducted on an ad hoc basis. And while CENTCOM Contracting Command officials have stated that vetting additional prospective vendors would more fully address potential risks, they have expressed concern that available vetting cell capacity may not be able to accommodate a large increase should vendors below the threshold be included. Vetting Cell Does Not Routinely Vet Subcontractors Currently, CENTCOM Contracting Command does not routinely vet subcontractor vendors—even when the value of a subcontractor’s work exceeds the $100,000 threshold. Officials from multiple DOD contracting entities with whom we spoke said that subcontractors conduct much of the work in Afghanistan, with some contracts having multiple tiers of subcontractors. For example, USACE contracting officials stated that prime vendors that are awarded large construction contracts often use multiple subcontractor tiers in Afghanistan, and officials recognize that given the high dollar value of their contracts, a significant risk is introduced at the subcontractor level. In addition, officials from USFOR-A stated that the Host Nation Trucking contract—the contract by which most of the goods needed to support U.S. warfighters are transported throughout Afghanistan—utilizes multiple tiers of trucking and security subcontractors. In September 2010, ISAF/USFOR-A released additional COIN contracting guidance that directs officials to gain more visibility over the networks of subcontractors in Afghanistan. The guidance further states that officials are to contract with vendors that have fewer subcontractors since excessive subcontracting can provide opportunities for criminal networks and insurgents to divert contract money from its intended purpose. 
USACE contracting officials stated that they plan to submit major subcontractors through CENTCOM Contracting Command’s vendor vetting process, though officials did not know when this would occur or what number of subcontractors the vetting cell would be able to support. As with the dollar threshold, CENTCOM officials stated that while the vendor vetting cell contract does not specifically preclude officials from submitting subcontractors to be vetted, the cell was not designed, in terms of its number of staff, to vet subcontractors. However, contracting officials who administer the vetting cell contract, as well as vetting cell officials who conduct the work, stated that the contract was created with the flexibility to enable a reallocation of staff between the Iraq and Afghanistan cells if CENTCOM Contracting Command wanted to vet vendors below the $100,000 threshold and to vet subcontractors. Contracting officials have also indicated that the lack of visibility over subcontractors impairs their ability to provide subcontractor names to the vendor vetting cell. In August 2010, in order to gain more visibility over subcontractors, CENTCOM Contracting Command issued Policy Memorandum No. 10-09, which directs that effective August 31, 2010, contracting officers must make a subcontractor responsibility determination in writing, when the prime contractor identifies that it intends to subcontract a portion of the contract, regardless of a contract’s dollar value. Vetting Cell’s Requirements and Resources Not Clearly Defined When CENTCOM Contracting Command established the vendor vetting cell for Afghanistan, it did so without clearly defining the command’s requirements. According to CENTCOM Contracting Command officials, the requirements in the contract that established, staffed, and resourced the Afghanistan vetting cell were defined with the intention of determining a non-U.S. vendor’s eligibility to be awarded a contract in Afghanistan prior to award. 
However, according to command officials, the vetting cell has been focused on vetting vendors that have already been awarded contracts. According to CENTCOM Contracting Command officials, they began vetting vendors who had already received contracts in order to address immediate corruption and illicit funding concerns. As of March 12, 2011, CENTCOM Contracting Command officials stated that a total of 248 vendors, most of which are on existing contracts, had been vetted, 19 of which had been rejected. Additionally, officials added that the most recent output average is 15 vendors vetted per week and that contracts valued at $100,000 or more were awarded to 1,042 Afghan vendors in fiscal year 2010. At the current average of 15 vets per week, it would take another 53 weeks, or until late March 2012, just to complete the vetting of host-nation vendors with contracts of $100,000 or more awarded in fiscal year 2010. Furthermore, the number of vendors awarded contracts prior to vetting continues to grow as contracts continue to be awarded in Afghanistan by CENTCOM Contracting Command during fiscal year 2011. As of April 2011, CENTCOM Contracting Command had not determined how many of the remaining non-U.S. vendors that have already been awarded contracts valued above $100,000 will be vetted in the future, a timeline for when it will begin vetting vendors prior to award, or an estimated number of prospective vendors that will be vetted for the remainder of the fiscal year. As we have previously reported, without a sufficient understanding of projected needs, it is difficult to define accurate requirements, which can result in diminished operational capability. Further, leading federal management practices for improving performance state that when planning activities, defined goals, such as desired output, must be linked with resources in order to effectively and efficiently achieve results. 
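The backlog arithmetic above can be expressed as a simple calculation. The figures (1,042 Afghan vendors awarded contracts of $100,000 or more in fiscal year 2010, 248 vendors vetted, and an average of 15 vets per week) come from the report; the function itself is purely illustrative:

```python
from math import ceil

def weeks_to_clear_backlog(awarded, already_vetted, vets_per_week):
    """Estimate the weeks needed to vet the remaining vendors at a fixed weekly rate."""
    remaining = awarded - already_vetted
    return ceil(remaining / vets_per_week)

# Figures as reported as of March 2011: (1042 - 248) / 15 ≈ 53 weeks
print(weeks_to_clear_backlog(1042, 248, 15))  # → 53
```

Note that this estimate holds only if no new contracts are awarded; as the report observes, the pool of unvetted vendors continues to grow during fiscal year 2011.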
Since the backlog of vendors not vetted continues to grow, it is uncertain how the current vetting process and existing resources will absorb the addition of other existing non-U.S. vendors, prospective CENTCOM Contracting Command vendors, and vendors from other contracting commands, such as the January 2011 addition of some USACE contracts. CENTCOM Contracting Command Considers Other Factors in Prioritizing Vetting Needs but Has Not Formalized or Documented a Risk-Based Approach CENTCOM Contracting Command and other contracting officials stated that it would be beneficial to include certain contracts below $100,000 and large subcontractors in its vetting process. We have previously reported that a risk-based approach can help DOD and other executive agencies strategically allocate resources to achieve desired outcomes, including those for contract oversight, and DOD has also recognized the usefulness of such an approach to effectively use existing resources in its acquisitions. For example, we reported that dollar value alone may not be a good proxy for risk for every type of contract and that other factors could also be used to identify potential risk, such as the characteristics of the activity being performed, the location, or the type of contract. CENTCOM Contracting Command officials stated in February 2011 that because of their concerns regarding the vetting cell's capacity, as well as their desire to use the vetting cell resources efficiently and immediately, they prioritized the first tranche of vendors vetted based on a variety of factors in addition to the dollar threshold and vendor type given in the Acquisition Instruction. 
Specifically, officials stated that the first set of vendors vetted were drawn from contracts performed in Kandahar province, which is generally accepted as a Taliban stronghold; high-value and high-risk contracts, such as private security contracts; complex contracts, such as the Host Nation Trucking contract; and some high-value construction projects in certain high-threat regions. Officials stated that they intend to formalize and document this prioritization approach, for example, in a set of standard operating procedures or white paper; however, these documents have not yet been completed, and officials could not provide any further information. Utilizing a risk-based approach to identify high-risk vendors below the $100,000 threshold, as well as subcontractors, could enable CENTCOM Contracting Command to expand its ability to prevent contracts from going to criminal or insurgent groups within existing resource constraints, particularly as CENTCOM Contracting Command balances vetting existing contracts, those prior to award, and vendors from other commands. For instance, while officials have stated that USACE's subcontractors pose a large risk because of the high value of their construction contracts, they stated that some of the larger subcontractors are prime vendors for other projects, and many of the USACE subcontractors are also used by CENTCOM Contracting Command, either as prime contractors or subcontractors. USACE officials also stated that as of February 2011, their hope is that their large subcontractors that are not vetted through their roles as prime contractors will be submitted to the CENTCOM vetting cell soon, and that USACE aims to decrease the data entry burden on CENTCOM Contracting Command by beginning to use its own personnel to enter information into the Joint Contingency Contracting System. 
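The kind of risk-based prioritization officials described can be illustrated with a toy scoring scheme. This is a minimal sketch only: the factor categories (high-threat location, contract type, dollar value, subcontractor status) mirror those named above, but the weights, province and contract-type lists, and capacity cutoff are assumptions for illustration, not CENTCOM's actual method:

```python
# Assumed lists and weights, for illustration only.
HIGH_THREAT_PROVINCES = {"Kandahar", "Helmand"}
HIGH_RISK_TYPES = {"private security", "trucking", "construction"}

def risk_score(vendor):
    """Score a vendor on the factor categories described in the report."""
    score = 0
    if vendor["province"] in HIGH_THREAT_PROVINCES:
        score += 3  # assumed weight for high-threat location
    if vendor["contract_type"] in HIGH_RISK_TYPES:
        score += 2  # assumed weight for high-risk contract type
    if vendor["value_usd"] >= 100_000:
        score += 2  # the report's dollar threshold
    if vendor["is_subcontractor"]:
        score += 1  # subcontractors introduce additional risk
    return score

def prioritize(vendors, capacity):
    """Return the highest-scoring vendors, up to the vetting cell's capacity."""
    return sorted(vendors, key=risk_score, reverse=True)[:capacity]
```

The point of such a scheme is that a fixed weekly vetting capacity is spent on the vendors judged riskiest across several factors, rather than on dollar value alone.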
However, as of February 2011, CENTCOM Contracting Command and USACE officials could not specify when USACE will begin submitting subcontractors for vetting because of CENTCOM Contracting Command's questions regarding the vetting cell's capacity, and to date CENTCOM has no plans to begin routinely vetting its subcontractors. USAID Has Begun to Develop a Vendor Vetting Process, but State Has Not USAID Has Recently Established a Unit to Vet Non-U.S. Implementing Partners in Afghanistan, Though Details of the Process Have Not Been Finalized In January 2011, in order to counter potential risks of U.S. funds being diverted to support criminal or insurgent activity, USAID created a process for vetting prospective non-U.S. contract and assistance recipients (i.e., implementing partners) in Afghanistan, which is similar to a vetting process it has used in the West Bank and Gaza since 2006. Previously, as of October 2010, USAID officials indicated that they expected to use the CENTCOM Contracting Command vetting cell to vet potential non-U.S. implementing partners—whether through a formal interagency agreement, shared system or platform, or some other information-sharing arrangement. At the time, officials expressed that they wanted to have one consistent U.S. government approach for vetting non-U.S. vendors in Afghanistan to ensure that no USAID implementing partners engage in or support criminal or insurgent groups with contract or other assistance funds. As illustrated in table 2, in fiscal year 2010 USAID reported 114 new contracts and other awards to U.S. partners valued at over $285 million, and 126 to non-U.S. partners valued at almost $46 million. While the dollar amount USAID reported as obligated to non-U.S. partners is substantially lower than that for U.S. partners, the number of awards is higher. 
In addition, as with DOD, USAID officials said the use of subcontractors/subawardees is extensive, and the use of host-nation partners is expected to increase. According to USAID officials, the agency had long been interested in vetting its non-U.S. implementing partners in Afghanistan and, with the establishment of the CENTCOM vetting cell, USAID had been working with CENTCOM's Senior Contracting Official in Afghanistan to do so. However, in late 2010 several factors emerged that led USAID to immediately begin exploring whether the CENTCOM Contracting Command vetting cell best met its needs or, alternatively, the agency needed to establish its own vetting process. For example, USAID officials said that in October 2010 they received a report by the Afghan Threat Finance Cell that found that a certain percentage of USAID dollars were being diverted in certain Afghan provinces and in some cases funneled to insurgent groups. Additionally, in determining if CENTCOM's vetting cell could meet its needs, officials stated that they sent a test vetting through the cell and that it took nearly 3 months for the vetting cell to provide results. Once USAID began looking into the possibility of setting up a vetting unit, officials said they assessed that the agency had existing capabilities from its vetting process used in the West Bank and Gaza with which to implement a process similar to CENTCOM's without having to establish a duplicative system. According to USAID officials, given the urgent need to mitigate the issues reported by the Afghan Threat Finance Cell, the timelines experienced with the CENTCOM vetting cell, and the availability of existing vetting resources within USAID, the agency, in consultation with the Coordinating Director for Development and Economic Affairs for the U.S. Embassy in Kabul, decided that a Kabul-based USAID vetting support unit separate from CENTCOM's process would most immediately and effectively meet the agency's needs. 
USAID officials stated that in preparation for standing up the vetting support unit, the agency sent representatives from its Office of Security to observe the CENTCOM vendor vetting cell's process. According to USAID officials, after observing the CENTCOM process they concluded that USAID had the existing resources and ability to similarly vet its implementing partners within timelines that met the agency's needs. In January 2011, USAID issued a cable outlining the initial structure of its newly created vetting support unit in Afghanistan, and as of March 2011 USAID officials were in the process of drafting standard operating procedures. According to USAID officials and the January 2011 cable, the purpose of the vetting support unit is to help ensure that U.S. government funds do not support malign actors, such as insurgents, corrupt power brokers, and criminal patronage networks. The unit is to comprise an intelligence analyst and two or more permanent support staff stationed in Kabul, who would reach back to existing vetting analysts in USAID's Office of Security in Washington, D.C.; those analysts would conduct the vetting. As with the CENTCOM process, the actual vetting would take place in the United States, while information identifying the prospective non-U.S. partners would be forwarded from the support unit in Afghanistan to USAID's vetting database. If USAID analysts find derogatory information, the final decision about whether to use the partner would reside with USAID officials in Afghanistan. Although the vetting unit is currently situated within the Office of Acquisition and Assistance in Kabul, USAID officials stated that the responsibilities of the unit are more closely aligned with security-related functions rather than the formal acquisition process, and that many details of the unit are still being determined. 
As of February 2011, USAID officials stated that the vetting support unit is currently staffed with temporary personnel, and they expect the process of hiring permanent staff to be complete in 3 to 6 months. The USAID vetting process, as it is described by officials and in preliminary documentation, may have limitations that are similar to those of CENTCOM. For example, USAID's January 2011 cable indicates that there is a $150,000 award threshold for selecting potential implementing partners to vet, and USAID is still finalizing the extent to which it will vet subcontractors/subawardees. In addition, according to USAID officials, as a first step while the unit hires permanent staff, it will focus on host-nation partners when it begins vetting in April 2011. However, USAID officials indicated that the agency's vendor vetting process was still in the early stages, and it is expected to be an iterative implementation process—aspects of which could change, such as the vetting threshold and expanding vetting to other non-U.S. partners. Officials stated that ultimately, the formalized vetting criteria will likely incorporate the assessment of other risk factors, such as which province the activity is located in and local knowledge of USAID officials; however, these criteria have not yet been included in preliminary documents. In addition, in March 2011 officials noted that the vetting support unit will vet at least first-tier potential subcontractors/subawardees that have been identified as apparent recipients of awards with a value of $150,000 or more, and will likely go beyond first-tier subcontractors/subawardees for certain awards, though this has also not been finalized. 
Further, officials pointed to their experience developing and implementing USAID's vetting efforts in the West Bank and Gaza—which has included trying different monetary thresholds, as well as vetting contract recipients whose cumulative awards reach the threshold in order to capture frequently used partners—and indicated that they expect to include such considerations as they continue to develop the vetting process. As previously discussed, we have frequently reported on the value of using a risk-based approach to effectively achieve desired results. Incorporating such an approach into determining what implementing partners to vet—as USAID officials have indicated will occur but has not yet been documented—would increase USAID's ability to address the greatest risk with existing resources. State Has Not Created a Vendor Vetting Process for Afghanistan As of March 2011, State was not vetting vendors in Afghanistan. State officials told us that currently many of their contracts are awarded to U.S. prime contractors, and they award relatively few contracts to non-U.S. vendors. However, table 3 shows that based on our analysis, State does work with many non-U.S. vendors in Afghanistan, but embassy officials in Kabul told us that they do not do any vetting or background checks on the vendors other than for the security risks posed by individual personnel with physical access to the embassy property or personnel. See table 3 for a comparison between quantities of awards to U.S. vendors and those to non-U.S. vendors. Further, State has endorsed the Afghan First policy, which will likely result in increased contracting with Afghan vendors and, in turn, increase the potential for funds to be diverted to terrorist or insurgent groups. Given this potential increase in local contracting, and without a way to consider—after specific vendors are known to be candidates—the risk posed by funding non-U.S. 
vendors to perform particular activities in Afghanistan, the department may increasingly expose itself to contracting with malign actors. While State does not have a vendor vetting program, in 2008 State issued a cable that applies to both State and USAID, requiring personnel to complete a terrorist financing risk assessment for any new program or activity prior to requesting or obligating program funds. Periodic updates to the risk assessment are also completed for ongoing programs and activities, though these do not examine vendors against the same information as the CENTCOM or USAID vetting cells. The risk assessment is intended to ensure that projects and activities are not providing benefits, even inadvertently, to terrorists or their supporters, including people or organizations that are not specifically designated by the U.S. government as such but that may, nevertheless, be linked to terrorist activities. This risk assessment weighs the likelihood that a program or activity will inadvertently be funding or benefiting terrorists against the consequences of that occurring—a risk that varies greatly depending on the type and location of the program or activity. In contrast, USAID's and DOD's vendor vetting processes are intended to be conducted once a potential vendor for a specific contract or activity is known, in order to determine whether awarding to a particular entity will increase the likelihood of U.S. funds being diverted to insurgent or other criminal actors; these processes additionally draw on law enforcement and intelligence information. DOD, USAID, and State Have Not Developed a Formal Method of Sharing Vendor Vetting Information in Afghanistan Although DOD, USAID, and State likely utilize many of the same vendors in Afghanistan, the agencies have not developed a formalized process to share vendor vetting information. 
Currently, DOD and USAID officials in Afghanistan have established informal communication channels, such as biweekly meetings, ongoing correspondence, and mutual participation in working groups. Further, DOD and USAID officials said that their vetting efforts are integrally related and are complementary to the work of the various interagency task forces, such as Task Force 2010 and the Afghan Threat Finance Cell, and that their mutual participation in these task forces contributes to interagency information sharing in general and vetting results in particular. However, a formal arrangement for sharing information, such as would be included in a standard operating procedure or memorandum of agreement between DOD and USAID, has not been developed. In addition, though the U.S. Embassy also participates in various interagency task forces, such as Task Force 2010, there is no ongoing sharing of vendor vetting results, either ad hoc or formal. According to CENTCOM Contracting Command officials, the command is in the process of developing a standard operating procedure for sharing the vendor vetting results specifically with USAID, but this document has not yet been completed. Standards for Internal Control in the Federal Government highlights the importance of establishing and documenting communication and information-sharing capabilities to enable agencies to achieve their goals. In addition, prior GAO work has highlighted the importance of interagency information sharing and collaboration to achieve common outcomes. USAID and CENTCOM Contracting Command officials stated that interagency information sharing is active and effective; that ISAF, USFOR-A, and USAID are in constant communication in order to establish a common picture of ongoing vetting efforts and results; and that officials have emphasized their strong working relationships. 
Further, according to USAID officials, sharing vendor vetting results would greatly assist the agency's efforts to ensure that it is not conducting business with known malign actors in Afghanistan. However, in a workforce environment characterized by frequent personnel rotations, maintaining continuity of processes and procedures can be a challenge. Without documented, formalized procedures, DOD and USAID cannot ensure that their current information-sharing practices will endure. Further, sharing information on vetting results could be especially beneficial for State, since it currently has no plans to perform vetting of the type done by DOD and USAID for any of its non-U.S. vendors in Afghanistan. Conclusions In Afghanistan, the use of local vendors by U.S. government agencies such as DOD, USAID, and State is a key component of the COIN strategy. But awards to local vendors in Afghanistan pose particular challenges because of the potential for fraud, corruption, or the siphoning of funds to organizations hostile to U.S. forces. These concerns highlight the importance of establishing processes for mitigating the risk that malign actors could profit from U.S. government contracts. Both CENTCOM Contracting Command and USAID have established processes to vet non-U.S. vendors in Afghanistan, but these processes are time- and resource-intensive. Given these constraints, it is not feasible to vet every non-U.S. vendor that contracts with the U.S. government in Afghanistan, and it is important that vendors be selected for vetting based on a variety of factors, including the risk level for the service being provided and the risk estimate based on the geographic area in which the service is to be performed. 
Understanding the capacity and resources available to CENTCOM Contracting Command is also essential to devising an appropriate risk-based approach to effectively use the vendor vetting cell to achieve its goals with existing resources in the short term and evaluating what resources will be needed to accommodate any further increase in the workload in the future. Further, as USAID begins to finalize its vetting process, the consideration of a risk-based approach may help the agency to address limitations similar to those of the CENTCOM process. While State has not yet developed a specific vendor vetting process, given the number of non-U.S. vendors it currently uses, and as it goes forward with implementing Afghan First, the need to vet these vendors may become more acute. Given the multiagency operational environment in Afghanistan, it is imperative that U.S. efforts be coordinated and that information about malign actors be shared among all contracting parties. This information sharing may be particularly important for State because it does not currently vet its non-U.S. vendors. Otherwise, agencies may unknowingly contract with vendors that have been deemed a risk by other agencies. Recommendations for Executive Action To safeguard U.S. personnel against security risks and help ensure that resources are not used to support insurgent or criminal groups, we recommend that the Commander of U.S. Central Command direct CENTCOM Contracting Command to consider formalizing a risk-based approach to enable the department to identify and vet the highest-risk vendors—including those vendors with contracts below the $100,000 threshold—as well as subcontractors, and to work with the vendor vetting cell to clearly identify the resources and personnel needed to meet the demand for vendor vetting in Afghanistan using a risk-based approach. 
To help ensure that resources are not used to support terrorist or criminal groups, we recommend that the Director of the Office of Security and the USAID Mission Director, Kabul, Afghanistan, consider formalizing a risk-based approach that would enable USAID to identify and vet the highest-risk vendors and partners, including those with contracts below the $150,000 threshold. To help ensure that State resources are not diverted to insurgent or criminal groups, we recommend that the Secretary of State direct the appropriate bureau(s) to assess the need and develop possible options to vet non-U.S. vendors, which could include leveraging existing vendor vetting processes, such as USAID's, or developing a unique process. To promote interagency collaboration so as to better ensure that vendors potentially posing a risk to U.S. forces are vetted, we also recommend that the Commander of U.S. Central Command; USAID Mission Director, Kabul, Afghanistan; and the Coordinating Director for Development and Economic Affairs, U.S. Embassy, Kabul, Afghanistan, consider developing formalized procedures, such as an interagency agreement or memorandum of agreement, to ensure the continuity of communication of vetting results and supporting intelligence information, so that other contracting activities may be informed by those results. Agency Comments and Our Evaluation We provided a draft of this report to DOD, USAID, and State. We received written comments from all three, which we have reprinted in appendixes II, III, and IV, respectively. DOD concurred with our recommendations. In response to our second recommendation to CENTCOM to work with the vendor vetting cell to identify the resources and personnel needed to meet the demand for vendor vetting in Afghanistan, DOD provided additional clarification about the limitations that currently exist on its resources, including limitations on expanding its joint manning document and the current mandate to reduce staff at CENTCOM. 
USAID concurred with our recommendations, and in its response also noted that the GAO team's field work and draft report contributes positively to USAID/Afghanistan's efforts to implement a system to help ensure that resources are not used to support terrorist or criminal groups. State partially concurred with our recommendation that the Secretary direct the appropriate bureaus to assess the need and develop possible options to vet non-U.S. vendors. State noted in its written comments that it recognizes the risk of U.S. funds under State's management being diverted for the benefit of terrorists or their supporters, and has devoted a good deal of time to defining the issue and seeking appropriate processes to mitigate the risk of this occurring. However, State noted that significant legal concerns relating to contracting law, competition requirements, and the conflict between open competition and the use of classified databases to vet contractors and grantees have required analysis and discussion. We recognize these concerns and encourage State to continue to address the various issues as it develops and implements a vetting process. Additionally, State said that the Department of State, Foreign Operations, and Related Programs Appropriations Act for Fiscal Year 2010 (which is Division F of the Consolidated Appropriations Act, 2010, Pub. L. No. 111-117) prohibited the use of State funds to implement a partner vetting program but authorized creation of a pilot program for contractor vetting to apply to both State and USAID programs and activities. State noted that the department has assigned responsibility for developing such a pilot vetting program and has begun work on the pilot's design. We appreciate State's efforts to begin the pilot program and the need for State and USAID to act consistently with the funding restriction described above in all their vetting efforts. 
However, as we previously noted, State and USAID officials both indicated that the pilot program would not apply to Afghanistan. Additionally, based on its written comments, State is beginning to address our recommendation as it noted that Afghanistan is under active review for inclusion in a vetting effort that would apply specifically to that country. State did not comment on our recommendation that DOD, USAID, and State consider developing formalized procedures to ensure the continuity of communication of vetting results and supporting intelligence information, so that other contracting activities may be informed by those results. We are sending copies of this report to the appropriate congressional committees and the Secretaries of Defense and State and the Administrator of the United States Agency for International Development. This report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Appendix I: Scope and Methodology Under the authority of the Comptroller General of the United States, we initiated a review to identify what efforts, if any, are under way to ensure that U.S. contracting funds or resources are not diverted to support corruption or insurgent organizations. Specifically, we examined (1) the extent to which the Department of Defense (DOD) has established a process to vet non-U.S. vendors in Afghanistan, both to ensure that resources are not used to support insurgent or criminal groups and to safeguard U.S. 
personnel and assets against security risks; (2) the extent to which the Department of State (State) and the United States Agency for International Development (USAID) have established processes to vet non-U.S. vendors and other assistance recipients in Afghanistan; and (3) the extent to which vetting information is shared among DOD, State, and USAID. Because the use of host nation and regional contractors is expected to increase through various agreements, such as Afghan First, in which the United States and NATO have demonstrated a commitment to obtain products and services locally, and because of congressional interest, we focused our review on non-U.S. contractors and nongovernmental organizations. Further, legal protections, policy considerations, and business practices in the United States could constrain the U.S. government from investigating U.S. citizens, so vetting of U.S. contractors would be more constrained. To identify and examine the efforts DOD has taken to vet non-U.S. vendors in Afghanistan and the extent to which State and USAID have established processes to vet non-U.S. vendors in Afghanistan and to share this vetting information, we reviewed recent DOD, State, and USAID policies and procedures, including fragmentary orders; the recently updated November 2010 U.S. Central Command (CENTCOM) Contracting Command Acquisition Instruction, as well as a previous version; USAID's Mission Order for Afghanistan 201.03; and an April 2010 memorandum of understanding between DOD, State, and USAID relating to contracting in Iraq and Afghanistan. Additionally, we reviewed the DOD contract that establishes a vendor vetting cell in support of U.S. forces in Afghanistan and Iraq at CENTCOM headquarters in Tampa, Florida, and the contract's associated classified policies and procedures, as well as draft standard operating procedures for USAID's vetting support unit in Afghanistan. 
We do not discuss the mechanics of the vetting processes used by DOD and USAID in detail because we did not evaluate the effectiveness of the methods used by the agencies to conduct the vetting. We also reviewed a 2008 State cable that applies to both USAID and State regarding risk assessments to mitigate the threat of financing terrorism. In addition, we reviewed prior GAO and other audit agency work that was related to contract management and oversight in Afghanistan, as well as vetting. We interviewed cognizant DOD, State, and USAID officials in both Afghanistan and the United States, including DOD policy, logistics, and acquisition officials from the offices of the relevant Under Secretaries of Defense in Washington, D.C.; CENTCOM officials in the planning, logistics, and intelligence directorates, as well as representatives of the vendor vetting cell in Tampa, Florida; and USAID and State officials in Washington, D.C., responsible for contracting, procurement, and security. In Afghanistan, we interviewed a variety of DOD, United States Forces – Afghanistan (USFOR-A), and CENTCOM Contracting Command officials in Kabul, including the CENTCOM Senior Contracting Official there, and the commanders of Task Force 2010, Task Force Spotlight, and other groups. We also interviewed cognizant U.S. 
Embassy security and contracting officials and USAID security and contracting officials, all in Kabul. Additionally, we interviewed officials from USFOR-A regional contracting centers in Kabul, Camp Leatherneck, and Kandahar; U.S. Army Corps of Engineers (USACE) officials in Kandahar, as well as USACE officials in other locations via teleconference; and International Security Assistance Force contracting and security officials in Kabul and Kandahar. We also held teleconferences with contracting officials at Bagram Air Force Base and in Qatar. We retrieved contract data from the Federal Procurement Data System-Next Generation to present information about the amount of obligations for USACE and both the obligations and the number of awards above and below $100,000 for CENTCOM Contracting Command in fiscal year 2010 in Afghanistan. Additionally, we put out data calls to USAID and State for their procurement data for fiscal year 2010 in Afghanistan. We presented these data in our report to give a broad context for the scale of awards to U.S. vendors compared to those to non-U.S. vendors and the amounts obligated, and we determined the method used to gather these data to be sufficiently reliable to present the information in this context. We conducted this performance audit from May 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We visited or contacted the following organizations during our review: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Washington, D.C. 
Appendix II: Comments from the Department of Defense Appendix III: Comments from the United States Agency for International Development Appendix IV: Comments from the Department of State Appendix V: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact named above, major contributors to this report were Carole Coffey, Assistant Director; Johana Ayers; Vincent Balloon; Laura Czohara; Timothy DiNapoli; Melissa Hermes; Jason Jackson; Natasha Wilder; and Sally Williamson. In addition, Michael Shaughnessy provided legal support, Julia Kennon provided technical support, and Cheryl Weissman and Kimberly Young provided assistance in report preparation.
The Departments of Defense (DOD) and State (State) and the United States Agency for International Development (USAID) have collectively obligated billions of dollars for contracts and assistance to support U.S. efforts in Afghanistan. There are concerns that U.S. funds are being diverted to fund insurgent and criminal activity in Afghanistan. In light of these concerns, under the authority of the Comptroller General of the United States, we initiated a review to identify DOD, State, and USAID efforts to vet non-U.S. contractors and assistance recipients in Afghanistan. GAO examined (1) the extent to which DOD has established a process to vet non-U.S. vendors to ensure that resources are not used to support insurgents; (2) the extent to which State and USAID have established processes to vet vendors and assistance recipients; and (3) the extent to which vetting information is shared among DOD, State, and USAID. GAO reviewed documents and met with a variety of agency officials to address the report's objectives. While DOD's U.S. Central Command (CENTCOM) has established a vetting cell to vet non-U.S. vendors in Afghanistan to minimize the risk of insurgents or criminal groups using contracts to fund their operations, its current approach for selecting vendors to vet has gaps. For example, vendors with contracts below $100,000 are not routinely vetted. In fiscal year 2010 around three-quarters of the command's new contracts with non-U.S. vendors were below $100,000. Subcontractors are also not routinely vetted. Command officials stated that CENTCOM uses other risk factors to prioritize vendors to vet, such as contracts performed in Taliban strongholds, but these factors have not been documented. 
While officials stated that the vetting cell was created to vet vendors prior to award, CENTCOM is largely vetting vendors with existing contracts, meaning that a large number of new vendors likely have not been vetted prior to award and may have to be vetted in the future. Also, the vetting effort now includes some U.S. Army Corps of Engineers vendors. However, the vetting cell was not staffed to accommodate this workload, so it is uncertain whether its existing resources will allow it to vet vendors in a timely manner. Without accurately defining the universe of contracts that may need to be vetted, adopting a formal risk-based approach that incorporates other risk factors to identify non-U.S. vendors that pose the highest risk, and identifying the resources needed to accomplish this, it is uncertain how the vetting cell will be able to meet the additional workload and achieve its goals. In January 2011, USAID created a process intended to vet non-U.S. implementing partners in Afghanistan; however, this process may face limitations similar to those of CENTCOM's. According to USAID officials, this decision was based on the urgent need to mitigate the risks of USAID funds being diverted to insurgent groups. While USAID's process is in the early stages, it proposes to vet non-U.S. implementing partners and at least first-tier subcontractors with contracts valued at $150,000 or more. USAID officials said that they are considering changing the dollar threshold or vetting other potential assistance recipients based on risk; however, the available documentation does not identify these other risk factors. As of March 2011, State had not developed a process to vet contractor firms in Afghanistan. Since 2008, State has required that a terrorist financing risk assessment be completed for any new program or activity prior to a request for or obligation of funding. However, it does not use the same information as the CENTCOM or USAID vetting cells. 
Additionally, its use of Afghan vendors may increase under the Afghan First policy. Absent a way to consider the risk posed by non-U.S. vendors, State may not be well prepared to assess the potential for its funds to be diverted to criminal or insurgent groups. DOD and USAID share vetting information informally, but without a formal mechanism to share vetting results, the two agencies cannot ensure that their current practices will endure. Further, as State expands its use of local contractors, it will become imperative that it be part of the data sharing with DOD and USAID.
Background Medical devices encompass a wide array of products with myriad uses. A medical device can be any product used to cure, prevent, diagnose, or treat illness, provided that its principal intended purposes are not achieved primarily by chemical or metabolic action, as would be the case with a pharmaceutical. Devices range in complexity from simple tongue depressors to heart pacemakers and sophisticated imaging systems. There are more than 100,000 products in over 1,700 categories, and they cover a wide spectrum of risk. The U.S. medical device industry grew from 5,900 firms in 1980 to 16,900 firms in 1995. U.S. consumption of medical devices exceeded $40 billion in 1994. Food and Drug Administration The 1976 Medical Device Amendments to the Federal Food, Drug, and Cosmetic (FFD&C) Act gave FDA expanded responsibility for regulating medical devices in the United States. FDA’s regulatory responsibilities have three components: (1) approving new medical devices’ entry into the market; (2) monitoring device manufacturers’ compliance with FDA laws and regulations, including the good manufacturing practices (GMP) regulation to ensure continued quality control; and (3) operating a postmarketing surveillance (PMS) system to gather information about problems that could necessitate withdrawing a device from the market or taking other actions. The Office of Device Evaluation within FDA’s Center for Devices and Radiological Health is responsible for the evaluation of medical device applications. During fiscal year 1994, the Office of Device Evaluation received 16,905 submissions for review, of which it classified 10,293 as major submissions. The 1976 amendments established a three-part classification system for devices, based on the device’s level of risk and the extent of control necessary to ensure the safety and effectiveness of the device. 
Most medical devices are Class I or Class II (low and medium risk) and reach the market through FDA's premarket notification—or 510(k)—process. Under its 510(k) authority, FDA may grant clearance for the marketing of devices if it determines that they are substantially equivalent to certain devices already on the market—called predicate devices. Once FDA has made that determination, a manufacturer can begin to market the new device. High-risk, or Class III, devices enter the market through the premarket approval (PMA) process. A PMA review is more stringent and typically longer than a 510(k) review. If a manufacturer needs to test a new device in human subjects before applying for marketing approval or clearance, and if the device presents a significant health risk to subjects, the manufacturer applies to FDA for an Investigational Device Exemption (IDE) to allow use of the device in clinical studies. See appendix II for a more detailed discussion of FDA's review processes. The U.S. medical device industry values FDA's "stamp of approval" but has leveled several criticisms against FDA. The industry contends that FDA takes too long to review applications and that review time increased drastically in the early 1990s. Manufacturers maintain that FDA's review process is unpredictable and burdensome, particularly with regard to the amount and types of data they must submit. Additionally, the industry has stated that FDA is not always reasonable when it requires randomized human clinical trials to demonstrate that a device is safe and effective. European Union In 1990, the EU began to adopt a series of three directives to regulate the safety and marketing of medical devices throughout the EU. The directives specify roles in the device regulatory system for the European Commission; the governments of member states; and review and approval organizations called notified bodies, which are often private entities. 
When this system is fully in place in several years, every medical device marketed in the EU will have to carry a "CE" mark, indicating that it meets common standards of performance and safety, known as essential requirements. Devices carrying the CE mark can be marketed throughout the EU. The first EU directive, for active implantable devices, covers powered devices that remain in the human body, such as heart pacemakers. It first took effect on January 1, 1993. During a 2-year transitional period, member states could continue to implement their national laws governing these devices, and manufacturers had the choice of either seeking approval to market a device in individual countries under each country's laws or following the procedures that would allow the device to carry the CE mark and be marketed throughout the EU. As of January 1, 1995, all active implantable devices were subject to the new EU system alone. The second directive, known as the Medical Devices Directive (MDD), covers most other medical devices, ranging from bandages to hip prostheses. The MDD took effect on January 1, 1995, and its transitional period will last until June 13, 1998. The third directive, covering in vitro diagnostic medical devices, such as blood grouping reagents and pregnancy test kits, is under development and will not take effect until at least 1998. Device Review Has Different Goals in United States and European Union The U.S. and EU medical device regulatory systems share the goal of protecting public health, but the EU system has the additional goal of facilitating EU-wide trade. Another distinction between the two systems pertains to the criteria for reviewing devices. Devices marketed in the EU are reviewed for safety and for performance as the manufacturer intended; devices marketed in the United States are reviewed for safety and effectiveness. Effectiveness includes the additional standard of providing benefit to patients. 
EU System Intended to Promote Trade and Public Health One goal of the EU medical device review system is to lower trade barriers and achieve a single market throughout the EU by harmonizing member states’ regulatory controls. At the EU level, the Directorate General for Industry is responsible for implementing the medical device directives. The directives specify that a member state may not create obstacles to the marketing of a CE-marked device within its territory. The other goal of the EU system is to protect public health. Medical devices that circulate in the EU must meet the medical device directives’ essential requirements, the first one being that devices will not compromise the health and safety of patients. The responsibility for enforcing the national regulations that implement the directives in the member states lies with each country’s Department of Health. Before the inception of the EU system, the level of regulation in member states varied widely, and in some countries most medical devices were not regulated at all. Therefore, although the system was created within the context of encouraging trade, in many European countries the directives will increase the level of medical device safety regulation. The U.S. medical device regulatory system exists within a public health context. FDA’s mandate is to ensure that devices that reach the public are safe and effective. The agency has limited statutory responsibility to promote trade. Devices Must Meet Different Criteria in European Union and United States Devices marketed in the EU under the new regulatory system must conform to the essential requirements contained in the applicable medical device directive. Because the directives cover a wide range of products, the essential requirements provide broad targets for manufacturers to meet. The essential requirements are divided into two sections. 
First, the general requirements state that devices must be designed and manufactured in a way that will not compromise patient health and safety and that devices must perform as the manufacturer intended. Second, the design and construction requirements cover topics such as chemical, physical, and biological properties; labeling; radiation safety; and accuracy of measuring functions. The EU system relies greatly on recognized performance standards, which can be international, European, or national. Demonstrating that a device meets such standards is voluntary, but this is an acceptable—and often convenient—way to demonstrate that a device complies with the essential requirements. In reviewing medical device applications FDA uses the two criteria mandated by law—safety and effectiveness. For devices entering the market through the 510(k) route, the manufacturer must demonstrate comparative safety and effectiveness, that is, the new device is as safe and effective as the legally marketed predicate device. In evaluating the safety and effectiveness of a Class III device through the PMA route, FDA must determine that the application demonstrates a reasonable assurance that the device is safe and effective. To satisfy the effectiveness requirement, a device must provide beneficial therapeutic results in a significant portion of the target patient population. The U.S. criterion of effectiveness encompasses more than the European criterion of performing as the manufacturer intended; it requires the device to benefit certain patients. For example, to market an excimer laser in the United States, the manufacturer must demonstrate not only that the laser can cut tissue from the patient’s cornea, but also that the laser procedure lessens or eliminates the patient’s nearsightedness. In the EU, if the manufacturer specified that the purpose of the device was to eliminate a patient’s nearsightedness, it would have to demonstrate the validity of that claim. 
However, if the claim was restricted to the device’s ability to remove tissue in a particular way, judgment of the appropriate use of the device would be left to clinicians. In evaluating effectiveness, FDA generally reviews an individual device on its own merits. In certain situations, however, reviewers consider whether a new device is potentially less effective than available alternative therapies. FDA’s position is that the agency evaluates comparative effectiveness only when a less effective device could present a danger to the public, that is, when a device is designed to treat a disease that (1) is either life-threatening or capable of causing irreversible morbidity, or (2) is a contagious illness that poses serious consequences to the health of others. United States and EU Review Systems Structured Differently The EU gives major regulatory responsibilities to public and private bodies; in contrast FDA has sole responsibility in the United States. Both systems link the level of medical device review to the degree of control needed to ensure device safety. However, the two systems use different procedures to reach approval or clearance decisions. Public Agencies and Private Notified Bodies Have Roles in EU Device Regulation Governmental and private organizations both perform major functions in the EU system for regulating medical devices. Each member state designates a competent authority, usually in the Department of Health, which is responsible for implementing and enforcing the medical device directives in that country. The competent authority ensures that the directives are incorporated into national law, approves clinical investigations of devices, and operates the country’s reporting system for adverse incidents. Additionally, the medical device directives contain a safeguard clause. 
This clause gives the competent authority the power to withdraw an unsafe device from the market; the competent authority can be overruled by the European Commission after consultation among all of the parties concerned. (See app. III for a more detailed discussion of the safeguard clause.) The competent authority also serves as the country's liaison with the European Commission and other member states. One of the most important responsibilities of the competent authority is to designate and certify the notified bodies (NBs) located in that country. NBs are the organizations that perform conformity assessments on medical devices of medium or high risk that require the intervention of an independent organization prior to CE marking. The NBs determine whether a device conforms to the essential requirements in the relevant medical device directive. If the device is judged to be in conformance, the manufacturer may then place the CE mark on the product and market it throughout the European Union. NBs may be governmental or private entities, but most are private. In making their NB designations, competent authorities consider whether organizations meet the criteria for NBs contained in the medical device directives. These criteria include standards of competence, impartiality, and confidentiality. Competent authorities may periodically audit NBs and can withdraw NB status from an organization that does not continue to meet the criteria. The competent authority certifies that an NB is qualified to evaluate certain types of devices and to perform specific conformity assessment procedures. Some NBs have a limited certification; for example, they can evaluate only active medical devices or can perform only certain types of quality assurance reviews. Others are qualified to evaluate almost the full range of devices. 
If an NB is not competent to perform an assessment procedure that a device requires, it can subcontract with another NB or with another organization, such as a testing laboratory, to perform that part of the assessment. A manufacturer may select an NB located in any member state to assess its device. This is a contractual relationship, with the manufacturer paying a fee for the NB’s services. As of October 1995, there were 40 NBs throughout the EU. Germany and the UK had the largest number, 16 and 8, respectively. Representatives of European industry groups and public and private officials in the UK and Germany told us that manufacturers consider several factors when selecting an NB. These include the NB’s expertise and experience with specific devices and assessment procedures, language, cost, and whether the manufacturer has worked with the NB previously. FDA Regulates Devices in the United States In the United States, regulatory responsibilities rest with one government body—FDA. Currently, however, FDA is creating a pilot program to test the use of private third parties to review low- to moderate-risk devices requiring 510(k) clearance. The agency will individually review and accept third-party review organizations interested in participating in the pilot. After completing a device review, the third party will make a clearance recommendation to FDA. In contrast with the role of European NBs, the private reviewers participating in FDA’s pilot program will not have authority to make clearance decisions. FDA will retain that authority and will base its decision on the third party’s documented review. Manufacturers’ participation in the pilot will be voluntary; they may continue to opt for FDA review. Applicants that must submit clinical data on their devices will not be able to select third-party review; FDA has prepared a preliminary list of devices that may be included in the pilot. 
FDA expects that applicants that do participate will pay a fee directly to the third party to conduct the review. The pilot is scheduled to begin in mid-1996 and will operate for 2 years; during the second year FDA plans to evaluate the feasibility of using third parties to conduct timely and high-quality reviews of devices. Both Systems Link Level of Review to Device Risk Like the United States, the EU has a risk-based device classification system. The EU has four categories, however, instead of three. The manufacturer determines the appropriate class for a new device, based on classification rules in the directives. The manufacturer may also consult with the NB reviewing the device. In the United States, the manufacturer makes a claim regarding which class a device belongs in when it submits an application for FDA review. FDA, however, has final authority over the classification decision. (See app. III for a more detailed discussion of the EU classification system and app. II for a more detailed discussion of the FDA classification system.) Just as every device released in the United States must demonstrate safety and effectiveness, every device in the EU, no matter what its class, must comply with the essential requirements. In both systems, the purpose of classifying devices is to dictate the level of control the system exerts to ensure that devices comply with the respective requirements. The EU directives set out a complex array of assessment procedures that manufacturers must follow to demonstrate that a device conforms to the essential requirements. A device’s class determines the type of conformity assessment review the device must undergo, but the manufacturer is usually permitted to choose an assessment route from at least two options—often involving two general approaches. One approach is a review of the full quality assurance (QA) system that governs every phase of the manufacture of a device, from design through shipping. 
Officials of a German NB told us that one goal of a full QA review is to ensure that the manufacturer has written quality control procedures for every one of these phases and that these procedures are followed. NB reviewers conduct on-site inspections as part of this process. The other approach consists of two components. The first is a procedure called a type examination, in which the NB physically tests a prototype of the device to determine if it meets certain standards. The type examination component is paired with a limited QA review focused only on the production phase of manufacture. This review is intended to ensure the consistency of product quality. We refer to this overall approach as the type examination route. Appendix III contains a more detailed description of the different routes of conformity assessment and the assessment requirements for different device classes. The EU system includes both of these device approval routes as a compromise between member states that tended to rely on one approach or the other. For example, in the UK a voluntary oversight system had emphasized full QA system review, while the type examination approach had prevailed in Germany’s regulatory system. Both the EU and U.S. systems minimize oversight for the devices considered least risky. For EU Class I devices that do not involve a measuring function or sterile products, manufacturers may simply furnish a declaration that the device conforms to the essential requirements and maintain technical documentation that would permit review of the device. There is no NB review, but the manufacturer must register such devices with the competent authority in the country of the manufacturer’s place of business. In the United States, FDA exempts selected low-risk devices from premarket notification requirements. Manufacturers must still register their devices with FDA and must comply with GMP rules. Most new U.S. 
devices fall into Class I or Class II and are evaluated for substantial equivalence to devices already on the market. FDA determines whether a device has the same intended use and same technological characteristics as a predicate device by reviewing a 510(k) application submission. If a new device has the same intended use and technological characteristics, FDA deems it substantially equivalent to a predicate device and allows the device to be marketed. Also, if a device has new technological characteristics and FDA determines that they do not raise different questions of safety or effectiveness, FDA will find the device to be substantially equivalent. If the device has new technological characteristics and raises different questions of safety and effectiveness, the device will be found not substantially equivalent. The manufacturer can then seek approval for it through the premarket approval process. FDA requires a PMA review for most Class III devices. This is a more rigorous review because of the device’s inherent high risk or lack of established safety and effectiveness information. A multidisciplinary staff at FDA evaluates the PMA application. Nonclinical studies that the team reviews may include microbiological, toxicological, immunological, biocompatibility, engineering (for example, stress, wear, fatigue), and other laboratory or animal tests as appropriate. The team also reviews the results of any clinical investigations involving human subjects. Generally, FDA evaluates a manufacturer’s tests and does not perform its own tests on products. For a small portion of PMA reviews, FDA reviewers seek advice from an advisory panel of clinical scientists in specific medical specialties and representatives of industry and consumer groups. U.S. device manufacturers have expressed concern that FDA asks them to submit an excessive amount of data during the 510(k) review process. 
The director of FDA’s Office of Device Evaluation told us that FDA requires only what is necessary to establish that a device is as safe and effective as its predicate. She also told us that FDA has chosen to interpret the 510(k) requirements so that more devices can go through that review process rather than the longer PMA process. As a result, the agency needs enough data to demonstrate that those 510(k) devices meet the standard of substantial equivalence and do not raise new concerns regarding safety and effectiveness. EU Quality Assurance Approach Does Not Examine Individual Devices When an NB certifies a manufacturer’s full QA system, the manufacturer may be able to attach the CE mark to several related products. The philosophy behind this approach is that if a company has a good design and manufacturing system, the devices it produces will be safe and perform as the manufacturer claims. Therefore, the full QA assessment route does not require the NB to conduct individual reviews of related devices that are produced under the same QA system, although the NB can do so when the situation warrants it. The certification covers the related devices, allowing the manufacturer to market all of them without going through an additional conformity assessment. Representatives of a British industry group told us that the QA approach makes it possible to continually monitor a company without testing individual items that may not be representative of the overall quality of production. Officials who work in the EU system told us that they expect manufacturers to choose the full QA route to conformity assessment more frequently than the type examination route. This route can be particularly advantageous for larger companies. The officials believe the type examination route is more likely to appeal to smaller companies that do not produce many product lines or a company that wants to get a particular device to market before it has time to put a full QA system in place. 
The kinds of standards manufacturers must meet during European QA reviews are similar to the GMP requirements in the U.S. system. (See app. II for additional information about GMP requirements.) However, in contrast to the ability of a full QA review to stand alone as a conformity assessment route for some devices in the EU, FDA never bases a 510(k) clearance or PMA approval decision solely on a GMP inspection. Use of Clinical Trials May Expand Under EU Directives Some U.S. medical device manufacturers have raised concerns that FDA sometimes asks that a new medical device be tested in a clinical trial when the manufacturers believe that approach is inappropriate and unwarranted. They have also asserted that clinical trials can be performed more quickly in Europe. European officials told us that prior to the issuance of the EU medical device directives, Europe had very few requirements for clinical investigations. Under the new system, manufacturers may be required to provide clinical evidence that a device meets the essential requirements for safety; this evidence may come from either published scientific literature on similar devices or data from a clinical trial on the device under consideration. Implementation of the EU medical device directives may result in clinical trials being required more frequently than they had been in the past. Officials from a German NB discussed with us circumstances under which they would be likely to need data from a clinical trial to evaluate a new device under the EU directives. If the device uses an accepted technology to treat a medical indication for which use of that technology is also accepted, a clinical trial would not be necessary. If both the technology and the application are novel, however, they said they would require a clinical trial. In situations where there is a mix of novel and approved device technology and medical indication, they would need to make a judgment call. 
They said that regardless of whether a clinical trial is necessary, clinical data, based on either previous clinical trials, scientific literature, or field experience, would have to be provided. Although it is unclear how frequently European reviewers will ask manufacturers to perform clinical trials, FDA officials believe that clinical trials are often needed to establish the safety and effectiveness of devices undergoing PMA review. According to FDA, fewer than 10 percent of the medical device products FDA reviews under the 510(k) process require clinical trials. When FDA does require a clinical trial during a 510(k) review, the agency is looking for clinical confirmation that a device is as safe and effective as the legally marketed predicate device. Notified Bodies’ Independence Complicated by Dual Roles NBs carry out a regulatory function within the EU’s medical device system, but the manufacturers whose devices they review are also their clients. This raises questions about the independence of the NBs. Additionally, NB employees are subject to less comprehensive conflict-of-interest rules than are FDA device reviewers. Notified Bodies Have Client Relationship With Subjects of Review Unlike FDA, an NB is in the complicated position of both performing a public health function—and in that capacity having to answer to a governmental competent authority—and having a client relationship with the manufacturer that has hired it to review a device. NBs have a duty to ensure that medical devices that carry the CE mark conform to the EU medical device directives’ essential requirements regarding safety and performance. At the same time, however, they are in competition with each other to secure the business of manufacturers seeking assessment services. The businesses of some NBs include consulting work as well as product reviews, which can further complicate their independence. 
The director of the UK competent authority told us that if an organization has a consulting arm, his agency checks to see if the consulting function is kept separate from the conformity assessment function. Only then can it be designated as an NB. An EU official told us that he believes the European Commission needs to address this problem of potential conflict of interest for NBs. EU Reviewers Subject to Less Comprehensive Conflict-of-Interest Rules Than FDA Reviewers The EU medical device directives require the staff of NBs to be free of all pressures and inducements, particularly financial, that might influence their judgment or the results of their reviews, especially from anyone with an interest in the outcome of the review. To meet this requirement, NBs and their personnel must comply with European standards governing potential conflicts of interest. These standards are very general. Essentially, they (1) prohibit anyone involved in product testing or accreditation from having a commercial, financial, or other interest that could affect their judgment; and (2) attempt to shield laboratory and certification personnel from control by anyone with a direct financial interest in the outcomes of testing and accreditation. Key terms in the standards, such as control, direct, commercial interest, and financial interest, are not defined. Officials of NBs we visited told us that their employees are bound by international standards and that they must disclose potential conflicts of interest in connection with their assignments. One official told us that as an internal control, the staff who conduct the periodic follow-up surveillance reviews of manufacturers after the initial certification of a product or QA system are different from those who conducted the initial review. FDA employees are subject to a more comprehensive set of rules than are NB personnel. 
FDA’s rules include a substantial list of general provisions that encompass all the goals and prohibitions included in the EU rules. In addition, they include supplemental guidance on specific matters that could present conflicts of interest, for example, outside employment, stock ownership, gifts, entertainment, filing responsibilities, and political activity. The EU rules are silent on how the general rules might apply in these situations. Information Not Available to Compare Outcomes of New EU System and a Changing FDA The EU medical device system is new and not yet fully operational. Although FDA’s system has been in place for almost 2 decades, the agency’s process is in flux as managers try to respond to criticism by experimenting with streamlined procedures. It is too early to evaluate the impact of those efforts on the length of FDA’s review process. At this time there are no data on the experience of the EU device review system that permit meaningful comparison with FDA. EU Device Review System New and Still Evolving In contrast to FDA’s almost 20 years of experience in carrying out the U.S. device review program, implementation of the EU system is quite new. The only medical device directive that is fully in effect is the one for active implantable devices. The transition period for the directive that covers most devices began just 1 year ago. The system is not yet fully in operation. For example, each competent authority is supposed to establish a system for manufacturers to report adverse incidents with devices; eventually all of these national systems will be electronically linked. The UK already had an extensive voluntary system in place that it can build on, but most countries have barely begun to develop their systems. A UK official told us it will probably be a few years before an EU-wide system is in place. In the meantime officials are communicating by fax and letter when they identify problems. 
It is too early to know how some aspects of the EU system will translate from the directives into a practical working system. For example, the various competent authorities are bound by the same criteria when designating NBs, and the various NBs—both within and across individual member states—are all supposed to use the same criteria to perform conformity assessments. At present there is no way to measure whether that consistency is occurring in practice. European officials told us that experience levels among the competent authorities and NBs vary. For example, in countries that previously had a regulatory program in place, such as the UK and Germany, the competent authorities already had experience carrying out some of the functions the EU system requires of them. Similarly, some NBs have long histories of evaluating medical devices or QA systems, while others have considerably less experience. Even well-established NBs may have greater experience with particular conformity assessment routes or device categories. For example, NBs in the UK tend to have extensive experience performing full QA system reviews, and some German NBs have extensive experience with product testing. Results of FDA Initiatives to Reduce Review Time Not Yet Clear Medical device manufacturers in the United States have charged that FDA takes too long to approve new medical devices and have asserted that the review process in Europe is faster. In response to criticism about the length of its device review process, FDA is attempting to better manage and streamline its system by experimenting with different review procedures. Agency officials believe these initiatives have reduced review time, but it is too early to evaluate their impact. FDA’s management actions include the May 1994 implementation of a three-tier system of review to improve management of its workload and better link the rigor of review with a device’s level of risk. 
In addition, since December 1994, FDA has exempted close to 300 additional medical devices from premarket notification requirements and moved other devices into lower classification categories in an effort to concentrate on riskier products and reduce the regulatory burden on manufacturers. FDA is also experimenting with an expedited review process for life-sustaining and life-saving devices under which selected applications move to the front of the review queue. At least 40 devices had been reviewed under this process as of July 1995. Additionally, FDA is refusing to accept deficient or poorly prepared applications until manufacturers provide the information needed for review. We recently analyzed patterns in review time for FDA device applications submitted from October 1988 to May 1995. Review times for 510(k) applications and PMA supplements submitted in 1994 were still higher than they were in 1990 but had decreased from 1993 levels. The trend for original PMAs was less clear, in part because FDA has not yet completed the review of a large portion of those applications. No Comparable Data on Length of EU Review Process The EU does not have data on the length of its review process that can be compared with the data available about FDA’s experience. The EU system has been in effect for only a short time. Anecdotal information suggests review time may be shorter in the EU, but differences between the systems make it difficult to find comparable benchmarks. For example, NBs may have extensive interaction with manufacturers before the review process formally begins, and they sometimes perform preliminary reviews before beginning the official conformity assessment. This could make it difficult to identify the date on which the NB’s review begins. For similar reasons of lack of comparable data, it is also difficult to compare FDA’s record with the experience of individual European countries prior to initiation of the EU-wide system. 
Conclusions The EU system for regulating medical devices is not only new—it is not yet fully in place. Therefore, it is too early to evaluate its success in ensuring the safety of medical devices and bringing them to market in an efficient manner. Because the major actors in the EU system have not had sufficient time to establish a record on how they will carry out their duties, it will be some time before information is available to answer the following questions: How strictly will competent authorities oversee NBs? For example, will competent authorities rescind certifications of NBs if warranted? Will the performance of all competent authorities and NBs be of equal quality, and therefore, will public health authorities and consumers be able to have the same level of confidence in devices no matter where they are reviewed? Will the full QA system and type examination conformity assessment routes both prove to be appropriate ways to regulate devices? Will NBs maintain the necessary degree of independence from manufacturers who are their clients? How will NBs implement requirements for clinical evidence on new devices? Will an adequate postmarket surveillance system be developed? U.S. government officials who want to consider integrating features of the EU approach into the U.S. device review system will be better able to assess the value of the EU system after it accumulates several years of experience. The U.S. medical device industry has advocated giving private third parties a role in the review of medical devices, and FDA is exploring this possibility in a pilot project. Ensuring that private reviewers have the necessary independence, requisite expertise, and sufficient resources would enhance the confidence of the Congress and the American public in the integrity of the device review process. The importance of this assurance would increase if private review organizations were given the added authority of clearing new devices for marketing. 
Agency Comments FDA and European officials reviewed a draft of this report. FDA’s written comments are reproduced in appendix IV. FDA generally found the report to be accurate and complete and made a number of technical comments clarifying aspects of the agency’s review processes. We incorporated these as appropriate, basing the changes in some instances on further discussions with FDA officials. We also incorporated technical clarifications on the EU system received from European officials. In its comments, FDA stated that the EU system does not evaluate individual devices, but instead evaluates a manufacturer’s quality assurance system. As we noted in the draft report, in some situations the EU system does evaluate individual devices, such as when a manufacturer chooses the type examination route of conformity assessment or when a Class III device’s design dossier is reviewed. We will distribute this report to the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, and other interested parties. This report was prepared under the direction of Mark V. Nadel, Associate Director for National and Public Health Issues. If you or your staff have any questions, please call me at (202) 512-7119 or Bruce D. Layton, Assistant Director, at (202) 512-6837. Other major contributors to this report include Helene F. Toiv, Claude B. Hayeck, Mary W. Freeman, Michele Grgich, and Liv Gorla. Scope and Methodology For our review of the European Union’s medical device approval process, we conducted field work in Germany and the United Kingdom. These countries were ahead of most other member states in adopting the EU regulatory system into their national laws and had greater experience with implementing the new system. Additionally, over half of the notified bodies, which review and approve medical devices under the EU system, were located in these two countries. 
In Germany and the UK we interviewed government health officials responsible for medical device regulation; officials from two NBs, TÜV Product Service and the British Standards Institution; and representatives of medical device industry groups. We also interviewed EU officials and a representative of an EU-wide industry association. We reviewed EU documents governing the EU regulatory process. Several officials we interviewed reviewed a draft of this report. We reviewed Food and Drug Administration documents and policies as well as laws and regulations governing FDA. In addition, we interviewed officials from FDA’s Center for Devices and Radiological Health. We talked with representatives of the U.S. medical device industry, including the Health Industry Manufacturers Association, the National Electrical Manufacturers Association, and the Medical Device Manufacturers Association, as well as representatives of individual device companies. We also reviewed position papers of several industry groups. We interviewed representatives of organizations with expertise on product review and certification, including officials from the U.S. Department of Commerce; Underwriters Laboratories Inc.; the American National Standards Institute; and the Emergency Care Research Institute. We conducted our review from March through December 1995 in accordance with generally accepted government auditing standards. Description of Selected Aspects of FDA Processes for Regulating Medical Devices This appendix provides additional information about several features of the U.S. system for regulating medical devices and FDA review procedures. The process of bringing a new medical device to market takes one of two routes—premarket notification or premarket approval. Most new devices are variations of already marketed devices, are classified as low to moderate risk, and reach the market through FDA’s premarket notification—or 510(k)—review process. 
During the 510(k) review, FDA judges whether a device is substantially equivalent to one already on the market. The premarket approval (PMA) process is reserved for high-risk devices. PMAs and PMA supplements require a more stringent FDA review, which may include the analysis of clinical data to provide a reasonable assurance of safety and effectiveness. In addition, manufacturers must comply with certain postmarket requirements such as reporting of certain device-related adverse events. In fiscal year 1994, FDA’s Office of Device Evaluation received 6,434 510(k) applications and 415 PMAs and PMA supplements. Device Classes Medical devices are grouped into three classes according to (1) the degree of potential risk and (2) the types of regulatory control needed to reasonably ensure their safety and effectiveness. Class I devices (for example, bedpans and tongue depressors) are those for which general controls provide reasonable assurances of safety and effectiveness. Class II devices (for example, syringes and hearing aids) require special controls in addition to general controls. Class III devices (for example, heart valves and pacemakers) are subject to general controls and must undergo more rigorous scientific review and approval by FDA as well. General controls include registering device manufacturing facilities, providing FDA with regularly updated lists of marketed devices, complying with good manufacturing practices, and maintaining records and filing reports of device-related injuries and malfunctions. The Safe Medical Devices Act of 1990 (SMDA) revised the requirements for Class II devices, subjecting them to both general and special controls. Special controls include performance standards, postmarketing surveillance, patient registries, and other controls as deemed necessary. 
Class III devices are subject to the PMA process, which requires the manufacturer to present evidence, often including extensive clinical data, that there is a reasonable assurance that a device is safe and effective before placing it on the market. Triage To help assess the appropriate level of review for devices, the Center for Devices and Radiological Health in May 1994 introduced a three-level “triage” system that, within the existing classification system, assigns priorities for application review based upon the complexity and risk of the device. A tier I review is essentially a labeling review to ensure that the label correctly identifies the intended use of the device. Most Class I devices fall within tier I because a less rigorous scientific evaluation of these low-risk devices does not adversely affect the public health. A tier II review is a scientific and labeling review. This tier encompasses the majority of 510(k)s and select PMA supplements. A tier III review is an intensive scientific and labeling review, using a team review approach for devices utilizing new technology or having new intended uses. FDA convenes an advisory panel when it lacks the expertise to address questions of safety and effectiveness for devices placed in tier III or when it is otherwise appropriate to obtain advice on scientific matters. Premarket Notification—510(k)s Most new medical devices incorporate incremental changes to devices already on the market. To clear these devices for marketing, FDA determines whether they are substantially equivalent to (that is, as safe and effective as) legally marketed predicate devices. Substantial equivalence means that a device has (1) the same intended use and same technological characteristics as the marketed device or (2) the same intended use and different technological characteristics—but is as safe and effective as the marketed device and does not raise different questions of safety and effectiveness. 
FDA initially determines whether a 510(k) submission is sufficiently complete before undertaking a substantive review. During the review, FDA determines the intended use of a device by examining the manufacturer’s proposed label statements, including statements in promotional materials that describe the device and its use. To evaluate technological characteristics, FDA reviews the physical and performance characteristics of the device, such as device design, materials used, and power source. For example, in reviewing a new pacemaker lead made of polyurethane, FDA would assess performance testing information to confirm that the new lead is substantially equivalent to the predicate (or previously approved) lead. This is necessary because differences in chemical formulations of polyurethane or differences in design and assembly can affect safety and effectiveness. In arriving at a determination, FDA reviewers may use voluntary standards and guidance about a particular device. Reviewers also commonly use earlier agency decisions on 510(k)s for similar devices. Another resource is the files of the Center for Devices and Radiological Health, such as establishment inspection and postmarketing surveillance files. These files allow reviewers to examine the reviews of similar device types and to determine what questions, if any, were raised by FDA inspectors about a particular type of device. During the review of a 510(k) application, the reviewer may determine that additional information about the device is necessary to complete the review. This additional information may be descriptive information and/or performance testing information. Descriptive information includes the intended use, physical composition, method of operation, specifications, and performance claims of the device. Performance testing information can be data from bench testing or from animal or clinical testing. 
Upon completion of the review, the Office of Device Evaluation issues a decision letter, which is then sent to the manufacturer. The letter may contain one of the following: a substantially equivalent decision, a not substantially equivalent decision, a request for additional information, or a determination that the device is exempt from a 510(k) submission. Premarket Approval As it does for 510(k)s, FDA first decides whether to accept the PMA or refuse to file it because it does not meet minimum requirements. If FDA accepts the application, a multidisciplinary staff evaluates the filed PMA. The team reviews nonclinical studies such as microbiological, toxicological, immunological, biocompatibility, animal, and engineering tests. The team also reviews the results of clinical investigations involving human subjects. During this stage, FDA prepares a critique of the scientific evidence of the safety and effectiveness of the device. During the review, FDA may, on its own initiative or if requested by the applicant, refer the PMA to an advisory committee representing the appropriate medical field for a “panel” review. FDA will request such a review when it lacks the knowledge or experience to evaluate the safety and effectiveness questions posed by the device or when it is otherwise appropriate to obtain advice on scientific matters. Problems identified in FDA’s critique of the scientific evidence can be discussed further during advisory panel meetings. The committee submits a final report to FDA, but the agency is not bound by the committee’s recommendations. The review team also checks the manufacturer’s compliance with the GMP regulation and makes a judgment about the quality controls used in the manufacture of a device. The purpose of the review is to ensure that the manufacturer is capable of producing devices of high quality. At the end of the approval review stage, FDA may take one of the following actions: Issue an order approving the PMA. 
Issue an order denying approval. Send the applicant an approvable letter indicating that FDA intends to approve the device if certain problems (for example, labeling deficiencies) are resolved. Send the applicant a not-approvable letter describing significant deficiencies in the application. Eventual approval is not precluded if the manufacturer provides an adequate response. Investigational Device Exemptions Almost all PMAs and a small subset of PMA supplements and 510(k)s require clinical trials to obtain answers to questions on safety and effectiveness. A researcher wishing to conduct a study involving human subjects to develop safety and effectiveness data for a medical device can apply to FDA for an IDE. An approved IDE application permits the use in a clinical study of a device that would ordinarily be subject to market clearance procedures. An IDE approval is needed for a significant risk device. For a nonsignificant-risk device (for example, daily wear contact lenses) investigation, the sponsor presents the proposed study to an institutional review board (IRB) along with a report of prior investigations and the investigational plan. If the IRB approves the investigation as a nonsignificant-risk study, the investigation is considered to have an approved IDE and can begin immediately. FDA is not involved in the approval process of the clinical study. If the IRB or FDA determines, however, that the proposed investigation involves a significant-risk device (for example, a heart valve), the sponsor must submit an IDE application to FDA. The application must contain an investigational plan that includes such information as the purpose of the study, a written protocol, a risk analysis and description of patient selection, a description of the device, monitoring procedures, labeling, and consent materials. An IDE application may also include data on the design of the device and data from bench and animal tests. 
FDA determines whether the study should be approved, considering such factors as whether the benefits of the investigation outweigh the risks and whether the proposed study is scientifically sound. The investigation can begin after the sponsor obtains both FDA and IRB approval for a significant-risk investigation. FDA conducts bioresearch monitoring inspections to help ensure that clinical investigations are conducted in accordance with study protocols and that the rights and safety of study participants are protected. GMP Inspections FDA determines compliance with the GMP regulation primarily through factory inspections conducted by its field staff. Section 704(a) of the FFD&C Act gives FDA authority to conduct GMP inspections of medical device manufacturers. During these inspections, FDA investigators examine facilities, records of manufacturing processes, and corrective action programs. The results provide information necessary to evaluate a firm’s compliance with the medical device GMP regulation. FDA may initiate a GMP inspection for any of several reasons. These include routine scheduling, the need to obtain data on an industry new to FDA, investigation of a consumer or trade complaint, a product defect report, an adverse reaction to a device, or a device-related death. FDA also conducts GMP inspections in conjunction with approval of products. Postmarketing Surveillance One key provision of the Safe Medical Devices Act of 1990 requires that manufacturers conduct postmarketing surveillance, such as studies to gather data on the safety and effectiveness of certain devices. This requirement applies to devices that (1) are permanent implants, the failure of which may cause serious adverse health consequences or death; (2) are intended for use in supporting or sustaining human life; or (3) present a potential serious risk to human health. FDA also has discretion to require postmarketing surveillance for other devices under certain circumstances. 
Description of Selected Aspects of European Union System for Regulating Medical Devices This appendix expands on information provided in the report about several features of the EU system for regulating medical devices. Device Classes The EU Medical Devices Directive, which covers most devices, established a four-part classification system for medical devices. The rules for classification take into account the riskiness of the device, the device’s degree of invasiveness, and the length of time the device is in contact with the body. Class I devices are generally regarded as low risk and include most noninvasive products, certain invasive products, and reusable surgical instruments. Class IIa devices are generally regarded as medium risk and include both invasive and noninvasive products, generally for short-term use. This class includes some wound dressings; certain products that channel and store blood for administration into the body; surgically invasive devices for transient or short-term use; most active therapeutic devices that administer or exchange energy; and active diagnostic devices that supply energy (other than for illumination) absorbed by the body, such as ultrasonic imagers. Class IIb devices are also regarded as medium risk, but this class covers active products therapeutically delivering energy or substances at potentially hazardous levels. Devices placed in this class include blood bags, chemicals that clean or disinfect contact lenses, surgically invasive devices for long-term use, radiological equipment, and condoms and other contraceptive devices (except for intrauterine devices, which are in Class III). Class III devices are generally regarded as high risk and include products that are used to diagnose or monitor or that come in contact with the circulatory or central nervous system, such as vascular grafts. This category also includes devices that incorporate medicinal products, such as bone-cement containing an antibiotic. 
Conformity Assessment Routes Under the EU system, the classification of a medical device governs the type of assessment procedure the manufacturer must undertake to demonstrate that the device conforms to the essential requirements in the relevant medical device directive. Generally, when an NB must perform aspects of conformity assessment, the manufacturer may choose the assessment route from two or more options. Full Quality Assurance System Review (Annex II) This type of review examines every aspect of the manufacturer’s quality assurance system, covering every phase of the manufacture of a device, from design through shipping. The phases involved in producing a new device for the market include a feasibility phase; design phase, which results in a written definition of the device; design verification, which involves creating prototypes of the device; mass production; and full market release. At each of these phases the manufacturer must ensure that it has defined the requirements for completing that phase and that the “deliverable” for that phase, such as a product design or a packaged device, is verified by qualified staff. A manufacturer choosing the full QA system route for a Class III device is also required to submit a design dossier for the NB’s review. The dossier may include specifications and performance data of the product as claimed; an explanation of how the product meets the essential requirements for safety; risk analysis, including risk control methods; electrical/mechanical/chemical constructional data, including drawings; design verification documents; and, when relevant, clinical investigation data. After certifying a manufacturer’s QA system, the NB must carry out periodic inspections to ensure that the manufacturer is continuing to implement the QA system. Additionally, the NB may pay unannounced visits to the manufacturer to check that the quality system is working properly. 
Under the full QA assessment route, the NB does not need to conduct individual reviews of related devices that are produced under the same QA system. If the NB certifies the manufacturer’s QA system, that certification covers the related devices. This practice allows the manufacturer to place a CE mark on and market all of the related devices without going through an additional conformity assessment review. Type Examination (Annex III) Type examination is a procedure in which the NB ascertains and certifies that a representative sample of the device being reviewed conforms to the essential requirements. The NB reviews documentation on the device that the manufacturer provides and conducts a product test of the device. The NB physically tests a prototype of the device to determine whether it meets certain standards. The documentation reviewed might include documentation of other product tests. Type examination is always linked with a QA review limited to the production phase of manufacture. The QA review is intended to ensure the consistency of product quality. There are three types of limited QA reviews, as follows. Product Verification (Annex IV) In this type of review, the NB must individually test every device produced or test a random sample from every production batch. (This option is also referred to as batch verification.) Few companies choose this approach because it is very expensive. Production Quality Assurance (Annex V) Under this type of review, the NB reviews the manufacturer’s QA system for the production stage of manufacturing devices, including inspection and QA techniques. The NB must carry out periodic inspections after certifying the production QA system and can pay unannounced visits to the manufacturer. Officials who work with the EU system reported to us that this is the type of production phase quality review that manufacturers select most often to complement type examination. 
Product Quality Assurance (Annex VI) The NB reviews and certifies the manufacturer’s system for inspecting and testing final products in an Annex VI review. The NB must carry out periodic inspections and can pay unannounced visits to the manufacturer. Declaration of Conformity (Annex VII) Under this procedure, which is available only for devices in Classes I and IIa, a manufacturer furnishes a declaration that a device conforms to the essential requirements and maintains technical documentation that would permit review of the device. Assessment Requirements for Device Classes The EU’s MDD specifies which conformity assessment routes each class of devices may use to demonstrate conformity with the essential requirements. Figure III.1 illustrates the assessment routes available to each device class. Class I For Class I products that do not involve measuring devices or sterilization, manufacturers may simply furnish the declaration of conformity (Annex VII) and maintain sufficient technical documentation to permit review of the device. There is no NB review, but the manufacturer must register such devices with the competent authority in the country of the manufacturer’s registered place of business. If the device has a measuring function or must be placed on the market in a sterile condition, the manufacturer is also subject to one of the assessment routes covering production quality (Annexes IV, V, or VI). The NB’s review focuses only on the measurement or sterilization aspect of the device. Class IIa The manufacturer itself may declare conformity with the essential requirements covering the design phase and choose one of the assessment routes covering production quality (Annexes IV, V, or VI). Alternatively, the manufacturer may undergo the full QA system review (Annex II). 
Class IIb The manufacturer may choose either the full QA system review (Annex II), or type examination (Annex III) plus one of the production quality reviews (Annexes IV, V, or VI). Class III The requirements are the same as for Class IIb, with two exceptions. If the manufacturer chooses the full QA system review (Annex II), it must also submit a design dossier to the NB. If the manufacturer chooses type examination (Annex III), it must choose either product verification (Annex IV) or production quality assurance (Annex V) for the production phase assessment. Product quality assurance (Annex VI) is not an option for Class III devices. The Safeguard Clause The EU’s medical device directives have a safeguard clause that requires each member state’s competent authority to withdraw from the market CE-marked devices that the competent authority finds may compromise patients’ health or safety. The competent authority must immediately inform the European Commission both that it has taken this action and of its reasons for withdrawing the device. If the Commission agrees that the action was justified, it will inform the other member states that the device has been withdrawn. If the Commission believes the withdrawal was unjustified, it informs the competent authority that made the decision and the device manufacturer of that decision. If a competent authority persists in banning a CE-marked product from its country’s market, despite the European Commission’s decision that the device belongs on the market, the Commission can bring a legal proceeding in the European Court of Justice. European officials view the safeguard clause as a last resort, not something to be invoked routinely. If member states could routinely block the sale of CE-marked devices in their countries, the EU system’s goal of facilitating EU-wide trade would be undermined. Related GAO Products Medical Devices: FDA Review Time (GAO/PEMD-96-2, Oct. 30, 1995). 
FDA Drug Approval: Review Time Has Decreased in Recent Years (GAO/PEMD-96-1, Oct. 20, 1995). Medical Technology: Quality Assurance Systems and Global Markets (GAO/PEMD-93-15, Aug. 18, 1993). Medical Technology: Implementing the Good Manufacturing Practices Regulation (GAO/T-PEMD-92-6, Mar. 25, 1992). Medical Technology: Quality Assurance Needs Stronger Management Emphasis and Higher Priority (GAO/PEMD-92-10, Feb. 13, 1992). Medical Devices: FDA’s 510(k) Operations Could Be Improved (GAO/PEMD-88-14, Aug. 17, 1988). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO compared the Food and Drug Administration's (FDA) and the European Union's (EU) systems for reviewing and approving medical devices, focusing on: (1) key differences between the two systems; (2) the outputs of the two systems; and (3) the feasibility of FDA adopting features of the EU system. GAO found that: (1) U.S. and EU medical device regulatory systems share the goal of protecting public health, but the EU system is designed to facilitate EU-wide trade; (2) while EU reviews medical devices for safety and performance, FDA reviews devices for safety, effectiveness, and benefit to patients; (3) while EU gives major medical device regulatory responsibilities to public agencies and private organizations, FDA has sole responsibility over device regulation in the United States; (4) both systems link the level of medical review to device risk, but the two systems use different procedures to reach approval or clearance decisions; (5) questions and concerns have arisen regarding possible conflicts-of-interest in the EU medical device review process because EU notified bodies carry out a regulatory function within the EU medical device system and conflict-of-interest rules for EU reviewers are less comprehensive than in the United States; (6) sufficient data does not exist on the EU medical device review system to permit meaningful comparison with FDA because the EU system is new and not yet fully operational; and (7) it is too early to evaluate the impact of new FDA streamlined review procedures.
Background Illegal drug use, particularly of cocaine and heroin, continues to be a serious health problem in the United States. According to ONDCP, drug-related illness, death, and crime cost the nation approximately $67 billion annually. Over the past 10 years, the United States has spent over $19 billion on international drug control and interdiction efforts to reduce the supply of illegal drugs. ONDCP has established goals of reducing the availability of illicit drugs in the United States by 25 percent by 2002 and by 50 percent by 2007. ONDCP is responsible for producing an annual National Drug Control Strategy and coordinating its implementation with other federal agencies. The 1998 National Drug Control Strategy includes five goals: (1) educate and enable America’s youth to reject illegal drugs as well as alcohol and tobacco; (2) increase the safety of U.S. citizens by substantially lowering drug-related crime and violence; (3) reduce health and social costs to the public of illegal drug use; (4) shield America’s air, land, and sea frontiers from the drug threat; and (5) break foreign and domestic drug supply sources. The last two goals are the primary emphasis of U.S. interdiction and international drug control efforts. These are focused on assisting the source and transiting nations in their efforts to reduce drug cultivation and trafficking, improve their capabilities and coordination, promote the development of policies and laws, support research and technology, and conduct other related initiatives. For fiscal year 1998, ONDCP estimated that about 13 percent of the $16 billion federal drug control budget would be devoted to interdiction and international drug control activities—in 1988, these activities represented about 24 percent of the $4.7 billion federal drug control budget. 
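A rough arithmetic check, using only the figures quoted above, shows that although interdiction's share of the federal drug control budget fell by nearly half between 1988 and 1998, the dollar amount devoted to it nearly doubled. This sketch is illustrative only; the rounded results are not from the report itself.

```python
# Back-of-the-envelope check on the budget shares cited in the text.
fy1998_total = 16.0   # fiscal year 1998 federal drug control budget, $ billions
fy1988_total = 4.7    # fiscal year 1988 federal drug control budget, $ billions
fy1998_share = 0.13   # share devoted to interdiction and international efforts, 1998
fy1988_share = 0.24   # share devoted to interdiction and international efforts, 1988

fy1998_amount = round(fy1998_total * fy1998_share, 2)  # dollars, $ billions
fy1988_amount = round(fy1988_total * fy1988_share, 2)

# The percentage share fell, but the absolute amount grew.
print(fy1998_amount, fy1988_amount)  # → 2.08 1.13
```

The point of the calculation is that a declining percentage of a growing budget can still mean rising absolute spending.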
ONDCP also has authority to review various agencies’ funding levels to ensure they are sufficient to meet the goals of the national strategy, but it has no direct control over how these resources are used. The Departments of State and Defense and the Drug Enforcement Administration (DEA) are the principal agencies involved in implementing the international portion of the drug control strategy. Other U.S. agencies involved in counternarcotics activities overseas include the U.S. Agency for International Development, the U.S. Coast Guard, the U.S. Customs Service, various U.S. intelligence organizations, and other U.S. agencies. Challenges in Stemming the Flow of Illegal Drugs Into the United States Over the past 10 years, the U.S. agencies involved in counternarcotics efforts have attempted to reduce the supply and availability of illegal drugs in the United States through the implementation of successive drug control strategies. Despite some successes, cocaine, heroin, and other illegal drugs continue to be readily available in the United States. According to ONDCP, the cocaine source countries had the potential of producing about 650 metric tons of cocaine in 1997. Of this amount, U.S. officials estimate that about 430 metric tons were destined for U.S. markets, with the remainder going to Europe and elsewhere. According to current estimates, about 57 percent of the cocaine entering the United States flows through Mexico and the Eastern Pacific, 33 percent flows through the Caribbean, and the remainder is moved directly into the United States from the source countries. According to ONDCP estimates, the U.S. demand for cocaine is approximately 300 metric tons per year. According to DEA, Colombia was also the source of 52 percent of all heroin seized in the United States during 1996. The current U.S. demand for heroin is estimated to be approximately 10 metric tons per year. 
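The cocaine-flow estimates above can be tabulated to show the implied tonnage moving through each route and the gap between supply destined for U.S. markets and estimated U.S. demand. This is an illustrative recombination of the figures quoted in the text, not additional data from the report.

```python
# Illustrative breakdown of the 1997 cocaine-flow estimates cited in the text.
potential_production = 650  # metric tons, cocaine source countries, 1997
to_us_markets = 430         # metric tons estimated destined for U.S. markets
us_demand = 300             # metric tons, estimated annual U.S. demand

route_shares = {"Mexico/Eastern Pacific": 0.57, "Caribbean": 0.33}
# The remainder moves directly into the United States from the source countries.
route_shares["direct from source countries"] = 1.0 - sum(route_shares.values())

route_tons = {route: round(to_us_markets * share)
              for route, share in route_shares.items()}

# Estimated flow toward U.S. markets exceeds estimated demand by ~130 metric tons.
surplus = to_us_markets - us_demand
print(route_tons, surplus)
```

The surplus of estimated flow over estimated demand helps explain why, as the testimony notes, cocaine remained readily available despite interdiction successes.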
Drug-Trafficking Organizations Have Substantial Resources, Capabilities, and Operational Flexibility A primary challenge that U.S. and foreign governments’ counternarcotics efforts face is the power, influence, adaptability, and capabilities of drug-trafficking organizations. Because of their enormous financial resources, power to corrupt counternarcotics personnel, and operational flexibility, drug-trafficking organizations are a formidable threat. Despite some short-term achievements by U.S. and foreign government law enforcement agencies in disrupting the flow of illegal drugs, drug-trafficking organizations have found ways to continue to meet the demand of U.S. drug consumers. According to U.S. law enforcement agencies, drug-trafficking organizations use their vast wealth to acquire and use expensive modern technology such as global positioning systems, cellular communications equipment, and communications encryption devices. Through this technology, they can communicate and coordinate transportation as well as monitor and report on the activities of government organizations involved in counterdrug efforts. In some countries, the complexity and sophistication of drug traffickers’ equipment exceed the capabilities of the foreign governments trying to stop them. When confronted with threats to their activities, drug-trafficking organizations use a variety of techniques to quickly change their modes of operation, thus avoiding capture of their personnel and seizure of their illegal drugs. For example, when air interdiction efforts have proven successful, traffickers have increased their use of maritime and overland transportation routes. According to recent U.S. government reports, even after the capture or killing of several drug cartel leaders in Colombia and Mexico, other leaders or organizations soon filled the void and adjusted their areas of operations.
For example, we reported in February 1998 that, although the Colombian government had disrupted the activities of two major drug-trafficking organizations, the disruption had not reduced drug-trafficking activities, and a new generation of relatively young traffickers was emerging. Obstacles in Foreign Countries Impede Counternarcotics Efforts The United States is largely dependent on the countries that are the source of drug production and drug transiting points to reduce the amount of coca and opium poppy being cultivated and to make the drug seizures, arrests, and prosecutions necessary to stop the production and movement of illegal drugs. While the United States can provide assistance and support for drug control efforts in these countries, the success of those efforts depends on the countries’ willingness and ability to combat the drug trade within their borders. Some countries, with U.S. assistance, have taken steps to improve their capacity to reduce the flow of drugs into the United States. Drug source and transiting countries face long-standing obstacles that limit the effectiveness of their drug control efforts. These obstacles, many of which are interrelated, include corruption; limited law enforcement resources and institutional capabilities; and insurgencies and internal unrest. Corruption Permeates Institutions in Countries Involved in Drug Production and Movement Narcotics-related corruption is a long-standing problem affecting U.S. and foreign governments’ efforts to reduce drug-trafficking activities. Over the years, U.S. officials have identified widespread corruption problems in Bolivia, Colombia, Mexico, Peru, and the countries of Central America and the Caribbean—among the countries most significantly involved in the cultivation, production, and transit of illicit narcotics. Our more recent reports have discussed corruption problems in the Caribbean, Colombia and Mexico. 
For example, in October 1997, we reported that the State Department had identified narcotics-related corruption in various transit zone countries in the Caribbean, including Antigua, Aruba, Belize, Dominica, the Dominican Republic, Jamaica, St. Kitts, St. Vincent, and others. We also reported that once the influence of drug trafficking becomes entrenched, corruption inevitably follows and democratic governments may be placed in jeopardy. In March 1998, the State Department reported that narcotics-related corruption problems continue in many Caribbean countries. In June 1998, we reported that persistent corruption within Mexico continued to undermine both police and law enforcement operations. Many law enforcement officers had been arrested and dismissed on corruption charges. One of the most noteworthy arrests involved General José Gutierrez Rebollo, former head of the Mexican equivalent of DEA. In February 1997, he was charged with drug trafficking, organized crime and bribery, illicit enrichment, and association with one of the leading drug-trafficking organizations in Mexico. Despite attempts by Mexico’s Attorney General to combat corruption, it continues to impede counternarcotics efforts. For example, in February 1998, the U.S. embassy in Mexico City reported that three Mexican law enforcement officials who had successfully passed screening procedures were arrested for stealing seized cocaine, illustrating that corruption continues despite measures designed to root it out. Inadequate Resources and Institutional Capabilities Limit Arrests and Convictions of Drug Traffickers Effective law enforcement operations and adequate judicial and legislative tools are key to the success of efforts to stop the flow of drugs from the source and transiting countries.
Although the United States can provide assistance, these countries must seize the illegal drugs and arrest, prosecute, and extradite the traffickers, when possible, in order to stop the production and movement of drugs internationally. However, as we have reported on several occasions, these countries lack the resources and capabilities necessary to stop drug-trafficking activities within their borders. In 1994, we reported that Central American countries did not have the resources or institutional capability to combat drug trafficking and depended heavily on U.S. counternarcotics assistance. Two years later, we said that equipment shortcomings and inadequately trained personnel limited the government of Mexico’s ability to detect and interdict drugs and drug traffickers. These problems still exist. For example, we reported in June 1998 that the Bilateral Border Task Forces, which were established to investigate and dismantle the most significant drug-trafficking organizations along the U.S.-Mexico border, face operational and support problems, including inadequate Mexican government funding for equipment, fuel, and salary supplements for personnel assigned to the units. Countries in the Caribbean also have limited drug interdiction capabilities. For example, we reported in October 1997 that many Caribbean countries continue to be hampered by inadequate counternarcotics capabilities and have insufficient resources for conducting law enforcement activities in their coastal waters. We reported that St. Martin had the most assets for antidrug activities, with three cutters, eight patrol boats, and two fixed-wing aircraft, whereas other Caribbean countries had much less. Insurgency and Civil Unrest Limit Counternarcotics Efforts Over the years, our reports have indicated that internal strife in Peru and Colombia have limited counternarcotics efforts in these countries. 
In 1991, we reported that counternarcotics efforts in Peru were significantly hampered because of the threat posed by two insurgent groups. Currently, Colombia’s counternarcotics efforts are also hindered by insurgent and paramilitary activities. In 1998, we reported that several guerrilla groups made it difficult to conduct effective antidrug operations in many areas of Colombia. Since our report, the situation has worsened. For example, during this past summer the insurgents overran a major police base that was used as a staging area for aerial eradication efforts. Efforts to Improve Counternarcotics Capabilities Some countries, with U.S. assistance, have taken steps to improve their capacity to reduce the flow of illegal drugs into the United States. For example, in June 1998, we reported that Mexico had made efforts to (1) increase the eradication and seizure of illegal drugs; (2) enhance counternarcotics cooperation with the United States; (3) initiate efforts to extradite Mexican criminals to the United States; (4) pass new laws on organized crime, money laundering, and chemical control; (5) institute reforms in law enforcement agencies; and (6) expand the role of the military in counternarcotics activities to reduce corruption. Many of these initiatives are new, and some have not been fully implemented. Colombia has also made progress in improving its counternarcotics capabilities. In February 1998, we reported that Colombia had passed various laws to assist counternarcotics activities, including money laundering and asset forfeiture laws, reinstated extradition of Colombian nationals to the United States in November 1997, and signed a maritime agreement. Obstacles Inhibit Success in Fulfilling U.S. Counternarcotics Efforts Our work over the past 10 years has identified obstacles to implementing U.S. counternarcotics efforts, including (1) organizational and operational limitations, and (2) planning and management problems.
Over the years, we have criticized ONDCP and U.S. agencies involved in counternarcotics activities for not having good performance measures to help evaluate program results. Efforts to develop such measures are currently underway. Organizational and Operational Limitations The United States faces several organizational and operational challenges that limit its ability to implement effective antidrug efforts. Many of these challenges are long-standing. Several of our reports have identified problems involving competing priorities, interagency rivalries, lack of operational coordination, inadequate staffing of joint interagency task forces, lack of oversight, and lack of knowledge about past counternarcotics operations and activities. For example, our 1995 work in Colombia indicated that there was confusion among U.S. embassy officials about the role of the offices involved in intelligence analysis and related operational plans for interdiction. In 1996 and 1997, we reported that several agencies, including the U.S. Customs Service, DEA, and the Federal Bureau of Investigation, had not provided personnel, as they had agreed, to the Joint Interagency Task Force in Key West because of budgetary constraints. In October 1997, we reported that, according to U.S. officials, the small number of aircraft and maritime assets hindered U.S. interdiction efforts in the Eastern Pacific and limited their ability to interdict commercial and noncommercial fishing vessels. We also reported in 1993 and 1997 that reduced radar capability was limiting operational successes in this region. We also reported on instances where lessons learned from past counternarcotics efforts were not known to current planners and operators, both internally in an agency and within the U.S. antidrug community. For example, in the early 1990s the United States initiated an operation to support Colombia and Peru in their efforts to curtail the air movement of coca products between the two countries.
However, U.S. Southern Command personnel stated in 1996 that while they were generally aware of the previous operation, they were neither aware of the problems that had been encountered nor of the solutions developed in the early 1990s. U.S. Southern Command officials attributed this problem to the continual turnover of personnel and the requirement to destroy most classified documents and reports after 5 years. These officials stated that an after-action reporting system for counternarcotics activities is now in place at the U.S. Southern Command. We have also reported that a key component of the U.S. operational strategy is having reliable and adequate intelligence to help plan interdiction operations. Having timely intelligence on trafficking activities is important because traffickers frequently change their operational patterns and increasingly use more sophisticated communications, making it more difficult to detect their modes of operations. ONDCP is in the process of reviewing U.S. counternarcotics intelligence efforts. Planning and Management Limitations Over the years, our reviews have identified planning and management limitations in U.S. counternarcotics efforts. Our recent reports on Colombia and Mexico have shown that the delivery of U.S. counternarcotics assistance was poorly planned and coordinated. In February 1998, we reported that the State Department did not take adequate steps to ensure that equipment included in a 1996 $40-million Department of Defense assistance package could be integrated into the U.S. embassy’s plans and strategies to support the Colombian police and military forces. As a result, the assistance package contained items that had limited immediate usefulness to the Colombian police and military and will require substantial additional funding before the equipment can become operational. We reported a similar situation in Mexico.
In June 1998, we noted that key elements of the Defense Department’s counternarcotics assistance package were of limited usefulness or could have been better planned and coordinated by U.S. and Mexican officials. For example, we reported that the Mexican military was not using the four C-26 aircraft provided by the United States because there was no clearly identified requirement for the aircraft and the Mexican military lacked the funds needed to operate and maintain the aircraft. In addition, inadequate coordination between the U.S. Navy and other Defense Department agencies resulted in the transfer of two Knox-class frigates to the Mexican Navy that were not properly outfitted and are currently inoperable. Further, Mexican Navy personnel were trained in the frigates’ operation, but these personnel may not be fully utilized until the two frigates are activated. Our work has also shown that, in some cases, the United States did not adequately control the use of U.S. counternarcotics assistance and was unable to ensure that it was used as intended. Despite legislative requirements mandating controls over U.S.-provided assistance, we found instances of inadequate oversight of counternarcotics funds. For example, between 1991 and 1994, we issued four reports in which we concluded that U.S. officials lacked sufficient oversight of aid to ensure that it was being used effectively and as intended in Peru and Colombia. We also reported that the government of Mexico had misused U.S.-provided counternarcotics helicopters to transport Mexican military personnel during the 1994 uprising in the Mexican state of Chiapas. Our recent work in Mexico indicated that oversight and accountability of counternarcotics assistance continues to be a problem. We found that embassy records on UH-1H helicopter usage for the civilian law enforcement agencies were incomplete. Additionally, we found that the U.S. 
military’s ability to provide adequate oversight is limited by the end-use monitoring agreement signed by the governments of the United States and Mexico. Importance of Measuring Performance We have been reporting since 1988 that judging U.S. agencies’ performance in reducing the supply of and interdicting illegal drugs is difficult because the agencies have not established meaningful measures to evaluate their contribution to achieving the goals contained in the National Drug Control Strategy. In February 1998, ONDCP issued its annual National Drug Control Strategy, establishing a 10-year goal of reducing illicit drug availability and use by 50 percent by 2007. In March 1998, ONDCP established specific performance effectiveness measures to evaluate progress in meeting the strategy’s goals and objectives. While we have not reviewed the performance measures in detail, we believe they represent a positive step to help gauge the progress in attaining the goals and objectives. Ways to Improve the Effectiveness of U.S. Counternarcotics Efforts We recognize that there is no easy remedy for overcoming all of the obstacles posed by drug-trafficking activities. International drug control efforts aimed at stopping the production of illegal drugs and drug-related activities in the source and transit countries are only one element of an overall national drug control strategy. Alone, these efforts will not likely solve the U.S. drug problem. Overcoming many of the long-standing obstacles to reducing the supply of illegal drugs requires a long-term commitment. Over the years, we have recommended ways in which the United States could improve the effectiveness of the planning and implementation of its current counternarcotics efforts. 
These recommendations include (1) developing measurable goals, (2) making better use of intelligence and technologies and increasing intelligence efforts, (3) developing a centralized system for recording and disseminating lessons learned by various agencies while conducting law enforcement operations, and (4) better planning of counternarcotics assistance. Mr. Chairmen, this concludes my prepared testimony. I would be happy to respond to any questions.
GAO discussed the U.S. counternarcotics efforts in the Caribbean, Colombia, and Mexico, focusing on the: (1) challenges of addressing international counternarcotics issues; and (2) obstacles to implementing U.S. and host-nation drug control efforts. GAO noted that: (1) its work over the past 10 years indicates that there is no panacea for resolving all of the problems associated with illegal drug trafficking; (2) despite long-standing efforts and expenditures of billions of dollars, illegal drugs still flood the United States; (3) although U.S. and host-nation counternarcotics efforts have resulted in the arrest of major drug traffickers and the seizure of large amounts of drugs, they have not materially reduced the availability of drugs in the United States; (4) a key reason for the lack of success of U.S. counternarcotics programs is that international drug-trafficking organizations have become sophisticated, multibillion-dollar industries that quickly adapt to new U.S. drug control efforts; (5) as success is achieved in one area, the drug-trafficking organizations quickly change tactics, thwarting U.S. efforts; (6) other significant long-standing obstacles also impede U.S. and source and transit countries drug control efforts; (7) in the drug-producing and -transiting countries, counternarcotics efforts are constrained by corruption, limited law enforcement resources, institutional capabilities, and internal problems such as insurgencies and civil unrest; (8) moreover, drug traffickers are increasingly resourceful in corrupting the countries' institutions; (9) some countries, with U.S. 
assistance, have taken steps to improve their capacity to reduce the flow of illegal drugs into the United States; (10) among other things, these countries have taken action to extradite criminals, enacted legislation to control organized crime, money laundering, and chemicals used in the production of illicit drugs, and instituted reforms to reduce corruption; (11) while these actions represent positive steps, it is too early to determine their impact, and challenges remain; (12) U.S. counternarcotics efforts have also faced obstacles that limit their effectiveness; (13) these include: (a) organizational and operational limitations; and (b) planning and management problems; (14) over the years, GAO has reported on problems related to competing foreign policy priorities, poor operational planning and coordination, and inadequate oversight over U.S. counternarcotics assistance; (15) GAO has also criticized the Office of National Drug Control Policy and U.S. agencies for not having good performance measures to evaluate results; and (16) GAO's work has identified ways to improve U.S. counternarcotics efforts through better planning, sharing of intelligence, and the development of measurable performance goals.
Background SEC introduced its ARP program in 1989 because of capacity and other problems in the exchanges’ and clearing organizations’ information systems. The program resulted from SEC’s November 1989 policy statement that noted that many exchanges and other organizations experienced problems in their systems during the high trading volumes that occurred in October 1987 and again in October 1989. This policy statement also cited disasters, such as fires or earthquakes, that required exchanges to implement their contingency planning procedures. Since the ARP program was created, exchanges, clearing organizations, and the systems that link the stock and options markets have continued to periodically experience capacity-related problems or other disruptions. Under the ARP program, SEC called on the SROs to ensure that the information technology systems they use to conduct market operations have adequate processing capacity for current and future estimated trading volumes. In addition, SEC sought assurances that SROs were taking steps to assess the risk to their operations from internal and external threats, such as unauthorized use, computer vandalism, or computer viruses. The first ARP policy statement called on the SROs to establish capacity planning procedures to estimate current and future information system capacity needs and to periodically conduct capacity stress tests. In addition, the statement recommended that the SROs have assessments performed of their systems capacity and their vulnerability to physical threat. In a second policy statement issued in May 1991, SEC provided more specific guidelines to the SROs that identified five primary areas it expected the SROs to have reviewed, including the general controls and security relating to computer operations and facilities, telecommunications, systems development, capacity planning and testing, and contingency planning.
The ARP program is administered by staff in SEC’s Office of Technology and Enforcement within the Division of Market Regulation. Scope and Methodology To determine the adequacy and completeness of the criteria SEC uses to conduct capacity and security oversight, we compared the criteria with guidance issued by other financial regulators and organizations that have developed standards for auditing information systems, including the information security manual we developed for use by federal agencies. We also used a list of criteria we developed based on the procedures recommended in a publication written by experts in the field of capacity planning for information systems and on the findings from our prior reports or testimonies that address automation issues in the securities markets. In addition, we reviewed SEC inspection work plans that the ARP staff uses to conduct on-site inspections and held discussions with SEC staff on the criteria that they use to conduct their oversight. To determine the scope and frequency of the ARP on-site inspections, we obtained from SEC a list of on-site inspections conducted between 1995 and June 2001 of 27 SROs and electronic communication networks (ECNs). We reviewed a total of 11 SEC ARP inspection reports that addressed capacity planning or security-related issues, including the written reports and supporting work papers on ARP inspections of 7 SROs, which included the most active exchanges as well as some of the smaller exchanges, and just the written reports for 4 other SROs. We discussed our observations of these reviews with ARP staff. To determine the scope and frequency of the independent reviews, we examined certain recent audit reports and a summary of audit reports prepared by SEC ARP staff. We also discussed the results of our assessment of these reports with SEC staff. Specifically, we reviewed copies or summaries of 37 reviews done by SRO internal audit staff for the 3 largest SROs in 2000.
In addition, we examined seven independent reviews of five SROs performed by external organizations that were included in the supporting work papers of the SEC inspections we reviewed. To address how the voluntary nature of the ARP programs affects SEC oversight capabilities, we reviewed various documents prepared by the ARP staff. The documents included analyses of SRO systems, on-site inspection reports, a printout of SEC staff’s database of the status of recommendations made during inspections, and oversight work plans. We conducted this work in Washington, D.C., from November 2000 to June 2001 in accordance with generally accepted government auditing standards. SEC Uses a Wide Range of Criteria but Lacks a Consolidated Guide for Planning and Conducting Inspections To plan and conduct inspections and other oversight activities, the ARP program uses criteria from a variety of sources that address aspects of capacity planning, security, and other information system issues. The second ARP policy statement discussed five primary areas that SEC expected the SROs to address regarding their information systems. Using these areas, the ARP staff worked with SROs to develop a checklist as an initial guide for use by SEC staff in conducting their on-site inspections. This checklist was also provided to the SROs in 1991 for use as part of the independent reviews of their systems. The ARP program staff told us that they regularly update the inspection checklist by consulting professional standards and guidance relating to information systems established by other regulatory, audit, or industry bodies. Figure 1 shows how SEC staff and the SROs use the various sets of guidance. In our review of SEC on-site inspection work papers, we observed instances in which ARP staff used the steps from this checklist to plan certain segments of work they would perform at individual SROs.
SEC officials said they also expect the areas examined during inspections of SRO systems to be based on the ARP policy statements as well as on industry standards for conducting systems audits and the reviewers’ professional judgment. In the work papers we reviewed, we found examples of individualized checklists that ARP staff had created that incorporated steps from the 1991 checklist and other sources for use in particular inspections of SROs. SEC staff said that checklists created for past inspections are also used to plan subsequent inspections. ARP staff also described performing frequent Internet searches to monitor the latest information on standards and issues from various auditing and information system organizations, such as the Information Systems Audit and Control Association. At a minimum, SEC officials said that they expect their staff to perform these searches before each inspection. In their view, these additional information sources provide up-to-date, comprehensive criteria for assessing capacity planning, security, and other relevant systems issues. ARP program staff also explained that they use their own knowledge and experience to plan the inspections and the ongoing monitoring they conduct. They said that they also added review steps to their inspections to address any current challenges facing the SROs that would affect information systems. For example, they recently added steps to inspections to address the industry’s transition to decimal pricing as well as the Year 2000 date change. They also added steps to inspections of individual SROs when new systems are being implemented or outages occur. The Lack of a Consolidated Inspection Guide Creates the Potential for Inconsistency in Reviews Because of continuous change in technology, SEC staff need to refer to up-to-date criteria and standards to conduct their oversight. However, they lack a consolidated guide for their staff. 
The 1991 inspection checklist that the ARP staff continuously updates to serve as criteria for their inspections does not address some developments in the markets and advances in information technology. For example, SEC’s checklist addresses some security issues but does not include steps relating to intrusion detection. The 1991 checklist also does not address the increased risk of unauthorized access faced by SROs with information systems connected to the Internet. Although SEC officials explained that the SROs do not generally operate critical systems that use the Internet, some are using it to transmit information for less important systems, and others are considering or are already developing Internet-based systems. SEC’s 1991 inspection checklist is also missing some elements relating to capacity planning. For example, the checklist did not specifically address certain issues relating to volume forecasts used in capacity planning in which some SROs have had problems. In 2000, the National Association of Securities Dealers’ (NASD) transition to decimal pricing was delayed because the system NASD planned to use for decimal trading lacked sufficient capacity. A review by an external organization later found that NASD’s volume forecasts had not adequately accounted for the increasing volatility in its trading and processing volumes. Although SEC ARP staff had identified deficiencies and made recommendations to NASD to improve its capacity planning processes, we did not find volatility of trading volumes specifically addressed in SEC’s 1991 checklist or the other work plans that SEC staff prepared for the inspections we reviewed. Because the ARP program does not have a consolidated guide for its staff, the burden of maintaining consistent quality in ARP oversight falls primarily on the most experienced ARP staff. Both SEC and other regulators frequently use comprehensive guides to ensure that the rule-compliance reviews their staff perform are consistent. 
For example, staff in SEC’s Office of Compliance Inspections and Examinations use examination modules that consolidate the procedures for the reviews they expect their staff to perform consistently at the broker-dealers and other entities they review. However, without a similar consolidated guide, the ARP program staff must make continual efforts to consult numerous sources to supplement the areas not contained in SEC’s 1991 ARP materials. Conducting quality reviews of the various SROs also requires ARP program staff to have broad knowledge of relevant issues and to be aware of how market developments could affect the systems at each SRO. We found that the various work plans, risk analyses, and other documents prepared by the ARP program staff were generally thorough and addressed issues adequately. However, the level of detail and extent of documentation varied across staff members. Although the quality of SEC’s oversight depends heavily on individual staff, the ARP program has experienced considerable staff turnover. SEC officials said that the ARP program has experienced turnover rates approaching 30 to 40 percent in some years. The officials said that finding replacements is always difficult, as the salaries SEC offers for people with information system skills are not competitive with the private sector. As of June 15, 2001, 4 of the 10 ARP program staff had 2 years or less of experience, including 2 staff who had just joined the program. SEC officials said that only experienced staff prepare updated work plans and lead on-site inspections. However, the lack of a consolidated written guide could lead to inconsistency in planning and conducting inspections, given the high turnover rate of ARP staff. SEC Inspections of SROs Address Key Issues but Are Less Frequent Than SEC Staff Prefer We found that, for the most part, SEC’s on-site inspections addressed key capacity and security issues. 
However, resource limitations have prevented SEC from conducting inspections as frequently as their staff would prefer. During an on-site inspection, the ARP staff usually review SRO procedures, examine supporting documents, and hold discussions with SRO staff over the course of 4 to 5 days. During each inspection, ARP staff focus on the information system issues from the ARP guidance that are most relevant to the particular SRO. Although SEC staff do not conduct detailed steps to review all ARP issues during each inspection, most inspections begin with a presentation by the SRO, which the ARP staff told us covers all ARP issues. SEC staff also reported conducting some 1-day on-site inspections that focused on more limited issues. ARP staff then prepare a report that is later provided to the SRO’s management. Our review of the ARP on-site inspection reports and the supporting work papers addressing capacity and security issues indicated that, for the most part, these inspections addressed the key issues relating to the SROs’ procedures. We reviewed reports and supporting work papers for the most recent on-site inspections done at seven SROs and four additional inspection reports prepared between 1996 and 2000. In these documents, we found examples of detailed audit work plans that were specifically designed to address the objectives of each ARP inspection. The work papers also included documents prepared by the SROs, including their formal capacity plans and trading volume and processing load projections, which SEC staff had asked to review as part of the inspections. We also found that SEC staff had collected documents the SROs had prepared on vulnerability assessments, as well as summaries of security staff meetings. In addition, we observed instances in which SEC staff documented their reviews of the security-related steps from the review checklist. 
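The trading volume and processing load projections mentioned above rest on forecasts of peak demand. The sketch below uses hypothetical figures and an illustrative function, not any SRO's actual method, to show why a forecast that ignores volatility in daily volumes can understate the capacity needed, the weakness an external review later identified in NASD's forecasts:

```python
import statistics

def peak_volume_forecast(daily_volumes, growth_factor, sigmas=3):
    """Forecast peak processing volume for the coming period.

    A trend-only forecast scales the average volume by expected growth;
    adding a multiple of the standard deviation builds in headroom for
    the kind of volume spikes that volatile trading produces.
    """
    mean = statistics.mean(daily_volumes)
    stdev = statistics.stdev(daily_volumes)
    trend_only = mean * growth_factor
    with_headroom = (mean + sigmas * stdev) * growth_factor
    return trend_only, with_headroom

# Hypothetical daily message volumes (millions), with 40% growth expected.
volumes = [100, 95, 160, 90, 210, 105, 180]
trend, padded = peak_volume_forecast(volumes, growth_factor=1.4)
# With these volatile sample volumes, the volatility-adjusted figure is
# roughly double the trend-only one.
```

For steady volumes the two figures nearly coincide; the gap widens only as day-to-day swings grow, which is why a trend-only forecast can look adequate until volatility rises.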
The reports ARP staff prepared after conducting on-site inspections frequently contained numerous substantive recommendations to the SROs that addressed capacity planning, security, and other issues. For example, in an inspection done at one SRO, ARP staff made seven recommendations, including that the SRO increase the capacity of its systems, improve the security procedures for two major systems, and increase the frequency of disaster recovery testing. Limited Staff and Other Priorities Prevent More Frequent On-site Inspections Although SEC officials told us that the ARP program has no formal goal for the frequency of inspections, ARP staff said that they would prefer conducting on-site inspections every 12 to 18 months. However, limited staff and the need to monitor industrywide information technology initiatives have prevented them from conducting examinations this frequently. According to the information SEC provided us, SEC staff conducted 41 on-site inspections of exchange or clearing organization SROs from 1995 through June 2001. During this 6-year period, ARP staff inspected most SROs once every 2 to 3 years and addressed capacity and security issues in most of these inspections. However, at least eight of these inspections lasted only 1 day. Furthermore, over this 6-year period the total number of days that ARP staff were actually on each SRO’s premises was very limited. From the data SEC provided us, we calculated that ARP staff spent an average of 7 days on site at each SRO during this 6-year period, ranging from a total of only 4 days at the least visited SRO to 19 days at the most visited. ARP program officials explained that because of their small staff they conduct only seven or eight inspections per year. Although the ARP program had a staff of 10 as of June 15, 2001, it has had as few as 4 during some years because of generally high turnover. 
The staff also explained that they had spent a considerable amount of time addressing major industrywide initiatives, some of which spanned several years. These initiatives included preparations for the Year 2000 date change and the transition to trading using decimal instead of fractional prices. SEC officials told us that they take other steps to ensure that the SROs are adequately addressing information system issues. SEC staff meet annually with the SRO officials responsible for information systems. During these day-long annual report meetings, the SRO staff provide presentations on prior and upcoming changes to their systems and on activities relating to market events that could affect system capacities, such as decimal trading and other initiatives. SEC staff told us that these meetings allow them to question the SROs and obtain copies of relevant materials. When an SRO is subject to an on-site inspection, the officials explained that the first day is usually a presentation of the SRO staff’s annual report. Independent Reviews Mostly Conducted By SRO Internal Auditors Although SEC originally envisioned that SRO systems would also be reviewed under the ARP program by independent external organizations, SRO internal auditors now perform the majority of these reviews. The reviews now address the key areas of ARP cyclically based upon an annual risk analysis, but we were unable to determine whether all the issues are being addressed with sufficient frequency. In addition, SEC has requested reviews by external organizations when internal audits have been insufficient or when deficiencies existed in SRO systems and procedures. Most Independent Reviews Are Now Done By Internal Auditors Using a Risk- Based Approach In the 1989 policy statement announcing the ARP program, SEC called for annual independent reviews of SROs that would cover capacity planning, security, and other areas. 
SEC staff told us that at that time, SEC proposed that external organizations perform these independent reviews. However, ARP staff said that the SROs later raised concerns about the costs of implementing such reviews and the potential overlap with the SROs’ own internal audit processes. ARP staff told us that they had also identified a need to modify the independent review guidance to ensure that the reviews were of sufficient depth. As a result, SEC issued the second ARP policy statement in 1991. In addition to expanding the areas that should be reviewed at the SROs, this statement also clarified that SROs could use their internal auditors to perform the independent reviews. However, the statement noted that, if internal auditors were to be used, they should adhere to the standards set by various groups, such as the Institute of Internal Auditors and the Information Systems Audit and Control Association. In addition, SEC asked that an external organization periodically assess the SRO internal auditors’ independence, competency, and work performance. Since this change, the majority of the independent reviews are now done by the SROs’ internal audit processes, rather than by external organizations. SEC and the SROs have also agreed to a change in the type and frequency of the independent reviews. In December 1993, SEC and the SROs agreed that SRO staff would plan the independent reviews addressing the key areas identified in the ARP guidance, using a risk-based approach. Using this approach, the SROs’ internal auditors are to determine which areas should be examined by conducting a yearly risk analysis of the SRO’s information systems. This risk analysis allows the internal audit staff to develop an audit plan that identifies the critical areas that need to be reviewed that year and less urgent issues that can be deferred until a later date. 
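The annual risk-based planning cycle just described can be sketched as follows; the area names, scores, and function are hypothetical illustrations, not any SRO's actual ratings. Each year every ARP area is scored for risk, the highest-risk areas are scheduled for audit, and the rest are deferred:

```python
# Hypothetical risk scores (1-10) for the kinds of areas covered by the
# ARP guidance; real SRO risk analyses weigh many more factors.
risk_scores = {
    "capacity planning": 9,
    "security": 8,
    "vulnerability assessment": 7,
    "contingency planning": 6,
    "systems development methodology": 4,
}

def annual_audit_plan(scores, audits_per_year=3):
    """Rank areas by risk; audit the top ones this year, defer the rest."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:audits_per_year], ranked[audits_per_year:]

this_year, deferred = annual_audit_plan(risk_scores)
# this_year holds the three highest-risk areas; deferred holds the rest.
```

The design choice this illustrates is the trade-off SEC accepted: low-risk areas are revisited only when their scores rise, so an area can go unexamined for years unless something, such as SEC's own parallel risk analysis, flags it.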
Although the SROs evaluate the risks in their systems against all of the ARP areas under this approach annually, the SEC officials explained that the SROs are not expected to review all of the key areas of the ARP guidance each year. As discussed in our Federal Information System Controls Audit Manual (GAO/AIMD-12.19.6, January 1999), and elsewhere, such an approach is considered appropriate for reviews of this type. Internal Audits’ Coverage of ARP Areas Is Unclear Although SEC staff said that they were generally satisfied with the quality and scope of the reviews the SROs’ internal auditors had performed, we could not determine from the documents SEC staff prepared whether these reviews were addressing all the areas contained in the ARP guidance with sufficient frequency. To verify the adequacy of the SROs’ efforts, ARP staff said that they also perform their own risk analyses for the SROs each year. They then are to review the SROs’ risk analyses, audit plans, and past audit results to assess whether the SROs’ independent reviews are addressing the ARP guidelines appropriately. When an SRO has not addressed an issue warranting attention, SEC requests a review of that area. The ARP program staff said they were also satisfied with the internal audits because many include testing of controls and compliance with procedures. In addition, the ARP staff told us that the SROs have increased the number of internal audit staff who review information system issues and that the quality of these audits has improved over time. When the ARP program first began, some of the SROs did not have internal auditors who could review information systems, and ARP staff said that their oversight efforts have resulted in increased internal audit staffing at the SROs. According to SEC staff, in the mid-1990s two major SROs had only one internal auditor specializing in information systems issues. 
As a result of SEC staff efforts, one of these SROs gradually increased the number of information systems auditors it employs to five. Nevertheless, from our review of the SROs’ internal audits conducted during 2000, we were unable to determine whether these reviews were addressing all of the important areas in the ARP policy statements with sufficient frequency. In one analysis we reviewed, ARP staff noted that the internal audit staff at one major SRO had not reviewed at least two of the five areas specified in the ARP policy statements since 1992 and did not state when reviews had last been conducted for the other three areas. ARP staff told us that in this case, auditors for the SRO’s service vendor had reviewed at least one of the areas and information had been provided to SEC about the other area periodically. At another SRO, the SEC staff’s inspection report noted that the internal auditors had not conducted an independent review of the SRO’s capacity planning process in 8 years. ARP staff told us that they had performed reviews of this area at least twice during this period. With the pace of technological change and developments in the markets, it is unclear whether this level of attention to SRO capacity planning is sufficiently frequent or appropriate. SEC Has Called For External Reviews to Supplement Internal Audits ARP staff told us that when SRO internal auditors do not address all the issues addressed in the ARP policy statements, the ARP staff take steps to see that they do. If their analysis indicates that internal audits have not reviewed particular issues, ARP staff said that they would consider the areas not addressed as high risk for the SROs and they would try to include them in their next on-site inspection. In some cases, ARP staff request that the SROs obtain independent reviews by an external organization when internal audits have not sufficiently addressed systems issues or because of recurring systems problems at some SROs. 
We found that SEC staff had recommended in at least five recent on-site inspection reports that the SROs contract with external organizations to perform reviews of SROs’ capacity planning processes. For example, ARP staff requested an external review of NASD’s overall capacity planning process before that market announced that the system it intended to use to transmit price quotations did not have sufficient capacity to allow it to implement decimal trading by the date SEC had set for the securities markets. In a July 2000 inspection report, the ARP program staff requested that another SRO obtain a review of all aspects of its capacity planning process because that SRO’s trading volume had grown dramatically and its internal auditors had not recently addressed this process. And in 1997, after the two systems that transmit information between the stock and options markets experienced numerous delays or queues in their transmissions, ARP staff requested an external review be done of the organization that operates these systems. In March 2000, an official whose exchange relies on price data transmitted by the intermarket system for stocks told us that systems problems had caused considerable financial losses to members until its capacity was upgraded. In addition, the options exchanges and the Options Price Reporting Authority, which administers the intermarket system for options, are under an SEC order that requires them to limit the data they transmit across this system because their systems capacity is insufficient. From a review of internal audits done at three SROs during 2000, we found that the internal audits varied in both scope and depth. We reviewed 29 internal audit reports conducted at two SROs during 2000 and a summary prepared by SEC staff of eight audits done in 2000 at another SRO that uses an external organization to conduct its internal audits. 
In some cases, the audits appeared to address an important ARP area thoroughly and contained substantive recommendations, including one report that identified numerous deficiencies in an SRO’s contingency planning procedures. Some of the reports also indicated that the internal auditors had taken steps to test relevant controls over systems. However, most of the internal audit reports for the three SROs that we reviewed were limited in scope, covered only one SRO system, or made minor recommendations, such as asking that one SRO obtain the most recent version of a capacity planning software program or recommending that the staff at one SRO use only one entrance to its data center. Most of the reports we reviewed addressed security or other information system issues, such as change management processes, rather than capacity planning issues. Our review of seven reports addressing capacity and security issues that external organizations had prepared for five SROs showed that these reports generally had identified substantial deficiencies. For the most part, the SROs had obtained these reviews in response to requests by ARP staff. In one review, the external organization identified seven problems relating to an SRO’s capacity planning procedures, including finding that the SRO had not collected all the data needed for its capacity planning process, identified the applications that were generating increases in processing demand, or used a standardized forecasting approach for all systems. In addition, external audit reports recommended that SROs create formal capacity planning processes and security procedures for systems that currently lack them. The Voluntary Nature of the ARP Program Affects SEC’s Capacity and Security Oversight Because the ARP program was not established under SEC’s rulemaking authority, it lacks specific rules that SEC can use to sanction SROs for not complying. 
Although SEC staff reported that the SROs generally comply with the ARP program, we found that in some cases SROs had not implemented ARP staff recommendations and had not always created the notices and reports sought under ARP. When establishing the ARP program, SEC left open the possibility of making the program mandatory but did not establish criteria to assess the level of cooperation under the voluntary program. ARP Program Lacks Specific Rules The policy statements issued when SEC began the ARP program established voluntary guidelines for the SROs to follow regarding the capacity and security of their information systems. These guidelines called for the SROs to have independent reviews performed on their systems and to make various reports and notices to SEC. However, the program was not established under SEC’s rulemaking process. SEC officials explained that the view of the staff at the time was that any specific standards relating to information systems included in such a rule could become obsolete in a short period of time. SEC staff would then be required to seek amendments to the rule, which would also likely take considerable time and effort to complete. In their view, voluntary guidelines afford SEC staff greater flexibility. However, by issuing only voluntary guidelines, SEC staff have no specific rules to require SROs to implement key ARP recommendations or create the reports or notices called for in the policy statements and cannot sanction SROs under the ARP program for failing to do so. SEC officials said that they believed they could bring an official action against SROs whose failure to follow ARP was serious enough to represent a violation of the general requirement that exchanges maintain the ability to operate. They said, however, that SEC rarely uses such authority. 
Some Significant ARP Program Recommendations Not Being Implemented and Concerns Not Addressed ARP staff acknowledged that SROs have not addressed several significant capacity and security recommendations or concerns raised in ARP inspections. For example, we previously reported that in 1996, ARP staff recommended that NASD establish capacity alternatives to meet unexpected system demand. However, NASD has continued to experience capacity-related problems with several of its systems, disrupting the markets. For example, insufficient capacity in NASD’s price-quotation system delayed the start of decimal trading by all securities markets for 3 months and prevented NASD from fully trading in decimals for an additional 7 months. As a result, investor benefits from the reduced spreads that have resulted from decimal trading on the Nasdaq market were delayed by a total of 10 months. In addition, NASD has experienced capacity-related delays in a system that transmits orders to buy or sell shares in response to displayed price quotations. Officials from a major ECN told us in 2000 that they have experienced losses of up to $1.5 million a day because they are obligated to honor orders that arrive late through this system for shares that have already been sold to their own customers. Honoring these delayed orders can produce losses because the ECN sometimes has to execute new orders at disadvantageous prices if the price of the security has changed since the original transaction. Finally, NASD experienced trading disruptions on June 28, 2001, because the number of market participants given access to one of its systems exceeded the number the system had been programmed to handle. NASD officials said that the system was set up to handle about 90 users at once; however, by that date the number of users exceeded this figure by about 30 percent, and the system software had not been modified to account for this growth. 
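The arithmetic behind that last episode is simple to sketch. The function below is purely illustrative (NASD's actual software is not public); it only shows how a hard-coded user limit interacts with growth in demand:

```python
def capacity_gap(configured_limit, growth_pct):
    """Return the actual load after growth and the shortfall beyond the
    configured limit. Illustrative only, not NASD's actual software."""
    actual = round(configured_limit * (1 + growth_pct / 100))
    return actual, actual - configured_limit

# About 90 users configured, exceeded by about 30 percent, per NASD officials.
actual_users, shortfall = capacity_gap(90, 30)
# Roughly 117 users attempting to use a system built for about 90.
```

A system with a fixed participant limit fails abruptly rather than degrading gracefully, which is why unmodified limits of this kind tend to surface as sudden trading disruptions instead of slowdowns.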
Other important ARP recommendations and concerns that were not being implemented or addressed dealt with SROs’ security procedures, including their contingency plans for addressing physical threats or damage. In 2000, ARP staff recommended that one SRO develop and publish security policy and procedures and enforce them through a central authority, in accordance with basic industry standards. The SRO disagreed with the ARP recommendations, preferring to leave its security procedures decentralized. Another ARP staff recommendation, that one SRO develop a recovery plan for trading facilities used for two of its most actively traded securities, has been outstanding since at least 1995. Although this SRO has discussed various alternatives during this period for continuing operations in the event that its trading floor becomes unavailable, as of July 2001, its staff had still not implemented an alternative approach. In addition, although ARP program staff considered the lack of backup facilities to be a major deficiency, ARP program staff have recommended in other cases that SROs perform studies rather than take actions to resolve the deficiencies. In at least three cases, ARP staff recommended that SROs study the feasibility of establishing such facilities to avoid potentially lengthy shutdowns should their trading locations become unusable. One SRO disagreed with the recommendation, citing the costliness of maintaining such facilities, and the other SROs performed or are performing the studies. However, none has taken steps that fully address the ARP staff concerns that major physical damage to the trading floors could render these SROs unable to operate for an extended period. SROs Do Not Consistently Provide Information Although the ARP program calls for the SROs to create certain reports to SEC when outages or other disruptions occur that affect their systems, these reports were not always being made. 
As stated in the second ARP policy statement, the SROs are to report immediately to SEC any systems outage expected to last longer than 30 minutes and to report shorter outages after systems have been repaired. In addition, the second ARP statement recommended that SROs provide SEC with notices of significant system modifications. According to ARP staff, approximately 100 system outages were reported in fiscal year 2000, and for more than half of these, SEC officials said that they asked the SROs to provide analyses or other documentation of the event. SEC staff said that most SROs provide notices of outages or system modifications, but that some important outages or changes have not been reported. According to the findings from an SEC on-site inspection, one SRO lacked procedures for ensuring that notices of system modifications would be created and provided to SEC. In response, this SRO agreed to implement appropriate procedures. Another ARP inspection found that one SRO had failed to report at least six system outages during 2000. If SROs were required by SEC rule to provide SEC with notifications of significant changes to their automated systems, then the failure to have procedures in place for ensuring that notices of systems modifications are provided to SEC would likely demonstrate a weakness in the SRO’s internal controls. If the deficiency were severe enough, SEC could initiate an enforcement proceeding. In some cases, SEC staff became aware of anticipated SRO system changes from press or trade publications. For example, ARP staff learned of the proposed 1998 sale of one SRO’s options trading operations to another in a newspaper report. Although some of these instances involved proposed system changes that had not been finalized by the SROs, not knowing the most current configuration of the SROs’ systems could make planning inspections and other oversight activities more difficult for SEC staff. 
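The outage-reporting guideline in the second ARP policy statement amounts to a simple decision rule. The sketch below illustrates only that rule; the function name and return strings are ours, not part of any SEC or SRO system:

```python
def outage_report_timing(expected_outage_minutes):
    """When an SRO should notify SEC of a system outage under the
    second ARP policy statement's guideline. Illustrative only."""
    if expected_outage_minutes > 30:
        # Outages expected to exceed 30 minutes: report immediately.
        return "report immediately"
    # Shorter outages: report after the system has been repaired.
    return "report after repair"
```

Because the threshold turns on the *expected* duration, an SRO that underestimates an outage's length could lawfully defer its report, one reason the unreported outages noted above are hard to police under a voluntary program.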
SEC Has Not Developed Formal Criteria and Assessed SRO Cooperation With the ARP Program SEC stated in its initial ARP release that it would consider making the ARP program mandatory if SROs did not cooperate fully. However, SEC has yet to develop formal criteria and perform an assessment of SRO cooperation. In 1998, SEC’s Office of Inspector General reported that SEC had not indicated how it would assess compliance with the ARP program. Because of the increased importance of information technology to the functioning of the securities markets, the Inspector General’s report recommended that the agency consider making the ARP program mandatory. In response to this recommendation, ARP program staff said that they had considered the issue and determined that ARP should remain voluntary. SEC staff said that a substantial lack of cooperation with ARP would be inconsistent with an SRO’s general obligations, but they were satisfied with the extent to which SROs cooperate. Conclusions The use of information technology is pervasive in the securities industry, and the quality of the SROs’ systems is vital to the functioning of the markets. Based on our review, the ARP program provides SEC staff with some assurance that SROs are addressing capacity planning, security, and other information system issues. In addition, the ARP staff performed comprehensive and in-depth inspections of SRO systems and were actively involved in the industry’s recent completion of efforts to ready systems for the Year 2000 date change and the transition to decimal trading. Various aspects of the ARP program highlight areas in which SEC’s oversight could be strengthened to better assure that the SROs manage their critical information systems sufficiently to prevent major disruptions in the markets. 
Although SEC staff consulted an extensive array of standards and guidance to ensure that their oversight addresses relevant issues, the lack of a consolidated inspection guide for their staff means that the consistency and quality of SEC’s oversight are heavily dependent on the efforts of the individual ARP staff. A consolidated inspection guide could take the existing five ARP areas and provide additional topics that the SEC staff find are most relevant given the current state of technology in the markets. Rather than duplicating external guidance that SEC staff already use, a consolidated inspection guide could enumerate these other sources and incorporate, by reference, the specific areas that the SEC staff have found relevant to their work. Having a consolidated inspection guide for its staff would better ensure that SEC’s ARP program oversight is conducted thoroughly and consistently across its staff. This is particularly important because the program has high turnover that results in significant portions of its staff having little or no experience. SEC’s ability to oversee information system issues is also hampered by the limited resources available to the ARP program, which constrain its staff’s ability to inspect the SROs more frequently. SEC now relies largely on the SROs’ own internal auditors to review systems in detail instead of more routinely using external organizations as an independent check on the activities of the SROs, as was originally envisioned under the ARP program. In cases in which the internal audits had not sufficiently addressed issues or when SROs had deficiencies in their information system procedures, SEC staff have called for SROs to obtain external reviews of their systems. When combined with the reliance on internal audits, the ARP program’s voluntary nature raises concerns that SEC’s oversight efforts are not as effective as they could be. SRO cooperation in implementing significant SEC recommendations has been uneven. 
The SROs' unwillingness to make recommended improvements may have adversely affected the markets, for example, when capacity problems at one market delayed full implementation of decimal trading for all securities markets. Because some SROs have not addressed ARP staff concerns over the lack of backup trading facilities, securities trading in the United States could be severely limited if a terrorist attack or a natural disaster damaged one of these exchanges' trading floors. When SROs are not implementing significant recommendations or taking steps to remedy identified capacity and security weaknesses, SEC's Chairman and Commissioners could focus additional SRO attention on the need to take actions to improve their systems. SEC's ARP policy statements left open the possibility of having a rule-based program if compliance was not adequate. Developing formal criteria and performing an assessment of SROs' compliance with the ARP program would allow SEC to determine whether a rule-based program would be warranted. Such an assessment also could weigh the advantages and disadvantages of the current voluntary program and whether it provides SEC with sufficient authority to optimally ensure that SROs' systems are sound. Criteria and an assessment could allow SEC to determine whether failure to implement recommendations risked material disruption in the markets. Making the ARP program mandatory could give SEC the authority it needs to better assure that SROs take cost-effective steps to improve their systems and procedures and reduce the risk of systems-related problems disrupting the markets. On the other hand, if the program were to be made mandatory, SEC would need to build adequate flexibility into the governing rule to deal with technological change. 
Recommendations Because of the importance of the proper functioning of the SROs' information systems, we recommend that the Acting Chairman, SEC, take the following actions: ensure that the ARP program develops a consolidated inspection guide for the ARP staff that is updated on a periodic basis; ensure that significant ARP program recommendations and concerns that have not been addressed by the SROs are brought to the attention of the Chairman and the Commissioners; and develop formal criteria for assessing the SROs' cooperation with the ARP program and perform an assessment to determine whether the voluntary status of the ARP program is appropriate. Agency Comments We obtained comments on a draft of this report from SEC, which are presented in appendix I. In its letter, SEC commented that the draft report was based on an inaccurate view of the ARP program, and that it did not reflect the development of the program since SEC issued its two ARP policy statements in 1989 and 1991. SEC provided an extensive discussion of the ARP program's evolution over time. In response, we have made language changes where appropriate and believe that our report fairly presents the evolution of the ARP program over time. However, although the ARP program has achieved some important goals, we think that it could be more efficient and effective if our recommendations were adopted. SEC generally disagreed with our recommendations, noting that activities it already performs satisfy the intent of the recommendations. Specifically, SEC did not see a need to develop a consolidated inspection guide because it would quickly become outdated and the ARP staff's approach to developing work plans for individual inspections results in oversight that addresses key capacity and security issues. The ARP staff's approach has, to date, generally resulted in oversight that addresses key issues. 
However, given the high staff turnover and the relative inexperience of many staff, we are recommending that ARP develop a guide that will assure continued consistency. Moreover, we believe that such standard guides are a good business practice and a sound internal control. The type of guide that we recommend would also require minimal effort to update because it would largely incorporate by reference standards and criteria developed by other organizations, which would likely be updated by those organizations regularly. With respect to our recommendation that SEC develop a process to bring significant unimplemented ARP recommendations and outstanding concerns to the attention of the Chairman and the Commissioners, SEC commented it had a process that satisfied the recommendation. In its letter, SEC noted that it already reviews the status of all ARP recommendations. SEC also stated that where an SRO’s response to ARP recommendations is unsatisfactory, SEC has a procedure to bring the matter to the attention of the Division Director and, if necessary, to the Chairman and Commissioners. SEC commented that, based on discussions with us, the staff was enhancing its process for reviewing the status of ARP recommendations and updating the recommendations database. We note, however, that according to SEC officials, no unimplemented ARP recommendations or concerns have been escalated beyond the Division Director level. We believe that some significant unimplemented ARP recommendations and concerns regarding capacity and security weaknesses at the exchanges and clearing organizations warrant attention at the highest levels of the Commission. Involvement at this level would increase the likelihood that SROs would take meaningful action in response to such recommendations and concerns. Therefore, we reaffirm our recommendation. SEC also disagreed with our recommendation that it develop formal criteria for assessing SRO compliance with the ARP program. 
SEC commented that the risk assessment process the ARP program staff conducts annually for each SRO represents the staff's assessment of the SRO's compliance with the ARP policy statements and that when SROs do not implement ARP recommendations or remedy concerns, the ARP staff call for additional inspections and reviews. Although we agree that the ARP staff's efforts have resulted in some improvements in the SROs' information systems, we remain concerned that some recommendations that SROs have not fully addressed pose a greater risk of further market disruptions. Moreover, seeking to address noncompliance with the ARP program by performing additional inspections would likely result in ARP staff identifying many of the same discrepancies over time. For example, ARP staff found capacity-related problems over several years at NASD and have had long-standing concerns about contingency planning alternatives at some SROs. SEC's risk assessment process, although allowing it to adequately plan its oversight, does not constitute or supplant the type of assessment of overall program compliance that we recommend. Instead, by developing formal criteria and assessing the overall level of compliance with the ARP program, SEC would have a sound basis for evaluating the nature of the program. Even if no change in its status were made after such an assessment, periodically reapplying the criteria would allow SEC to assess the pattern of compliance by SROs over time to ensure that the program's status is not hampering the effectiveness of SEC's oversight of the SRO information systems that are critical for continued market functioning. SEC also commented that neither GAO nor SEC itself has any basis to believe that the voluntary nature of the program is problematic. However, we did identify various instances in which SROs were not addressing recommendations or taking actions in response to ARP staff concerns or were not making the reports that SEC has requested. 
Furthermore, we are not recommending that SEC make the ARP program mandatory, but instead have recommended that SEC develop formal criteria to assess whether the program is working as it is currently structured. SEC also provided technical comments that we incorporated as appropriate, including refining our presentation of the extent to which ARP program recommendations have not been implemented. In addition, we revised the language of the report and our recommendation to clarify that the SEC Chairman and Commissioners should be advised when significant recommendations to SROs are not implemented or SRO actions do not address ARP staff concerns. As agreed with you, unless you publicly release its contents earlier, we plan no further distribution of this letter until 30 days from its issuance date. At that time, we will send copies to the Chairman and Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member, House Committee on Financial Services; the Chairman, House Committee on Energy and Commerce; and the Acting Chairman, SEC. We will also make copies available to others upon request. If you have any further questions, please call me at (202) 512-8678 or Cody J. Goebel, Assistant Director, at (202) 512-7329. Appendix I: Comments From the Securities and Exchange Commission The following are GAO’s comments on the Securities and Exchange Commission’s letter dated July 18, 2001. GAO’s Comments 1. SEC’s letter states that our report overlooks the important distinction between a program that is tasked with overseeing information technology, in which no single set of standards exists, and other programs based on assessing compliance with rules that lend themselves to bright-line tests. However, we believe that our report acknowledges the evolving nature of information systems and the lack of one source for standards, but also offers suggestions to improve SEC’s oversight of this area. 
For this reason, we recommended that SEC create a consolidated guide for its staff of the most up-to-date and authoritative sources for criteria for planning their oversight activities. We also believe that rules can be drafted to allow sufficient flexibility for information technology advances. Furthermore, many examination programs assess compliance using professional judgment against criteria even when bright lines do not exist. 2. SEC's letter states that our report assumes that its 1991 checklist is the principal tool used by SEC staff to conduct inspections. However, our report describes the process SEC staff uses to plan inspections, including drawing on external criteria and using work plans and checklists from more recent inspections. We did observe instances, however, in which the staff continued to use the 1991 checklist and had to supplement it with other sources to cover the areas that it does not address. Appendix II: GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to those named above, Ronald W. Beers, Emily R. Chalmers, Heather T. Dignan, William Lew, and Jean-Paul Reveyoso made key contributions to this report. Related GAO Products Securities Pricing: Trading Volumes and NASD System Limitations Led to Decimal-Trading Delay, GAO/GGD/AIMD-00-319, Sep. 20, 2000. Securities Pricing: Progress and Challenges in Converting to Decimals, GAO/T-GGD-00-96, Mar. 1, 2000. Securities Pricing: Actions Needed for Conversion to Decimals, GAO/T-GGD-98-121, May 8, 1998. Financial Markets: Stronger System Controls and Oversight Needed to Prevent NASD Computer Outages, GAO/AIMD-95-22, Dec. 21, 1994. Financial Markets: Computer Security Controls at Five Stock Exchanges Need Strengthening, GAO/IMTEC-91-56, Aug. 28, 1991. Financial Markets: Active Oversight of Market Automation by SEC and CFTC Needed, GAO/IMTEC-91-21, May 10, 1991. 
Stock Market Automation: Exchanges Have Increased Systems’ Capacities Since the 1987 Market Crash, GAO/IMTEC-91-37, May 10, 1991.
Capacity problems and other disruptions at the securities and options exchanges have caused processing delays within the U.S. securities markets in recent years. These exchanges and clearing organizations have also been concerned about unwarranted access by hackers and other unauthorized users. To address these issues, the Securities and Exchange Commission (SEC) created its Automation Review Policy program in 1989. The program calls for the exchanges and clearing organizations that act as self-regulatory organizations to voluntarily follow SEC guidance and submit to oversight of their information systems. The program includes two key policy statements that provide voluntary guidelines to these organizations, periodic on-site inspections by SEC staff, and independent reviews of systems by internal auditors or external organizations. In addition, self-regulatory organizations are expected to provide SEC with reports of system outages and notices of system modifications. This report reviews SEC's effectiveness in its oversight roles. GAO found that the program reasonably ensures that self-regulatory organizations address capacity, security, and other information systems issues. However, SEC could improve its program oversight by consolidating criteria used by program staff into a comprehensive guide. Overall, SEC's inspections addressed the key areas of program guidance and often contained substantive recommendations designed to improve the organizations' procedures.
Background In 1962, the U.S. government enacted conflict of interest laws that were designed to protect against the improper use of influence and government information by former employees, as well as to limit the potential influence that a prospective employment arrangement may have on current federal officials when dealing with prospective private clients or future employers while still in government service. Congress broadened post-employment restrictions as part of the Ethics Reform Act of 1989, including, for example, a restriction against certain former government officials representing, aiding, or advising on foreign entities. The executive branch promotes compliance with post-employment restrictions through agency ethics-in-government programs, which are guided by OGE, an executive branch agency. OGE is responsible for providing overall direction to executive branch policies related to preventing conflicts of interests on the part of officers and employees of any executive agency. Individual agencies are responsible for the day-to-day administration of their own ethics programs. In contrast to post-employment restrictions specific to former government personnel, there are federal disclosure statutes concerning foreign representation and lobbying activities that affect all individuals. Enacted in 1938, FARA is a disclosure law that requires all individuals in the United States working as agents of a foreign principal to publicly disclose these connections. LDA is a disclosure law that requires all individuals working a certain percentage of the time as lobbyists to publicly disclose these activities. Lobbying regulations began with the Federal Regulation of Lobbying Act of 1946, which required lobbyists to disclose the identities of their clients, report the receipts and expenses involved, and describe the nature of the legislative objectives that were pursued for each client. 
Lobbying was interpreted under the 1946 act as direct communication with a member of Congress in an attempt to influence the passage or defeat of any proposed or pending legislation. Congress replaced this law with the Lobbying Disclosure Act of 1995, which expanded the definition of lobbying to include communications with “covered” employees in both the legislative and executive branch regarding legislation, regulations, policies, or the nomination or confirmation of a person for a position subject to confirmation by the Senate. Post-Employment Restrictions for Former Federal Officials and Disclosure Laws Related to Foreign Representation and Lobbying Post-employment restrictions in the Revolving Door law prohibit federal employees from engaging in certain conduct with the intent to influence government officials for a specified period of time after leaving federal employment. In contrast to post-employment restrictions for former government officials, the disclosure laws in FARA and LDA are not specific to former government employees. FARA and LDA do not prohibit any activities; rather, they require individuals engaging in certain foreign representation and lobbying activities to make these activities public. Table 1 describes the key attributes of the three laws. Revolving Door Law Revolving Door Restrictions More Stringent for Former Senior and Very Senior Officials The post-employment restrictions contained in the Revolving Door law prohibit categories of former federal employees from conducting certain activities, with the intent to influence government officials, for various periods of time once they have left federal government employment. 
Most executive branch employees are affected by only one restriction: a lifetime ban on "switching sides," that is, representing any person with the intent to influence, in a communication to or appearance before a government official, in connection with a matter (1) in which the United States is a party or has a direct and substantial interest, (2) in which the former executive branch employee had worked personally and substantially for the government, and (3) that involved specific parties at the time of the former employee's participation. Additional restrictions, however, apply to senior or very senior employees: a 1-year "cooling off" period bars certain former senior employees from representing anyone with the intent to influence individuals at their former agency and a 2-year "cooling off" period bars former very senior employees' representation and attempted influence concerning any matter. These former senior and very senior employees are also banned for 1 year from representing, aiding, or advising foreign entities with the intent to influence a decision of a government official, and former U.S. Trade Representatives and Deputy Trade Representatives are banned for life from such activity. Level of pay and certain designated positions are used to categorize employees as "senior" or "very senior." Senior employees include employees whose rate of pay is specified in or fixed according to the Executive Schedule, as well as certain other employees who hold specific appointed positions or who meet a specific financial threshold—86.5 percent of Executive Schedule Level II. Most employees in the Senior Executive Service are considered senior employees under the Revolving Door law because their pay exceeds this financial threshold. "Very senior employees" include employees whose rate of pay is equal to the rate of pay for Level I of the Executive Schedule, and employees in certain other named and appointed positions. 
The number of senior and very senior officials separating from USTR, ITA, and USITC, and to whom certain Revolving Door restrictions apply, varies from year to year. From 2004 through 2009, a total of 19 senior or very senior officials separated from USTR, 47 separated from ITA, and 5 separated from USITC (see fig. 1). One section of the Revolving Door law is of specific relevance to former officials who participated in treaty negotiations. This section prohibits all former employees (regardless of level) who participated personally and substantially in ongoing treaty negotiations for 1 year from aiding or advising any other person in that treaty negotiation, on the basis of certain "covered" information to which the employee had access. This section of the law also used to apply to employees who negotiated certain trade agreements; however, the specific definition of "trade agreements" as used in the section of the law refers only to the "fast track" trade agreement authority that expired in 1993. According to OGE, when Congress restored similar fast track authority in 2002, it did so by creating new authority rather than by amending the prior fast track provisions that are referenced in the section of the Revolving Door law, and made no conforming changes to reference the new fast track provisions. Consequently, the prohibition no longer applies to former government employees who negotiated trade agreements. As a result, former employees may advise another party on "covered" information related to trade negotiations as long as doing so would not violate other provisions of the Revolving Door law. OGE did not take a position on whether this section of the law should be amended to again cover fast track trade agreement authority. 
However, an OGE official told us that the Revolving Door prohibition relating to trade negotiations had applied only in relatively narrow circumstances, for example when the employee used information that he or she knew was designated as exempt from disclosure under the Freedom of Information Act. See appendix II for a more detailed discussion of all post-employment restrictions in the Revolving Door law. USTR, ITA, and USITC Ethics Officials Train Current Employees and Advise Former Employees on Revolving Door Restrictions Ethics officials at USTR, ITA, and USITC described a variety of activities they use to inform senior employees of the post-employment restrictions, such as conducting training programs and providing counseling to former employees. Ethics officials at all three agencies told us that they provide special, one-on-one counseling to senior officials separating from the government regarding what post-employment activities are permitted. They said that they also advise former employees who contact them with questions on post-employment restrictions. The ethics officials at the three agencies described the ethics training they provide that is specific to post- employment restrictions: USTR. According to USTR’s designated ethics official, all new employees receive training from the Executive Office of the President’s Office of Administration as well as an additional briefing from a USTR ethics official on issues of particular concern to USTR. New senior employees receive one-on-one training. Current employees receive annual ethics training during which employees are encouraged to contact the ethics official with any questions or concerns regarding contact they receive from former employees. All separating employees receive one-on-one training that addresses the post-employment provisions applicable to them; they also receive an outline of the post-employment restrictions. 
This counseling is documented on the employee's sign-out form, which is retained by the agency. Senior employees complete financial disclosure reports that identify any agreements they have for future employment. Separating employees are informed that they may contact the USTR ethics official after leaving the agency to ask questions on post-employment restrictions. The USTR ethics official said that many former employees do contact the ethics office; specific advice provided is documented either in an e-mail or in notes of the conversation. ITA. According to a Commerce ethics official, all new ITA employees located in the Washington, D.C., area receive an in-person briefing at the time of appointment; employees located outside of Washington, D.C., receive a written copy of the summary of ethics rules for new employees. All ITA officials appointed by the President receive individual ethics briefings from the Assistant General Counsel for Administration, upon appointment and each year thereafter, including a post-employment briefing. Current ITA employees who are required to file a private or public financial disclosure report (which includes all senior officials) receive a written copy of the summary of ethics rules and those in the Washington, D.C., area attend an in-person ethics briefing every year. The ethics office also routinely provides in-person briefings at regional conferences to ITA employees stationed at foreign posts. The office also provides individual briefings to separating ITA officials upon request. The office typically provides a 1-page summary of the post-employment restrictions and/or a 17-page detailed summary of the post-employment restrictions for employees requesting post-employment guidance. A Commerce ethics official reported that former ITA officials have contacted the ethics office for post-employment guidance on numerous occasions. USITC. 
According to a USITC ethics official, all employees separating from the USITC have one-on-one meetings with an ethics official to receive counseling and documentation on post-employment restrictions. Senior officials receive specific information on the parts of the Revolving Door restrictions that apply to them. All separating employees must sign a form to acknowledge receipt of a memorandum describing the post-employment restrictions. Attached to the memorandum is OGE guidance on post-employment restrictions, a copy of the Revolving Door law, and various other information regarding how ethics rules apply to former officials' post-employment activities. The packet also contains information on a rule specific to the USITC: no former officer or employee of the USITC who personally and substantially participated in a matter that was pending in any manner or form before the USITC during his or her employment shall be eligible to appear before the USITC as attorney or agent in connection with such matter. No former officer or employee of the USITC shall be eligible to appear as attorney or agent before the USITC in connection with any matter that was pending in any manner or form before the USITC during his or her employment, unless he or she first obtains written consent from the USITC. The memorandum also explains that the USITC's ethics counseling service is available to employees with any questions concerning post-employment activities. Revolving Door Enforcement Former government officials who violate the Revolving Door law may be subject to criminal and civil penalties. A person who engages in prohibited activity can be imprisoned for up to 1 year, or fined for each violation, or both. Any person who willfully engages in conduct violating the provisions of the law may be imprisoned for up to 5 years, or fined for each violation, or both. In addition to criminal punishment, the Attorney General is authorized to bring civil suits against anyone who violates the law. 
If found to have engaged in misconduct, the defendant can be subject to a civil penalty of up to $50,000 for each violation, or the amount of compensation that he or she received or was offered for the prohibited conduct, whichever is greater. Finally, the Attorney General may also petition for injunctive relief in federal court to prevent the defendant from engaging in conduct that violates the law. (Our discussion on enforcement of the Revolving Door law refers only to enforcement of 18 U.S.C. § 207, the sections of the law related to post-employment activities of former federal employees.) Justice pursues Revolving Door violations in conjunction with Inspectors General. Justice officials reported that the record of Revolving Door prosecutions is limited and that there are no prosecutions on record for violations by former USTR, ITA, and USITC officials. Through its annual Conflict of Interest Prosecution Survey, OGE collects information from Justice on all indictments, pleas, convictions, etc., that deal with the conflict of interest laws. According to OGE's prosecution surveys, there have been 26 reported cases of Revolving Door prosecutions from 1990 through 2008. These cases included, for example, prosecutions of former officials who had violated their cooling off periods or lifetime representation ban. None of these cases involved prosecutions under the section of the law that prohibits former senior officials from representing, aiding, or advising a foreign interest. Justice officials cited several reasons for the limited number of prosecutions. First, they said that they receive a limited number of referrals from the investigative agencies. In particular, the Civil Division reported that the division had received no referrals of Revolving Door violations by USTR, ITA, and USITC officials in at least 15 years. Second, it is difficult to bring cases to a criminal level because Justice must show that the former employee knowingly broke the law. 
One Justice official noted that it is difficult to prove that the former employee knew he or she was violating the law and that it is often hard to prove that the employee’s actions resulted in real harm. Moreover, it is possible that a former official misunderstood guidance received from his or her ethics official regarding post-employment restrictions, or that the former official did not receive accurate guidance. Justice officials said they viewed the Revolving Door law as being more useful as a preventative measure rather than a tool for prosecution; they believed that guidance from agency ethics officials deterred most violations. In many cases, according to a Justice official, the former employee can be counseled to stop doing what he or she is doing and that is all that needs to be done. Foreign Agents Registration Act (FARA) FARA is a disclosure law that requires all individuals acting as agents of foreign principals to register their activities with Justice, unless exempt by law. FARA registration requirements are not specific to former federal employees, but rather apply to all individuals and organizations performing certain activities on behalf of a foreign principal, unless specifically exempt. The purpose of the act is to ensure that the U.S. government and the American people are informed of the source of representational activity in the United States and the identity of persons attempting to influence U.S. public opinion, policy, and laws. 
Under FARA, a person is considered an agent of a foreign principal when the person acts in any capacity at the order or request of the foreign principal, or under its control, supervision, or financing, and engages in the following within the United States: political activities for or in the interest of the foreign principal; public relations, information-service employment, or political consulting for or in the interest of the foreign principal; fundraising, collecting, or disbursing of money or things of value for or in the interest of the foreign principal; or representing the interests of a foreign principal before any agency or official of the U.S. government. Justice's Registration Unit, in the National Security Division, is responsible for the administration of the law. FARA requires individuals engaged in the activities listed above to file a registration statement, which collects detailed information on the registrant and the activities he or she will perform on behalf of the foreign principal listed. Additionally, foreign agents are required to file a supplemental statement every 6 months for the duration of the foreign principal-agent relationship, providing updated information on the agent's activities. According to the Registration Unit, 5 of the 71 former senior officials who separated from USTR, ITA, and USITC from 2004 through 2009 registered as foreign agents at one point, but only one is currently registered. The other four individuals were FARA registrants for periods of time while working for private law firms either before their government service or between two periods of government service. They are no longer registered. The individual who is currently registered was a former senior official at USTR and separated from the federal government in 2006. This individual first registered under FARA in 2008, more than 2 years after separating from the federal government, and, as of April 2010, remains actively registered. 
According to the individual’s initial FARA registration in 2008, the individual is employed by a law firm and facilitates interaction between the U.S. government and the government of Mexico’s agriculture department on meat inspection issues and Mexican meat imports. Numerous FARA Exemptions Individuals and organizations engaging in certain diplomatic, humanitarian, commercial, and legal activities on behalf of a foreign principal are exempt from registering with Justice. FARA regulations state that the burden of establishing the availability of the exemption is on the person for whose benefit the exemption is claimed; however, there is no requirement for such persons to provide any notification about their exempted activities and thus they are not formally tracked. Diplomatic and consular officers of foreign governments, officials of foreign governments, and staff members of diplomatic and consular officers of foreign governments are exempt from registering under FARA. Diplomatic and consular officers must be accredited by the Department of State, and foreign officials and diplomatic and consular staff must file with the Department of State notifications of status with a foreign government. Other exempted categories of agents of foreign principals include individuals who (1) engaged only in private and nonpolitical activities in furtherance of trade or commerce for the foreign principal; (2) engaged in collecting funds or contributions within the United States for humanitarian purposes such as medical aid, food, and clothing; and (3) engaged only in activities in furtherance of bona fide religious, scholastic, academic, scientific, or artistic pursuits. Activities are considered “private” for the commercial trade exemption, so long as they do not directly promote the public or political interests of the foreign government. This applies even if the foreign principal is a corporation that is owned by the foreign government. 
The religious and scholastic pursuit exemption does not apply to any agent of a foreign principal who is engaged in political activity for the foreign principal. Lawyers engaging in legal representation of foreign principals before an adjudicatory body in the United States are exempt from registering under FARA. The legal exemption does not include attempts to influence officials other than in the course of judicial or law enforcement investigations or proceedings. Examples of activities for which lawyers still must register include attempts to influence the formulation, adoption, or change of domestic or foreign U.S. policy, or to persuade agency personnel or officials with reference to the political or public interests of a foreign country or foreign political party. Agents of foreign principals who have registered as lobbyists under LDA are exempt from registering under FARA, if the LDA registration is connected with the agent’s representation of the foreign principal. However, this exemption does not apply if the foreign principal is a foreign government or foreign political party. Another exemption exists for agents whose foreign principal is a government of a foreign country, the defense of which the President deems vital to the defense of the United States. This exemption is only available for people who are conducting activities that do not conflict with any domestic or foreign policies of the United States, who only disseminate accurate information within the United States and disclose their true identity in the disseminated information, and whose government has furnished information to the United States about the identity and activities of the agent of the foreign principal. This exemption does not become available until the President has published in the Federal Register the country whose defense is deemed vital to the defense of the United States. 
FARA Compliance and Enforcement
According to Justice officials, the cornerstone of the Registration Unit’s enforcement efforts is encouraging voluntary compliance. This includes providing registration forms, copies of the FARA law, and other information to registrants, as well as to members of the public and press. The Registration Unit conducts proactive outreach to the professional communities (e.g., law, advertising, political, and public relations firms) from which the majority of foreign agents are drawn, and educates prosecutors and other federal agencies about FARA. The Registration Unit meets with potential registrants to discuss their possible obligation to register, and with current registrants regarding whether they should continue to register. Justice maintains a public Web site that provides an overview of FARA and key information. In addition, Justice officials said they answer inquiries from agency ethics officers on FARA requirements and provide ethics officers with information on FARA registration and reporting requirements for federal employees. Justice officials said that the Registration Unit is proactive in identifying potential registrants: it reviews publications such as Congressional Quarterly, monitors the Lobbying Disclosure Web site, and acts on tips provided from various sources. These referrals may come from sources such as the Department of State and the Federal Bureau of Investigation, or from competing legal firms or members of the public. Registration Unit officials send letters of inquiry to individuals the unit has reason to believe may be acting as foreign agents. The letters of inquiry start a process in which the Registration Unit requests more information on these individuals’ activities to determine whether they need to register. 
In 2008, we reported that from January 2004 through May 2008, the Registration Unit had sent letters to approximately 130 individuals or firms it believed may have had an obligation to register as foreign agents under FARA, and received approximately 25 registrations as a result of these letters; the remaining entities either were determined not to have an obligation to register or were still being reviewed at the time of our 2008 report. We requested updated information from Justice regarding inquiries sent from June 2008 through March 2010; Registration Unit officials reported that the unit had sent 18 letters of inquiry and that two individuals were found to have obligations to register and have since registered. The remaining 16 either were found to have no obligation to register or are still being evaluated by the Registration Unit to determine whether a registration obligation exists. Civil and criminal penalties exist for willful violations of FARA requirements and for willful false statements or omissions on FARA registration statements and supplements. Individuals who willfully violate FARA, or willfully make a false statement or omission on their registration form, can be imprisoned for up to 5 years, fined up to $10,000, or both. For certain violations, including the failure to properly label propaganda that is disseminated in the United States, the punishment is imprisonment for up to 6 months, a fine of up to $5,000, or both. The Registration Unit handles enforcement of FARA violations; according to the unit, it has prosecuted one violation of FARA since 1990.

Lobbying Disclosure Act
The Lobbying Disclosure Act of 1995, as amended, requires registrants to publicly disclose certain lobbying activities. The LDA was enacted to enhance public awareness of paid lobbyists’ efforts to influence the public decision-making process in the legislative and executive branches, and to increase public confidence in the integrity of government. 
Under LDA, a registrant can be an individual, a lobbying firm, or an organization that has employees lobbying on its own behalf, depending on the circumstances. Registrants are required to file a registration with the Secretary of the Senate and the Clerk of the House of Representatives for each client on whose behalf a lobbying contact is made, if a minimum dollar threshold is exceeded. For reporting purposes, a lobbyist is defined as a person who has made two or more lobbying contacts and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during any quarter. Registrations and reports must also identify any covered official positions a lobbyist held in the previous 20 years. Lobbyist registration requirements apply to all individuals conducting lobbying activities, not only to former federal officials. Within 45 days of first making a lobbying contact or being employed to make a lobbying contact with a covered official, whichever is earlier, the lobbyist or the organization employing the lobbyist must register with the Secretary of the Senate and the Clerk of the House of Representatives. The Secretary and the Clerk are required by law to provide guidance and develop common standards and procedures for compliance with the LDA. They must also review, verify, and make inquiries to ensure the timeliness and accuracy of the reports, as well as develop a publicly available list of all registered lobbyists and their clients. The Secretary and the Clerk must retain registrations and reports for 6 years after they are filed and make all filed documents searchable on the Internet for free. If lobbyists are not in compliance with the LDA, the Secretary and the Clerk must notify them in writing, after which the registrant has 60 days to provide an appropriate response. If the registrant does not reply, the Secretary and the Clerk must refer the noncompliance to the U.S. Attorney for the District of Columbia. 
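The two-pronged lobbyist definition above amounts to a simple conjunction of conditions. As a rough illustration only (the statutory dollar thresholds and many other nuances are omitted, and the function name and hour-based inputs are our own), the test for a given quarter could be sketched as:

```python
def meets_lobbyist_definition(lobbying_contacts: int,
                              lobbying_hours: float,
                              total_hours_for_client: float) -> bool:
    """Illustrative sketch of the LDA lobbyist test for one quarter.

    A person is a 'lobbyist' for a client if he or she has made two or
    more lobbying contacts AND lobbying activities are at least 20
    percent of the time spent for that client during the quarter.
    (Dollar thresholds and other statutory conditions are omitted.)
    """
    if total_hours_for_client <= 0:
        return False
    time_share = lobbying_hours / total_hours_for_client
    return lobbying_contacts >= 2 and time_share >= 0.20

# One contact, or under 20 percent of time, does not meet the definition.
print(meets_lobbyist_definition(1, 50, 100))   # False: only one contact
print(meets_lobbyist_definition(3, 30, 100))   # True: two prongs satisfied
```

Note that both prongs must hold at once, which is why an individual with many contacts but little time devoted to lobbying, or vice versa, is not required to register.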
According to the Clerk of the House of Representatives, 15 of the 71 former senior or very senior officials who separated from USTR, ITA, and USITC registered as lobbyists: nine were former USTR officials, four were former ITA officials, and two were former USITC officials. During a calendar quarter, lobbyists who do not spend at least 20 percent of their time conducting lobbying activities, or whose lobbying activities do not meet the applicable financial thresholds for either a particular client or for total expenses, are not required to register. In addition, communications made on behalf of a government of a foreign country or a foreign political party and disclosed under FARA are excluded from the definition of a “lobbying contact.” Agents of foreign principals registered under FARA whose only contacts are on behalf of foreign governments or political parties therefore do not meet the definition of a lobbyist, because they would not have conducted any activities that meet the definition of “lobbying contact.” These individuals are therefore exempt from registering as lobbyists under the LDA for these activities.

LDA Compliance and Enforcement
The U.S. Attorney’s Office for the District of Columbia is responsible for enforcement of the LDA. It fulfills its responsibilities administratively by researching and responding to referrals of noncomplying lobbyists made by the Secretary of the Senate and the Clerk of the House of Representatives, sending additional noncompliance notices to the lobbyists and requesting that they file reports or correct reported information. The U.S. Attorney’s Office also has the authority to pursue a civil or criminal case for noncompliance. Civil penalties exist for instances in which lobbyists knowingly fail to remedy a defective filing within 60 days after notification from the Secretary or the Clerk, and for any other knowing failure to comply with provisions of the LDA. 
If these violations occur, lobbyists can be subject to fines of up to $200,000, depending on the gravity of the violation. In addition, anyone who knowingly and corruptly fails to comply with the LDA requirements can be imprisoned for up to 5 years, fined, or both. In past work, we have reported in detail on the U.S. Attorney’s Office’s enforcement efforts. To enforce LDA compliance, the office has primarily focused on sending letters to lobbyists who have potentially violated the LDA by not filing disclosure reports as required. The letters request that the lobbyists comply with the law and promptly file the appropriate disclosure documents. In our 2008 lobbying disclosure report, we noted that the U.S. Attorney’s Office had settled with three lobbyists and collected civil penalties totaling about $47,000 in 2005; all of the settled cases involved a failure to file. Since then, no additional settlements or civil actions have been pursued, although the U.S. Attorney’s Office follows up on hundreds of referrals each year. In response to a GAO recommendation, the U.S. Attorney’s Office developed a system to help monitor and track its enforcement efforts.

Agency Comments and Our Evaluation
We provided a draft of this report to USTR, ITA, USITC, Justice, and OGE and requested that they provide comments. We also provided a draft to staff at the Clerk of the House of Representatives and the Secretary of the Senate. We received comments from officials at all of these agencies, except ITA, clarifying our descriptions of the various laws, regulations, and agency practices. We considered their suggestions and made changes throughout the report in response, as appropriate. We are sending copies of this report to the United States Trade Representative, the Secretary of Commerce, the Chairman of the U.S. International Trade Commission, the Director of the Office of Government Ethics, and the Attorney General, as well as to appropriate congressional committees. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4347 or yagerl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology
This report summarizes and compares the relevant federal statutes governing post-employment, foreign representation, and lobbying activities. It also includes data on the number of senior officials who separated from the Office of the United States Trade Representative (USTR), the Department of Commerce’s International Trade Administration (ITA), and the United States International Trade Commission (USITC) from 2004 through 2009, as well as information on the number of these officials who registered under FARA or LDA. To address this objective, we reviewed the post-employment restrictions of 18 U.S.C. § 207, referred to as the “Revolving Door” law in this report, the Foreign Agents Registration Act (FARA), and the Lobbying Disclosure Act (LDA), as amended. We interviewed officials from the Office of Government Ethics (OGE) regarding the interpretation and implementation of the Revolving Door law. We interviewed ethics officials from USTR, ITA, and USITC regarding these laws and the guidance these officials provide to current, separating, and former employees of their respective agencies. We included USTR and ITA in our scope because their respective missions concern trade policy formulation and trade promotion. We included USITC because of its role in administering U.S. trade remedy laws and providing independent analysis on trade matters. 
We interviewed Department of Justice (Justice) officials regarding administration of FARA and enforcement actions for post-employment restrictions and FARA. We focused our work specifically on the number of former senior and very senior officials who separated from these agencies because post-employment restrictions are more stringent for former senior and very senior officials. We obtained data on the number of former senior officials who separated from USTR, ITA, and USITC from 2004 through 2009 from each of the respective agencies and cross-referenced these data with data we extracted from the Office of Personnel Management’s Central Personnel Data File. We defined “senior” as any official who met the definition of “senior employee” in the post-employment restrictions on former federal employees: (1) employees whose rate of pay was specified in or fixed according to the Executive Schedule; (2) for employees whose rate of pay was not tied to the Executive Schedule, any employee whose rate of basic pay was 86.5 percent or more of the rate of basic pay for Level II of the Executive Schedule; and (3) for the period from November 24, 2003, to November 24, 2005, employees who, as of November 23, 2003, were in a position for which the rate of basic pay was equal to or greater than the rate of basic pay payable for Level 5 of the Senior Executive Service in 2003. We collected data on all senior officials who separated from these three agencies, regardless of job title or description. We did not include officials who had separated from one of the agencies but who had continued working for the federal government at another agency. For contextual purposes, we queried the Central Personnel Data File to ascertain the number of all staff who separated from USTR, ITA, and USITC from 2004 through 2009 and left government service. 
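The pay-based part of the “senior employee” definition above is a straightforward threshold comparison. A minimal sketch of that single criterion, using a hypothetical Level II salary figure purely for illustration (the function name and dollar amounts are our own, and the Executive Schedule and grandfathering criteria are not modeled):

```python
SENIOR_PAY_FRACTION = 0.865  # 86.5 percent of Executive Schedule Level II

def is_senior_by_pay(basic_pay: float, level_ii_pay: float) -> bool:
    """Return True if basic pay is at or above 86.5 percent of the
    rate of basic pay for Level II of the Executive Schedule.
    (Employees paid directly on the Executive Schedule are senior
    regardless of this comparison; that case is not modeled here.)"""
    return basic_pay >= SENIOR_PAY_FRACTION * level_ii_pay

# Hypothetical Level II rate of $180,000, giving a threshold of $155,700:
print(is_senior_by_pay(160_000, 180_000))  # True
print(is_senior_by_pay(150_000, 180_000))  # False
```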
To assess the reliability of data on senior officials who separated from USTR, ITA, and USITC, we used data from the Central Personnel Data File to evaluate the agency-supplied data. After reconciling discrepancies with the agencies and receiving revised data from them, we determined that the data provided to us by the agencies were sufficiently reliable. Using the names of the 71 former senior officials we had identified as having separated from USTR, ITA, and USITC between 2004 and 2009, we asked Justice’s Registration Unit to determine which of these individuals had registered under FARA. Because our work focused only on senior-level employees, we did not ask the Registration Unit to search for FARA registrations for all former employees of these three agencies. We did not attempt to determine whether any of these 71 former senior officials who did not register should have registered under FARA. Using this same list of 71 former senior officials, we asked the Clerk of the House of Representatives to conduct a search of the publicly available LDA database maintained by the Clerk to ascertain the number of these individuals who had registered as lobbyists. Because our work focused only on senior employees, we did not search the LDA database for all former employees of these three agencies. We did not attempt to determine whether any of these 71 former senior officials who did not register should have registered under LDA. To assess the reliability of the FARA and LDA registration data, we reviewed documentation related to the data sources and interviewed knowledgeable agency officials about the data. Although both the FARA and LDA databases are publicly available, we requested that officials at Justice’s Registration Unit search the FARA database and that officials at the Clerk of the House of Representatives search the LDA database, as those officials are knowledgeable about search terms for their databases. 
These officials described to us the structure of their databases and the methods used for searching. We determined that the data were sufficiently reliable for the purpose of our report. We conducted our work from January 2010 to June 2010 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objective. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for the findings in this product.

Appendix II: Overview of Revolving Door Law and Regulations
OGE has promulgated regulations that clarify many of the terms in the Revolving Door law and that provide examples of what constitutes prohibited behavior by a former federal employee, as discussed below:

18 U.S.C. § 207(a)(1) Permanent restrictions on representations in particular matters for all officers and employees of the executive branch. This section of the law prohibits any former employee of the executive branch from knowingly, with the intent to influence, making a communication to or appearance before a federal employee on behalf of another person in connection with a particular matter involving a specific party, in which he or she participated personally and substantially as an employee, and in which the United States is a party or has a direct and substantial interest. For the life-time ban under this section of the law, federal regulations state that “communications” occur when information of any kind is transmitted by any means, as long as the employee intends the information to be attributed to himself or herself. “Appearances” occur when the former employee is physically present before an employee of the federal government in either a formal or informal setting. 
This section does not prohibit behind-the-scenes assistance from former federal employees, so long as no communications or appearances occur. However, if a former employee of an agency accompanies representatives of a grantee of that agency to an agency meeting, the former employee is considered to be making an appearance, even if he or she never speaks during the meeting. Communications and appearances are prohibited only if they are made knowingly and with the intent to influence the United States government. Federal regulations clarify that this occurs when the former employee’s purpose is to (1) seek a government ruling, benefit, approval, or other discretionary government action, or (2) affect government action in connection with an issue or aspect of a matter that involves actual or potential controversy. For example, a former employee who calls an agency official to complain about how that agency is auditing the employee’s current employer has made a communication with the intent to influence government action. Certain communications and appearances are not considered to be made with the intent to influence, including routine requests not involving controversy, factual statements or inquiries that are not in dispute or do not seek discretionary government action, and purely social contacts. For example, a former employee who calls his or her prior agency to ask for the date of a scheduled hearing for his or her current client is not intending to influence the government. However, if he or she calls the former agency to request that the hearing date be moved, that may be considered a communication made with the intent to influence. The prohibition in this section of the law applies only to communications or appearances made in connection with particular matters involving specific parties. 
According to federal regulations, “particular matters involving specific parties” include those that involve specific proceedings affecting the legal rights of parties, or an isolatable transaction between identified parties, such as specific contracts, grants, licenses, product approval applications, enforcement actions, administrative adjudications, or court cases. “Particular matters involving specific parties” do not include matters of general applicability, such as rulemaking or the formulation of general policies. The regulations state that international agreements may sometimes be considered particular matters involving specific parties, depending in part on whether the agreement focuses on specific property or claims or instead includes a large number of diverse issues. For example, the regulations state that a former employee of the Department of State who participated in a treaty negotiation concerning transfer of ownership of a piece of land may not later represent the foreign government in the final stages of that negotiation without violating this provision. The prohibition in this section of the law applies only to employees who participated “personally and substantially” in the matter. Federal regulations state that this means the employee participated directly or through direct and active supervision, and that the employee’s involvement was of significance to the matter. Substantial participation requires more than official responsibility or involvement on an administrative or peripheral issue.

207(a)(2) Two-year restrictions on all former executive branch employees for particular matters under official responsibility. A 2-year prohibition, similar to the life-time prohibition that exists under section 207(a)(1) for all former federal employees, exists under section 207(a)(2). 
However, whereas section 207(a)(1) requires personal and substantial participation in a matter on the part of the former employee, section 207(a)(2) merely requires that the employee had “official responsibility” for the particular matter. According to OGE regulations, “official responsibility” means direct administrative or operating authority to approve, disapprove, or otherwise direct government action.

207(b)(1) One-year restrictions on aiding or advising concerning treaty negotiations. All former federal employees who participated personally and substantially in an ongoing treaty negotiation are prohibited for 1 year from aiding or advising any other person in that treaty negotiation, if the employee had access to certain covered information. According to OGE regulations, “covered information” means agency records that the employee had access to and that were designated exempt from disclosure under the Freedom of Information Act. The same prohibition used to exist for employees who negotiated certain trade agreements; however, the specific definition of “trade agreement,” as used in section 207(b)(2)(A), refers only to the fast track trade agreement authority, which expired in 1993. According to OGE, when Congress restored similar fast track authority in 2002, it did so by creating new provisions rather than by amending the prior fast track law that is referenced in section 207(b), and made no conforming changes to section 207(b) to reference the new fast track provisions. Consequently, section 207(b) no longer covers any existing trade agreement authorities.

207(c) One-year restrictions on former senior officials concerning any matter. 
For 1 year following termination of service in a senior position, former senior employees may not knowingly, with the intent to influence, make communications to or appearances before their former agency, if made on behalf of another person in connection with a matter on which the former employee seeks official action by the agency. “Senior employees” include employees whose rate of pay is specified in or fixed according to Level II of the Executive Schedule, as well as certain other employees who meet a specific financial threshold or hold specific appointed positions. Federal regulations state that a senior employee seeks official action when he or she attempts to induce a current employee to make a decision by his or her communication or appearance. Additionally, “matter” is not limited to “particular matters” for this section, but also includes the consideration of broad policy options, new matters that were not previously pending at the employee’s former agency, and matters pending before any other agency or the legislative or judicial branches of government.

207(d) Two-year restriction on former very senior employees’ representations concerning any matter. A prohibition similar to the one that applies to former senior federal employees also applies to former very senior federal employees, except that the prohibition lasts for 2 years and also applies to representational contacts with Executive Schedule officials in the federal government and the President and Vice President. “Very senior employees” include employees whose rate of pay is equal to the rate of pay for Level I of the Executive Schedule, and employees in certain other named and appointed positions.

207(e) Two-year restriction on former Senators and 1-year restriction on former members of the House of Representatives and congressional staff. 
For 2 years for Senators, and for 1 year for members of the House of Representatives and congressional staff, former members of Congress and former congressional employees are prohibited from contacting current members of Congress and congressional staff with the intent to influence any matter on which the former member or staffer seeks action.

207(f) One-year restriction on former senior and very senior employees’ representations on behalf of, or aid or advice to, a foreign entity. For 1 year after leaving a senior or very senior position, employees cannot knowingly represent a foreign government or foreign political party before the United States government, or aid or advise the foreign entity with the intent to influence decisions of U.S. government officials. A life-time ban on representing, aiding, or advising foreign entities in this capacity applies to the United States Trade Representative and the Deputy United States Trade Representatives. Section 207(f) states that “foreign entity” means both “foreign government” and “foreign political party” as defined in FARA. Under FARA, foreign governments include any person or group of persons exercising actual or legal political jurisdiction over any foreign country or portion thereof, including factions and insurgents that may exercise governmental authority but have not been recognized by the United States. Foreign political parties include organizations outside the United States that are engaged in activities devoted to establishing, controlling, or acquiring control of foreign governments, or that are furthering or influencing the political or public interests, policies, or relations of a foreign government. However, it is sometimes difficult to discern whether certain foreign organizations meet the definition of “foreign entity” under 207(f). 
Justice’s Office of Legal Counsel issued a legal opinion in 2008 stating that a foreign corporation can be considered a foreign entity for purposes of 207(f) if it exercises sovereign authority in fact or by formal delegation. In this opinion, Justice clarified that ownership of a foreign corporation by a foreign government does not itself make the corporation a foreign entity, but that if the corporation “exercises political jurisdiction over part of a foreign country,” then it would be considered a foreign entity under 207(f) and the prohibition would apply.

Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Loren Yager, (202) 512-4347 or yagerl@gao.gov.
Staff Acknowledgments
In addition to the individual named above, Adam Cowles (Assistant Director), Kate Brentzel, Ashley Alley, Greg Wilmoth, and Karen Deans made key contributions to this report.
Congress has enacted laws to prevent former federal employees, including former trade officials, from using their access to influence government officials. These former officials' post-employment activities are restricted by a federal conflict of interest law, known as the "Revolving Door" law. Two other laws--the Foreign Agents Registration Act (FARA) and the Lobbying Disclosure Act (LDA)--are disclosure statutes that do not prohibit any activities per se, but require individuals conducting certain representation activities to publicly disclose them. FARA and LDA are not specific to former federal officials; they apply to all individuals. GAO was asked to provide a summary of the Revolving Door law, FARA, and LDA. GAO reviewed these laws, as well as guidance from the Office of Government Ethics (OGE). GAO interviewed ethics officials at three agencies whose missions focus on trade--the Office of the United States Trade Representative (USTR), the International Trade Administration (ITA), and the United States International Trade Commission (USITC)--and collected data on the number of senior officials who separated from these agencies from 2004 through 2009. In addition, GAO interviewed Department of Justice (Justice) officials concerning enforcement of these laws. GAO makes no recommendations in this report. Post-employment restrictions in the Revolving Door law, codified at 18 U.S.C. § 207, prohibit some federal employees from engaging in certain activities, such as communicating with their former agency with the intent to influence government action, for a specified period of time after leaving federal service. The restrictions include a 1-year ban prohibiting all former senior and very senior employees of federal agencies from representing, aiding, or advising a foreign government or political party with the intent to influence a government official, including the President, Vice President, and members of Congress. 
Level of pay and certain designated positions are used to categorize employees as "senior" or "very senior." A life-time ban on representing or advising foreign entities in this capacity applies to former U.S. Trade Representatives and Deputy U.S. Trade Representatives. In addition, all former federal employees who participated personally and substantially in an ongoing treaty negotiation are prohibited for 1 year from aiding any other person in that negotiation, if the employee had access to certain nonpublic information. Ethics officials at USTR, ITA, and USITC reported that they counsel current, as well as former, employees on post-employment restrictions. Justice officials said they viewed the Revolving Door law as more useful as a preventive measure than as a tool for prosecution; they believed that guidance from agency ethics officials deterred most violations. In contrast to post-employment restrictions specific to former government officials, FARA and LDA are disclosure laws that require all individuals, unless exempt, to publicly disclose certain foreign representation or lobbying activity. Individuals who act as agents of foreign governments or foreign political parties must register with Justice's Registration Unit. Individuals who conduct a certain amount of lobbying must register with the Secretary of the Senate and the Clerk of the House of Representatives. FARA and LDA disclosure information is publicly available.
Background WIC, which began as a 2-year pilot program in 1972 and was authorized as a permanent program in 1974, is part of the nutrition safety net available to low-income women and their children. FNS provides annual cash grants for food benefits and nutrition services to fund program operations at 88 state-level WIC agencies (including agencies in all 50 states, the District of Columbia, American Samoa, the Commonwealth of Puerto Rico, Guam, the U.S. Virgin Islands, and 33 Indian Tribal Organizations). Some of these state-level agencies—those that operate the program at both the state and local levels—retain all of their federal WIC grants. Most state-level agencies, however, retain a portion of their grants and pass the remaining funds to over 1,800 local WIC agencies. In fiscal year 2000, about $2.8 billion in federal program funds were used to provide food benefits to participants. Typically, food benefits are in the form of vouchers or checks that participants can use to obtain approved foods at authorized retail food stores. An additional $1.1 billion in federal funds were used for nutrition services and program administration. Program administration includes, among other things, activities related to accounting and record keeping, outreach, monitoring and financial audits, and general management. Nutrition services include activities related to determining participants' eligibility and issuing food benefits, as well as the following:

Nutrition education: WIC offers classes, counseling, and other activities to teach participants about proper nutrition, positive food habits, and the prevention of nutrition-related problems.

Breastfeeding promotion and support: To promote breastfeeding, WIC offers individual and group counseling sessions at WIC clinics or the hospital. Breastfeeding support can include telephone or in-person consultation with breastfeeding mothers. 
Referral to health care and social services: WIC agencies provide participants with information on health care and social services and refer them to providers including immunization clinics and the Food Stamp and Medicaid programs. By law, spending for nutrition education and breastfeeding promotion and support activities combined must equal at least one-sixth of a state’s total annual expenditures for nutrition services and administration plus a target amount for breastfeeding promotion and support that is established by FNS at the beginning of each fiscal year. There is no minimum spending requirement for referral activities. Over the past 20 years, government agencies such as USDA, the Centers for Disease Control and Prevention, and GAO, as well as universities and private research organizations, have conducted a substantial body of research on the effects of the entire WIC program. Some of the accumulated body of WIC research and evaluations provides nationwide assessments of WIC’s effects. Most of it has focused on the effect of program participation on birth outcomes and the nutritional status of program participants. USDA has a review under way describing and assessing research on the diet and health outcomes of its nutrition programs, including WIC. The results of this review, set for release later this year, will provide detailed information on over 70 studies, most of which examine WIC’s effects on birth outcomes or on the nutrition status of participants. While the USDA review will not focus on the impacts of specific nutrition services, it will include studies that examined the WIC program’s effects on the initiation and duration of breastfeeding and the immunization status of children. These two health-related outcomes are directly linked to two of the three nutrition services addressed in this report—breastfeeding promotion and support and referral services. 
However, because the USDA review is generally focused on overall program impacts, its report probably will not include descriptions or assessments of many of the demonstration studies included in this report. Most of the Recent Research Evaluates Demonstrations of Special Interventions The 19 studies we identified included almost twice as many demonstration studies as impact studies. Of the 12 demonstration studies, 3 look at special interventions in nutrition education, 6 look at special interventions in breastfeeding promotion and support, and 3 look at special interventions in referral to health and social services. Of the seven impact studies, one examines nutrition education, four assess breastfeeding promotion and support services, and two evaluate WIC health referrals. Most of the studies have a relatively limited geographic scope. Among the 12 demonstration studies, 11 are at the substate level. They generally study multiple WIC sites and/or multiple counties, but without sufficient sampling rigor to draw valid statewide conclusions. The results of one demonstration study are generalized to an entire state. Among the seven impact studies, three are at the substate level, one is statewide, and three are national in scope. The 19 studies received funding from various sources. Table 1 provides details on project funding for the 12 demonstration studies and 7 impact studies. Appendix II provides detailed information on the funding sources for the 19 studies reviewed for this report. Special Interventions Improve Participant Outcomes but Research Says Little About the Effectiveness of Individual Nutrition Services While all 19 studies suffer from methodological limitations, those limitations have varying consequences. 
Despite their limitations, the results of the 12 demonstration studies suggest that special interventions have some potential to improve nutrition service effectiveness over the WIC interventions typically used, although our analysis of these studies indicates that additional resources may have to be committed to achieve this added effectiveness. However, the methodological limitations of the seven impact studies allow them to provide only very limited information on the effects of any one nutrition service. Appendixes III and IV contain lists of the demonstration and impact studies, respectively, reviewed for this report. Demonstration Studies Indicate Some Special Interventions Improve Participant Outcomes The 12 demonstration studies evaluate a range of different special interventions. To varying degrees, all were more effective than the usual WIC interventions. Examples of the special interventions include the following: Breastfeeding promotion and support. Gross and others evaluated special interventions designed to encourage breastfeeding among African-American WIC participants. Mothers in the three special intervention groups were provided a motivational video, peer counseling, or a combination of the video and counseling. Mothers in the control group received the standard WIC service, which incorporated encouragement and support to breastfeed and brochures about breastfeeding during discussions about infant feeding. Mothers in the special intervention groups were twice as likely as mothers receiving the standard WIC infant feeding education to be breastfeeding 8 weeks and 16 weeks after giving birth, even accounting for factors that could increase breastfeeding duration, such as prior breastfeeding experience. Health referrals. Birkhead and others evaluated two special interventions designed to increase the number of WIC-eligible children who receive measles immunizations. 
The special interventions included having WIC staff escort children to an on-site immunization clinic and a food voucher incentive in which WIC staff provided only a 1-month supply of vouchers to parents, rather than the usual 2-month supply, until the parents provided documentation that their children’s immunizations were up-to-date. The standard WIC immunization referral consisted of notifying parents that immunizations were due, providing information on the benefits of immunizations, and providing the names and telephone numbers of local health facilities that immunize children. Children at escort sites were about five times more likely to be immunized than children at standard referral sites; children at voucher incentive sites were about three times more likely to receive immunizations. Nutrition education. Havas and others evaluated a special intervention designed to increase WIC participants’ consumption of fruits and vegetables. The special intervention—Maryland’s “5-A-Day” program— was a series of three 45-minute group sessions taught by former WIC participants, or “peer educators,” that incorporated special visual materials and included direct mailings to participants. The standard service generally included less than 10 minutes of nutrition education conversation between WIC staff and participants when they picked up their food voucher every other month. Compared to participants receiving the standard WIC nutrition education program, participants exposed to the special intervention displayed a significant increase in nutrition knowledge and in the consumption of fruits and vegetables. Each of the demonstration studies we reviewed suffers from methodological limitations that, while not invalidating the study’s findings, should be taken into account. 
The limitations we identified are common in studies that attempt to assess the extent to which social or health program interventions—not other factors—are responsible for changes in program participants’ behaviors or health. The major methodological limitations of the demonstration studies we reviewed include the following: Lack of control group. To help isolate the effects of an intervention, an evaluation study must compare people receiving the special intervention to similar people receiving standard WIC services. The difference between these groups can provide insight into whether the special intervention is more effective than standard WIC practice. Not having such a comparison obscures the relationship between the intervention and participant outcomes. Four of the studies we reviewed had a weak research design associated with a lack of control group. For example, Hoekstra and others attempted to evaluate the effectiveness of a new voucher incentive program in increasing WIC children’s rates of immunization. However, the researchers did not compare the group receiving the special intervention to a group receiving standard services. Instead, they compared three special intervention groups to themselves at different points in time over a period of 14 months. Without a control group that does not participate in the voucher intervention, it is difficult to attribute any changes the researchers noticed to the special intervention. Inappropriate data analysis techniques. The analytic techniques used in a study must suit the available data and the research design—in particular, they should be selected for their ability to help isolate the effects of the intervention. Three of the demonstration studies we reviewed used questionable analytic techniques. For example, Havas and others found that a special peer counselor program was effective in increasing nutrition knowledge and the consumption of fruits and vegetables. 
To reach this finding, Havas and others compared the average fruit and vegetable consumption of the special "5-A-Day" group to the average fruit and vegetable consumption of a group exposed to the standard WIC nutrition education. The comparison, which examined the linkage between demographic characteristics, such as race, and fruit and vegetable consumption, did not take into account the simultaneous influence of other characteristics, such as education level, on consumption. Without an analysis technique, such as multiple regression, that can account for the influence of several factors at once, determining the extent to which the observed differences in fruit and vegetable consumption are the result of the "5-A-Day" program is greatly complicated. Selection bias. Ideally, study participants should be randomly assigned to intervention and control groups to ensure that all participant characteristics will, on average, be the same from one group to another. A selection bias exists if the two groups differ in some systematic way. Selection bias makes it more difficult to attribute an observed difference in outcomes between the two groups to any one factor, such as the intervention. Six of the demonstration studies we reviewed have a possible selection bias. For example, Tuttle and Dewey examined the influence of a new, culturally sensitive breastfeeding education intervention on the initiation and duration of breastfeeding among Hmong WIC participants in Northern California. However, the study's research design depended on women volunteering to participate in the study—the women self-selected into the special intervention. Thus, those choosing to participate in the special intervention may have shared characteristics (for example, an already existing inclination to breastfeed) that did not exist in those women who chose not to participate. 
If present, selection bias could lead the researcher to overstate the benefits of the special intervention. Missing data. Excessive missing data, or poor response rates, may skew research findings. Missing data or poor response rate was a limitation in five of the demonstration studies. For example, Ahluwalia and others evaluated five new breastfeeding interventions, and attributed significant improvements in breastfeeding initiation to them. However, the database employed by the study contained breastfeeding initiation data for only 52 percent of the women in the sample. Measurement error. For an analysis to produce reliable results, the measures in the analysis must be accurate. Measurement error is the difference between a measured value and its true value. Five of the studies have potential measurement errors. For example, Shaw and Kaczorowski sought to examine the effectiveness of a peer counseling program on breastfeeding initiation and duration by asking new mothers to recall interactions with breastfeeding peer counselors that took place at the time of birth. Mothers were interviewed 6 weeks to 6 months after giving birth. If memory lapses occurred, new mothers may have incorrectly recalled their dealings with peer counselors, thereby potentially introducing measurement error into the data. Appendix V shows the major findings and methodological limitations of each of the 12 demonstration studies. Our analysis suggests that the effective interventions described in the demonstration studies may cost more than standard WIC approaches. For example, most breastfeeding special interventions were specifically designed to increase the amount of counseling and support provided to prenatal and postpartum women. 
Although only one of the demonstration studies provided information about the additional costs associated with such interventions, it is reasonable to expect that such one-on-one support will cost more than the standard WIC program. Two of the demonstration studies help to illustrate the linkage between resource commitment and results achieved. The first, Ahluwalia and others, found that a hospital-based strategy providing bedside support and counseling to women who had just given birth was the most effective of the five new strategies evaluated at increasing breastfeeding initiation rates. This strategy clearly required more resources than the standard practice of providing counseling and brochures to participants during a visit to the WIC clinic. The second study, Weimer, which was funded by USDA, focused on a special intervention that was similarly resource-intensive. It reported that providing one-on-one support in the hospital after delivery, followed by an in-home visit within 72 hours of birth, increased breastfeeding duration. However, neither study provided any information about the additional costs needed to implement these special interventions. Only one of the 12 demonstration studies we reviewed, Hutchins and others, provided any information about the costs associated with the implementation of the special intervention. This study reported that vaccinations increased at sites providing vaccination screening and voucher incentives (until their children are immunized, a family must visit the clinic monthly—rather than every 3 months—to pick up WIC vouchers). This study uses what its authors term "crude" cost-effectiveness ratios to estimate the average cost for each additional child with up-to-date immunizations. These costs range between $30 and $73, depending on the number of enrolled children and rates of active participation. 
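A "crude" cost-effectiveness ratio of this kind divides the added cost of an intervention by the number of additional children it brings up-to-date. The sketch below illustrates the arithmetic only; the function name and all input figures are invented for illustration and are not data from Hutchins and others:

```python
# Crude cost-effectiveness ratio: average cost per additional
# child brought up-to-date on immunizations at an intervention
# site, relative to a standard-referral baseline.
# All inputs below are hypothetical illustrations.

def cost_per_additional_immunization(
    intervention_cost,   # added cost of screening/voucher incentive ($)
    enrolled_children,   # children enrolled at the intervention site
    intervention_rate,   # share up-to-date with the intervention
    baseline_rate,       # share up-to-date under standard referral
):
    # Children immunized beyond what the baseline rate would predict.
    additional_children = enrolled_children * (intervention_rate - baseline_rate)
    return intervention_cost / additional_children

# Example: $9,000 in added program costs, 1,000 enrolled children,
# coverage rising from 55 percent to 75 percent.
ratio = cost_per_additional_immunization(9_000, 1_000, 0.75, 0.55)
print(round(ratio, 2))  # 45.0 -> roughly $45 per additional immunized child
```

As the report notes, such ratios shift with enrollment and participation: the same fixed cost spread over fewer additional immunized children yields a higher cost per child.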
Impact Research Provides Very Limited Information on the Effectiveness of WIC's Individual Nutrition Services The seven impact research studies we reviewed provide few conclusive insights into the recent effectiveness of WIC breastfeeding promotion and support, referral services, or nutrition education services. Breastfeeding Promotion and Support Three of the four impact studies that focused on breastfeeding promotion and support—Schwartz and others, Balcazar and others, and Timbo and others—use old data from the 1988 National Maternal and Infant Health Survey. Although, according to FNS officials, this survey represents the most recent data available, much has changed in the program since 1988, including the characteristics of WIC participants and the emphasis the program places on breastfeeding. As a result, these studies' findings shed little light on the program's current effects. Although the fourth study, Wiemann and others, uses data from the mid-1990s, its limited scope, in terms of geography and participants, constrains the applicability of its findings. This study, with data collected from 684 adolescent mothers who gave birth at a hospital in Galveston, Texas, could have some specialized usefulness, but would have to be replicated at many other sites to provide insights into the broader effectiveness of WIC's breastfeeding promotion and support services. In addition, since adolescent mothers comprise only about 11 percent of all WIC mothers, the study's focus on them further compromises its more general usefulness. Taken as a whole, the inconsistency in the findings of these four studies further limits their usefulness in assessing the effects of WIC's breastfeeding promotion and support program. For example, Wiemann and others and Balcazar and others find that WIC enrollment is a significant factor in some mothers' decision to bottle-feed, while Timbo and others, and Schwartz and others, conclude that WIC participation increases breastfeeding. 
No consistent message emerges from the studies. Referral Services The two referral service impact studies have methodological constraints that, to varying degrees, limit their usefulness in assessing the effectiveness of WIC referral services. The first study, Suarez and others, using survey data from 30 counties in Texas, found that children who are enrolled in WIC are significantly more likely than children who are not enrolled to be up-to-date on their immunizations, regardless of other intervening factors such as the child’s age, ethnicity, or the family’s income. Although this study likely contains some measurement error, it provides at least limited evidence that WIC referral services are effective in increasing immunization rates. In contrast, due to major methodological problems, the second referral study, McCunniff and others, provides little useful information on the effectiveness of WIC referral services. This study is based on self-administered questionnaire data collected from a sample of mothers at three WIC sites in Kansas City, Missouri. The study found that when taking into account factors such as WIC referral, child’s age, household size, and availability of dental insurance, only the age of the child had a significant, independent effect on the likelihood that children will visit a dentist. There are two principal limitations to this study. First, almost 40 percent of the sampled children were younger than 1 year old. Because many children less than 1 year of age do not yet have teeth, they are much less likely to have made a dental visit, thus reducing the study’s ability to identify factors associated with dental visits other than age. The second major limitation is the measurement error associated with the reliance on self-reported questionnaire data about visits to the dentist. The study did not attempt to verify questionnaire responses through a review of dental records. 
The authors suggested that such reviews would have increased the accuracy of the self-reported data. As a result of these serious methodological problems, it is likely that McCunniff and others has only limited relevance to understanding the effectiveness of WIC referral services. Nutrition Education The one study that primarily focused on the impact of nutrition education, Fox and others, also examined breastfeeding programs and their effectiveness. However, Fox and others was limited geographically, and had other limitations that reduce its usefulness in assessing the effectiveness of WIC’s nutrition education. For example, its scope was limited in that it focused on pregnant and postpartum women at six WIC sites, in three states. Within this limited context, the study describes program and participant characteristics; the nutrition services offered (including breastfeeding promotion and support); participants’ receipt of and satisfaction with these services; and changes in participants’ knowledge and behaviors between the time of prenatal WIC certification and 4 to 6 months postpartum. The study also attempted to assess the impact of WIC nutrition education on participants’ knowledge and behavior. Although the study concluded that participants’ nutrition knowledge and behavior improved significantly over the course of the study, attributing these changes to WIC is problematic because the study did not use a control group. Instead, Fox and others compared intervention groups at different points in time. The study also concluded that (1) WIC participation did not significantly increase breastfeeding initiation or duration; and (2) women’s decisions regarding infant feeding are strongly associated with intentions formed during pregnancy. 
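The pre/post design problem noted for Fox and others can be made concrete with a small numerical sketch: without a control group, a before/after difference bundles the program's effect together with whatever change would have occurred anyway. All scores below are invented for illustration, not data from any study reviewed here:

```python
# Why a pre/post comparison without a control group is ambiguous.
# Hypothetical nutrition-knowledge scores (0-100 scale).

pre_wic, post_wic = 60.0, 70.0    # participants, before and after the program
pre_ctrl, post_ctrl = 61.0, 68.0  # similar non-participants, if observed

naive_effect = post_wic - pre_wic    # attributes the entire change to the program
trend = post_ctrl - pre_ctrl         # change that occurs even without the program
net_effect = naive_effect - trend    # change net of the background trend

print(naive_effect, trend, net_effect)  # 10.0 7.0 3.0
```

In this invented example, a pre/post-only design would report a 10-point gain, while comparison against a control group would credit the program with only 3 points, which is why the absence of a control group makes attribution problematic.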
Conclusion Demonstration studies, despite some methodological limitations, provide program managers and policymakers with some useful information about the types of WIC nutrition service interventions that can have positive impacts on participants. However, only one recent demonstration study provides any information on the costs associated with implementing various interventions. Given the limited resources available to provide WIC nutrition services, information about the costs to provide effective services could play a critical role in managers’ decisions to implement the intervention and policymakers’ decisions about funding the intervention. Recommendation In order to maximize the value of nutrition education, breastfeeding promotion and support, and referral service demonstration and evaluation research funded by USDA, we recommend that the Secretary of Agriculture direct officials responsible for implementing such research to require that this research include an assessment of the costs associated with the special intervention being evaluated. Agency Comments and Our Response We provided a draft of this report to the Department of Agriculture’s Food and Nutrition Service for review and comment. We met with Food and Nutrition Service officials, including the Acting Associate Deputy Administrator for Special Nutrition Programs. The agency officials generally agreed with the report’s findings and recommendation. However, the officials questioned why our recommendation did not address actions that WIC researchers should take to deal with some of the methodological limitations we identified in research evaluating the effectiveness of WIC services. We believe that USDA has a responsibility to ensure that the WIC and other nutrition program research it funds are of high quality. 
However, our review was not designed to examine USDA’s policies and procedures to ensure the quality of the research it funds or the practices it employs to promote high-quality research in studies it does not fund. As a result, we are not making any specific recommendations concerning how USDA might improve the quality of WIC research at this time. The officials also provided some technical changes and clarifications to the report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; interested Members of the Congress; the Honorable Ann M. Veneman, Secretary of Agriculture; the Honorable Mitchell E. Daniels, Jr., Director of the Office of Management and Budget; and other interested parties. We will also make copies available upon request. If you or your staff have any questions about this report, please contact me or Thomas E. Slomba at (202) 512-7215. Key contributors to this report are listed in appendix VI. Scope and Methodology To identify recent studies that examine the effectiveness of the Special Supplemental Nutrition Program for Women, Infants and Children (WIC) nutrition education, breastfeeding promotion and support, and referral services, we searched relevant databases, such as National Technical Information Service, Sociological Abstracts, and Wilson Social Science Abstracts. We also consulted with the U.S. Department of Agriculture (USDA) WIC program staff and other program stakeholders, including officials from the National Association of WIC Directors. Through this process, we identified 209 items published from 1988 through 2000 dealing with various aspects of the WIC program. 
To be used in our review, individual items had to meet each of the following criteria: publication in a refereed medium (for example, a journal article, book or book chapter, USDA-issued report); publication date of 1995 or later; examination of one or more of WIC’s nutrition services (breastfeeding promotion and support, nutrition education, or health referrals); and original analysis of a specific nutrition service’s effectiveness. Altogether, only 19 items met all four criteria. Many—86 of the 190 items we rejected—were published prior to 1995, and therefore do not satisfy our definition of recent studies. (We established 1995 as the cutoff to enable us to better examine the program as it currently operates.) We eliminated the remaining 104 items because they did not meet one or more of our criteria. For example, some items appeared in our literature search as professional papers delivered at conferences; thus, they did not undergo any formal referee process. Other items were published as reviews or summaries of original research, but did not include any original research of their own. Some items do not focus on the effectiveness of specific WIC nutrition services. For example, one study examines the general effects of food programs—including WIC and other food assistance programs such as food stamps—on diet, but does not evaluate the effectiveness of specific WIC nutrition service programs. Once we narrowed the scope of our study, we met with staff in the USDA’s Food and Nutrition Service Office of Analysis, Nutrition and Evaluation to ensure that our methodology did not exclude any important studies. According to these officials, our approach successfully identified all of the major recent evaluation studies on WIC nutrition services. We then conducted detailed reviews of the 19 studies. 
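The four inclusion criteria above amount to a sequential filter over the candidate items. A minimal sketch of that kind of screening follows; the field names and sample records are invented placeholders, not entries from our actual literature search:

```python
# Screening candidate publications against four inclusion criteria.
# Field names and sample records are hypothetical placeholders.

criteria = [
    lambda item: item["refereed"],           # published in a refereed medium
    lambda item: item["year"] >= 1995,       # published in 1995 or later
    lambda item: item["nutrition_service"],  # examines a WIC nutrition service
    lambda item: item["original_analysis"],  # original effectiveness analysis
]

items = [
    {"refereed": True,  "year": 1998, "nutrition_service": True, "original_analysis": True},
    {"refereed": True,  "year": 1992, "nutrition_service": True, "original_analysis": True},   # too early
    {"refereed": False, "year": 1997, "nutrition_service": True, "original_analysis": True},   # not refereed
]

# An item is retained only if it satisfies every criterion.
selected = [item for item in items if all(check(item) for check in criteria)]
print(len(selected))  # 1
```

Because an item is dropped as soon as any one criterion fails, the rejected items (190 of the 209 in our search) can fail for several reasons at once, as the examples in the text illustrate.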
These reviews entailed an evaluation of each study's research methodology, including its data quality, research design, and analytic techniques, as well as a summary of its major findings and conclusions. We also assessed the extent to which each study's data and methods support its findings and conclusions. Research Funding Sources Bibliography of Demonstration Studies Breastfeeding Promotion and Support Ahluwalia, Indu B., Irene Tessaro, Laurence M. Grummer-Strawn, and others. “Georgia’s Breastfeeding Promotion Program for Low-Income Women.” Pediatrics, Vol. 105, No. 6 (2000), pp. 85–91. Shaw, Elizabeth, and Janusz Kaczorowski. “The Effect of a Peer Counseling Program on Breastfeeding Initiation and Longevity in a Low-Income Rural Population.” Journal of Human Lactation, Vol. 15, No. 1 (1999), pp. 19–25. Gross, Susan M., Laura E. Caulfield, Margaret E. Bentley, and others. “Counseling and Motivational Videotapes Increase Duration of Breast-Feeding in African-American WIC Participants Who Initiate Breast-Feeding.” Journal of the American Dietetic Association, Vol. 98, No. 2 (1998), pp. 143–148. Weimer, Jon P. Breastfeeding Promotion Research: The ES/WIC Nutrition Education Initiative and Economic Considerations. Washington, D.C.: U.S. Department of Agriculture, Economic Research Service, 1998. Grummer-Strawn, Laurence M., Susan P. Rice, Kathy Dugas, and others. “An Evaluation of Breastfeeding Promotion Through Peer Counseling in Mississippi WIC Clinics.” Maternal and Child Health Journal, Vol. 1, No. 1 (1997), pp. 35–42. Reifsnider, Elizabeth, and Donna Eckhart. “Prenatal Breastfeeding Education: Its Effect on Breastfeeding Among WIC Participants.” Journal of Human Lactation, Vol. 13, No. 2 (1997), pp. 121–125. Tuttle, Cynthia Reeves, and Kathryn G. Dewey. “Impact of a Breastfeeding Promotion Program for Hmong Women at Selected WIC Sites in Northern California.” Journal of Nutrition Education, Vol. 27, No. 2 (1995), pp. 69–74. 
Nutrition Education Abusabha, Rayane, Cheryl Achterberg, and Jeannie McKenzie. “Evaluation of Nutrition Education in WIC.” Journal of Family and Consumer Sciences, Winter (1998), pp. 98–104. Havas, Stephen, Jean Anliker, Dorothy Damron, and others. “Final Results of the Maryland WIC 5-A-Day Promotion Program.” American Journal of Public Health, Vol. 88, No. 8 (1998), pp. 1161–1167. Referrals Hutchins, Sonja S., Jorge Rosenthal, Pamela Eason, and others. “Effectiveness and Cost-Effectiveness of Linking the Special Supplemental Program for Women, Infants, and Children (WIC) and Immunization Activities.” Journal of Public Health Policy, Vol. 20, No. 4 (1999), pp. 408– 426. Hoekstra, Edward J., Charles W. LeBaron, Yannis Megaloeconomou, and others. “Impact of a Large-Scale Immunization Initiative in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC).” Journal of the American Medical Association, Vol. 280, No. 13 (1998), pp. 1143–1147. Birkhead, Guthrie S., Charles W. LeBaron, Patricia Parsons, and others. “The Immunization of Children Enrolled in the Special Supplemental Food Program for Women, Infants, and Children (WIC).” Journal of the American Medical Association, Vol. 274, No. 4 (1995), pp. 312–316. Bibliography of Impact Studies Breastfeeding Promotion and Support Fox, Mary Kay, Nancy Burstein, Jenny Golay, and others. WIC Nutrition Education Assessment Study: Final Report. Alexandria, Va.: U.S. Department of Agriculture, Food and Nutrition Service, 1999. Wiemann, Constance M., Jacqueline C. DuBois, and Abbey B. Berenson. “Racial/Ethnic Differences in the Decision to Breastfeed Among Adolescent Mothers.” Pediatrics, Vol. 101, No. 6 (1998), pp. 11–23. Timbo, Babgaleh, Sean Altekruse, Marcia Headrick, and others. “Breastfeeding Among Black Mothers: Evidence Supporting the Need for Prenatal Intervention.” Journal of the Society of Pediatric Nurses, Vol. 1, No. 1 (1996), pp. 35–46. Balcazar, Hector, Catherine M. Trier, and Jose A. 
Cobas. “What Predicts Breastfeeding Intention in Mexican-American and Non-Hispanic White Women? Evidence From a National Survey.” Birth, Vol. 22, No. 2 (1995), pp. 74–80. Schwartz, J. Brad, Barry M. Popkin, Janet Tognetti, and others. “Does WIC Participation Improve Breast-Feeding Practices?” American Journal of Public Health, Vol. 85, No. 5 (1995), pp. 729–731. Referrals Fox, Mary Kay, Nancy Burstein, Jenny Golay, and others. WIC Nutrition Education Assessment Study: Final Report. Alexandria, Va.: U.S. Department of Agriculture, Food and Nutrition Service, 1999. McCunniff, Michael D., Peter C. Damiano, Michael J. Kanellis, and others. “The Impact of WIC Dental Screenings and Referrals on Utilization of Dental Services Among Low-Income Children.” Pediatric Dentistry, Vol. 20, No. 3 (1998), pp. 181–187. Suarez, Lucina, Diane M. Simpson, and David R. Smith. “The Impact of Public Assistance Factors on the Immunization Levels of Children Younger Than 2 Years.” American Journal of Public Health, Vol. 87, No. 5 (1997), pp. 845–848.

Demonstration Studies: Major Findings, Scope, and Major Limitations

Abusabha and others, 1998. Major findings: Nutrition education lectures and facilitated group discussions were more effective than brochures at increasing participants' nutrition knowledge; facilitated group discussion was also more effective than brochures at increasing participants' confidence in performing specific nutrition-related behaviors. Scope: seven WIC clinics in New Mexico (timeframe not specified).

Havas and others, 1998. Major finding: Consumption of fruits and vegetables increased after an education program consisting of a series of three 45-minute group sessions taught by paid peer educators and incorporating special visual materials and a direct mailing to participants. Scope: 16 WIC clinics in Maryland (timeframe not specified).

Gross and others, 1998. Major finding: The duration of breastfeeding among African-American WIC participants increased with peer counselor support or viewing promotional breastfeeding videos. Scope: four WIC sites in Baltimore, Md. (1992–1994).

Ahluwalia and others, 2000. Major finding: WIC participants increased the initiation of breastfeeding when exposed to (1) an enhanced education program with access to a hotline, (2) a free breast-pump loan program, (3) a hospital-based program with bedside support and counseling after delivery, (4) community coalitions, and (5) peer counseling provided by former participants. Scope: state of Georgia (1992–1996).

Weimer, 1998. Major findings: (1) Breastfeeding initiation and duration rates increased after volunteer peer counseling (Iowa, timeframe not specified; major limitations: missing data, selection bias); (2) breastfeeding initiation and duration rates increased after paid peer counseling (Michigan, timeframe not specified; major limitation: lack of control group); (3) rates increased when postdelivery contact with the mother in the hospital was followed up with support (including home visits) by a specially trained paraprofessional (North Carolina, timeframe not specified; major limitation: lack of control group); and (4) rates increased with culturally appropriate breastfeeding education provided in high school or WIC clinics (local agency in Guam, timeframe not specified; major limitation: selection bias).

Reifsnider and Eckhart, 1997. Major finding: Duration of breastfeeding increased after prenatal nutrition education classes focusing on breastfeeding. Scope: WIC clinics in three rural Oklahoma counties (1986).

Tuttle and Dewey, 1995. Major finding: Breastfeeding initiation rates increased among Hmong WIC participants after a culturally sensitive prenatal breastfeeding class and prenatal and postpartum counseling. Scope: seven WIC clinics in three California counties (1991–1992).

Shaw and Kaczorowski, 1999. Major finding: Breastfeeding initiation and duration increased after counseling and support provided by WIC participants trained and paid as peer counselors. Scope: WIC programs in nine West Tennessee health departments (1996–1997).

Grummer-Strawn and others, 1997. Major findings: Clinics with paid peer counselors had higher rates of breastfeeding initiation than clinics without peer counselors; clinics with a lactation specialist or consultant and peer counselors had higher rates of breastfeeding initiation than clinics with only peer counselors. However, the benefits of lactation specialists were offset when peer counselors spent at least 45 minutes with individual participants. Scope: 51 WIC clinics in Mississippi (1989–1993).

Birkhead and others, 1995. Major findings: Children were 5.5 times more likely to be immunized, and immunized more rapidly, at WIC sites where staff escorted children to a pediatric clinic in the same facility for immunization; children were almost 3 times more likely to be immunized, and immunized more rapidly, at sites with a voucher/check incentive (until immunization, a family must visit the clinic monthly, rather than every other month, to pick up WIC vouchers/checks). Scope: six WIC sites in New York City (1991).

Hoekstra and others, 1998. Major finding: Immunization rates increased at sites with a voucher/check incentive (until children are immunized, a family must visit the clinic monthly, rather than every 3 months, to pick up WIC vouchers/checks). Scope: 19 WIC sites in Chicago (1996–1997).

Hutchins and others, 1999. Major finding: Vaccinations increased at sites with vaccination screening and a voucher/check incentive (until immunization, a family must visit the clinic monthly, rather than every 3 months, to pick up WIC vouchers/checks). Scope: seven WIC sites in Chicago (1991–1993).

GAO Contacts and Staff Acknowledgments In addition to those named above, Judy Hoovler, Sara Ann Moessbauer, Corrina Nicolaou, Judy Pagano, Debra Roush, and Eugene Wisnoski made key contributions to this report. Ordering Information The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are also accepted. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013

Orders by visiting: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC

Orders by phone: (202) 512-6000; fax: (202) 512-6061; TDD: (202) 512-2537

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

To Report Fraud, Waste, or Abuse in Federal Programs

Web site: http://www.gao.gov/fraudnet/fraudnet.htm
E-mail: fraudnet@gao.gov
Phone: 1-800-424-5454 (automated answering system)
Despite methodological limitations, demonstration studies provide program managers and policymakers with some useful information on the types of Special Supplemental Program for Women, Infants, and Children (WIC) nutrition service interventions that can have positive results for participants. However, only one recent demonstration study provides any information on the costs associated with implementing various interventions. Given the limited resources available to provide WIC nutrition services, information on the cost of providing effective services could play a critical role in managers’ decisions to implement an intervention and policymakers’ decisions on funding it.
Background

Stress Test Types and Purposes

Stress testing is one of many risk-management tools used by both financial institutions and regulators. Complex financial institutions need management information systems, internal controls, and other processes that can help identify, assess, and manage a range of risks across the organization that may arise from both internal and external sources, including rapid and unanticipated changes in financial markets. Stress testing has been used throughout the financial industry for several decades, but as noted in a Federal Reserve Bank of New York staff report, before the recent financial crisis it was seen as one of many risk-management tools and was not a major component of banking regulators’ supervisory programs. The report explains that since the financial crisis, comprehensive firm-wide stress testing has become an integral and critical part of firms’ internal capital adequacy assessment processes and of banking regulators’ supervisory regimes. The expanded role of supervisory stress testing is discussed later in this report.

IMF has identified four major categories of stress testing, differentiated by purpose or goals: (1) internal risk management, used by firms to manage risks from their investment or asset portfolios and as an input for business planning; (2) crisis management, used by supervisors to assess whether institutions need additional capital during times of financial sector distress—such as with SCAP—and as an input for business restructuring plans; (3) microprudential (supervisory), used by supervisors to assess the health of an individual institution; and (4) macroprudential (surveillance), used by central banks and other authorities to analyze system-wide risks and vulnerabilities in addition to institution-specific risks. As discussed later in this report, the Federal Reserve’s CCAR and DFAST have elements of the internal risk management, microprudential, and macroprudential approaches.

Federal Banking Regulators

Federal banking regulators supervise the activities of banking institutions and require them to take corrective action when the institutions’ activities or overall performance present supervisory concerns or could result in financial losses to FDIC’s deposit insurance fund or violations of law or regulation. See table 1 for an overview of their functions.

Bank Capital

For banking institutions, capital exists to absorb unexpected losses, and the amount of capital an institution holds is critical to its ability to continue operating by making loans to businesses and consumers. The Federal Reserve, FDIC, and OCC require institutions to maintain certain minimum levels of capital to promote stability across the banking industry and protect the nation’s financial system. These requirements identify various types of regulatory capital, including common equity Tier 1 capital, additional Tier 1 capital, Tier 2 capital, and total capital. According to Federal Reserve staff, common equity Tier 1 capital is considered the most significant capital that a banking institution can have to support its operations and absorb unexpected financial losses. It consists primarily of retained earnings (the profits a bank has earned but has not paid out to shareholders in the form of dividends or other distributions) and common stock, with deductions for items such as goodwill and deferred tax assets. Tier 2 capital contains supplementary capital elements such as subordinated debt, a portion of loan loss reserves, and certain other instruments. Total capital consists of the sum of Tier 1 and Tier 2 capital. Regulators establish required capital levels in comparison with various measures of an institution’s assets, and the minimum requirements are specified as a ratio (regulatory capital ratio).
Regulators use different ratios to assess an institution’s capital adequacy. Among these are the Tier 1 risk-based capital ratio, which measures Tier 1 capital as a share of risk-weighted assets, and the Tier 1 leverage ratio, which measures Tier 1 capital as a share of average total consolidated assets. Other measures include the total risk-based capital ratio (total capital as a share of risk-weighted assets) and the common equity Tier 1 ratio (common equity Tier 1 capital as a share of risk-weighted assets).

Federal Reserve’s Stress Test Programs Are Coordinated but Serve Different Purposes

The DFAST and CCAR programs vary in terms of the firms to which they apply and in their uses. DFAST applies to a broad range of banking institutions and consists of supervisory- and company-run stress tests to generate forward-looking information about institutions’ capital adequacy for the firms’ internal use and for public disclosure. The Federal Reserve uses CCAR (which builds on information from DFAST) to quantitatively and qualitatively evaluate the capital adequacy and capital planning processes of large bank holding companies. The Federal Reserve and other bank regulators have issued similar rules for DFAST company-run stress tests as required by the Dodd-Frank Act but have differed in their use of exemptions. Several of the companies subject to Federal Reserve stress tests that we interviewed identified a range of benefits from the tests, and many described costs as well as factors contributing to those costs, although several companies did not track costs specifically related to the tests.
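The regulatory capital measures and ratios described in the Background can be illustrated with a brief sketch. All balance-sheet figures below are invented for illustration and do not reflect any actual institution:

```python
# Illustrative (invented) figures, in billions of dollars.
common_equity_tier1 = 40.0   # retained earnings + common stock, net of deductions
additional_tier1 = 5.0       # other Tier 1 capital instruments
tier2_capital = 10.0         # e.g., subordinated debt, portion of loan loss reserves
risk_weighted_assets = 500.0
average_total_consolidated_assets = 800.0

tier1_capital = common_equity_tier1 + additional_tier1
total_capital = tier1_capital + tier2_capital   # total capital = Tier 1 + Tier 2

# The four ratios described above, expressed as percentages.
cet1_ratio = 100 * common_equity_tier1 / risk_weighted_assets                   # 8.0
tier1_risk_based_ratio = 100 * tier1_capital / risk_weighted_assets             # 9.0
tier1_leverage_ratio = 100 * tier1_capital / average_total_consolidated_assets  # 5.625
total_risk_based_ratio = 100 * total_capital / risk_weighted_assets             # 11.0
```

Each ratio pairs a capital measure with an asset measure; only the leverage ratio uses unweighted average total consolidated assets, which is why it comes out lower here despite having the same numerator as the Tier 1 risk-based ratio.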
Under DFAST, the Federal Reserve and Subject Institutions Project Capital Levels under Common Economic Scenarios and Disclose Comparable Information

The Federal Reserve and a broad range of banking institutions use DFAST to (1) project how hypothetical adverse scenarios would affect an institution’s revenues and losses and ultimately its capital levels as measured by regulatory capital ratios, and (2) disclose comparable information on test results. The Federal Reserve’s primary goals for DFAST are the production of capital adequacy information for firms’ internal use and for public disclosure. The Federal Reserve aims to provide subject companies, the public, and supervisors with forward-looking information to help gauge the potential effect of stressful economic and financial conditions on the companies’ ability to absorb losses and continue operations. Federal Reserve staff we interviewed explained that the purpose of the public stress test disclosures was transparency and the promotion of market discipline by providing market participants with comparable information on the financial condition of banking institutions. The Federal Reserve also intends banking institutions to incorporate stress testing into their internal capital planning activities.

Subject Firms and Test Components

DFAST consists of supervisory- and company-run stress tests that are required by the Dodd-Frank Act and are based on a banking institution’s size and type (see table 2). Federal Reserve-supervised banks and holding companies with more than $10 billion in total consolidated assets must perform company-run tests. The Federal Reserve also conducts a supervisory stress test for bank holding companies with total consolidated assets of $50 billion or more. Banking institutions with more than $10 billion in total consolidated assets that are supervised by FDIC and OCC are subject to stress test rules issued by the agencies that also require the completion of annual company-run stress tests.
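The asset-size thresholds above can be summarized in a small decision function. This is a simplified sketch, not the regulatory text: the function name and the boolean flag are ours, and it ignores details such as institution type, supervising regulator, and transition provisions.

```python
def applicable_dfast_tests(total_assets_billions: float, large_bhc: bool) -> set:
    """Sketch of which Dodd-Frank Act stress tests apply, per the
    thresholds described above (assets in billions of dollars)."""
    tests = set()
    # Banks and holding companies with more than $10 billion in total
    # consolidated assets must perform annual company-run stress tests.
    if total_assets_billions > 10:
        tests.add("annual company-run test")
    # Bank holding companies with $50 billion or more are also subject
    # to the Federal Reserve's supervisory stress test.
    if large_bhc and total_assets_billions >= 50:
        tests.add("supervisory test")
    return tests
```

Under this sketch, a $30 billion institution would be subject only to the annual company-run test, while a $60 billion bank holding company would be subject to both the company-run and supervisory tests.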
For the supervisory tests, the Federal Reserve uses data provided by the companies and a set of models developed or selected by the Federal Reserve. Companies use their own data and models to complete the company-run tests. The supervisory stress tests performed by the Federal Reserve and the company-run stress tests conducted by DFAST firms share several key components necessary for producing post-stress capital ratios. These ratios, which are an important output of the stress tests, reflect projections of risk-weighted assets and balance sheet and income statement items under the stress scenarios and measure the amount of capital a banking institution has available to cover unexpected losses. For example, the Federal Reserve’s DFAST stress test rules generally require both types of tests to include, over a nine-quarter planning horizon, projections of revenues, losses, and the resulting regulatory capital ratios. Other areas in which the tests share some common elements include data, assumptions, and scenarios. As stated previously, companies use their own data to complete company-run tests and report similar data to the Federal Reserve that it uses to perform the supervisory tests. The Federal Reserve’s stress test rules also prescribe a standard set of assumptions that are used in both types of tests. The assumptions involve capital actions, or decisions about transactions affecting capital levels, such as raising capital by issuing new capital instruments or returning capital to shareholders through dividend payments or share repurchases. These transactions can affect the outcome of regulatory capital ratios measured by the stress tests. As required under the Dodd-Frank Act, the Federal Reserve also annually defines three stress test scenarios—baseline, adverse, and severely adverse—that it uses for the supervisory stress test and requires DFAST firms to use in the annual company-run tests. The scenarios consist of hypothetical projections for 28 macroeconomic and financial variables.
For instance, the variables include measures of the unemployment rate, gross domestic product, housing and equity prices, interest rates, and financial market volatility.

Baseline scenario. Generally reflects economic conditions expected by economic forecasters.

Adverse scenario. Features mild to moderate economic and financial stress driven by selected potential risk factors.

Severely adverse scenario. Features severe economic and financial stress, generally driven by a different set of risk factors than the adverse scenario.

The Federal Reserve’s rule requires certain elements to be included in public summaries of DFAST results. Pursuant to the Dodd-Frank Act, Federal Reserve rules for DFAST require subject banking institutions to submit their stress test results to the Federal Reserve and also to publicly disclose a summary of the results of the severely adverse scenario. The rules require the summary from the severely adverse scenario to include (1) a description of the types of risks and methodologies included in the stress test; (2) estimates of aggregate losses, pre-provision net revenue, provision for loan and lease losses, net income, and projected regulatory capital ratios; and (3) an explanation of the most significant causes for the changes in regulatory capital ratios. The Dodd-Frank Act also requires the Federal Reserve to disclose a summary of its supervisory stress test results. The Federal Reserve has done so in the aggregate and individually for each of the banking institutions subject to the supervisory tests, and the disclosures have included similar types of information as required for DFAST institutions. The use of standard approaches in the supervisory and company-run stress tests enhances the comparability and usefulness of public results disclosures.
Specifically, the common capital action assumptions, stress scenarios, and nine-quarter planning horizon used in DFAST allow for more consistent capital adequacy assessments, while also allowing for a focus on the particular characteristics of different institutions. For example, as the Federal Reserve explained in its 2016 DFAST supervisory stress test results, differences in loan loss rates across institutions reflect differences in the risk characteristics of the portfolios held by each institution (both in relation to the type of lending of each portfolio and the loans within each portfolio). Stress test rules issued by FDIC and OCC for their supervised institutions also include the use of common scenarios and test horizons.

Uses

The Federal Reserve uses DFAST results to supplement its ongoing supervision and inform its CCAR evaluations. The Federal Reserve follows two different approaches that are based on an institution’s size and type. The Federal Reserve conducts supervisory stress tests of the largest bank holding companies (those with at least $50 billion in total consolidated assets). The Federal Reserve then uses the supervisory and company-run stress test information as a basis for quantitative and qualitative CCAR evaluations. As we discuss in more detail later in the report, CCAR is a comprehensive assessment of a company’s capital adequacy and capital planning processes. For all other DFAST institutions (holding companies and banks), the Federal Reserve does not conduct a supervisory test. Rather, it uses company-run stress tests to supplement its regular, ongoing supervision. For example, internal Federal Reserve guidance states that examiners are expected to assess the quality of a firm’s stress testing process and overall results as part of the broader assessment of a firm’s capital adequacy and risk-management process.
In addition, the Federal Reserve performs targeted DFAST examinations that consider how institutions with $10 billion to $50 billion in total consolidated assets are completing DFAST stress tests and using the information they produce as part of their risk-management and capital planning processes. According to Federal Reserve documentation, in these examinations staff assess whether institutions that are not subject to CCAR have sound practices for meeting the Federal Reserve’s requirements and expectations for DFAST company-run stress tests. The examinations are structured around the requirements in the Federal Reserve’s DFAST company-run stress test rules, the standards in its final DFAST supervisory guidance for institutions with $10 billion–50 billion in assets, and the data reporting requirements in the reporting form for those companies. Staff from regional Federal Reserve Banks perform the targeted DFAST examinations. Federal Reserve staff we interviewed said that examiners performing DFAST examinations also conducted general supervisory examinations. According to Federal Reserve staff and examiner guidance, DFAST does not represent a separate supervisory assessment, and the Federal Reserve does not make supervisory decisions based solely on a company’s stress test processes or results. Instead, for banking institutions that are not subject to CCAR, the Federal Reserve considers the results of the DFAST examinations and annual company-run stress tests as one of several factors influencing an institution’s supervisory examination rating. For example, the Federal Reserve’s internal guidance instructs examiners to consider the tests as one of many tools available to assist in the assessment of a company’s capital position and planning process and not to rely primarily upon a firm’s internal stress test results in assessing overall capital adequacy or risk management.
Furthermore, Federal Reserve staff we interviewed said that DFAST was part of the overall supervisory framework for these firms and provided additional information for examiners to consider when assessing capital planning and other areas of supervisory focus. Federal Reserve guidance and staff also indicated that there were no DFAST-specific expectations for companies to meet minimum capital levels or any associated regulatory approvals (such as for proposed capital distributions).

CCAR Uses DFAST Stress Test Results and Plays a Larger and More Direct Role in Supervision

CCAR is a separate exercise in which the Federal Reserve uses information produced in DFAST as a key input to its supervisory evaluations of a subset of firms—large bank holding companies with total consolidated assets of $50 billion or more. The Federal Reserve’s goals for CCAR are to ensure that large bank holding companies have sufficient capital to withstand severely adverse economic and financial conditions and continue operations, and have strong processes for assessing their capital needs and managing their capital resources.

Subject Firms and Test Components

CCAR applies only to a subset of DFAST firms—the largest top-tier bank holding companies (with total consolidated assets of $50 billion or more) subject to the DFAST supervisory stress test (see table 3). It does not affect the other DFAST institutions—that is, certain banks and savings and loan holding companies with more than $10 billion in total consolidated assets or bank holding companies with total consolidated assets greater than $10 billion but less than $50 billion. CCAR represents a comprehensive and independent supervisory evaluation and includes a quantitative assessment of a firm’s capital adequacy, and a qualitative assessment of its capital planning processes and capital policies. (In this section, we largely focus on the quantitative assessments; we discuss the qualitative assessment in more detail later in this report.)
Federal Reserve rules promulgated in conjunction with its stress test requirements call for CCAR institutions to submit annual capital plans to the Federal Reserve that include detailed descriptions of the company’s internal processes for assessing capital adequacy, its policies governing capital actions, and planned capital actions over the nine-quarter planning horizon.

Quantitative Assessment

In the quantitative assessment, the Federal Reserve evaluates whether a company would be able to make its planned capital distributions and meet minimum capital requirements throughout the stress period based on both supervisory and company-run stress test results. The Federal Reserve coordinates key aspects of the DFAST stress tests and the CCAR quantitative assessment, such as the stress scenarios, planning horizon, and reporting requirements and time frames (see fig. 1). For example, companies submit information related to the annual DFAST stress tests to the Federal Reserve at the same time that they provide their capital plan information for CCAR. The inclusion of capital actions occurs after projecting revenues, losses, and net income, as one of the last steps in generating post-stress capital ratios. For the quantitative assessment, the Federal Reserve uses essentially the same data, models, and projections from the DFAST supervisory stress test to calculate post-stress capital ratios for each CCAR firm. But one key distinction exists—for CCAR, the Federal Reserve uses a company’s proposed capital actions rather than the standard ones prescribed for DFAST. As with the supervisory tests, a company can use the same data, models, and projections from DFAST for the CCAR company-run stress tests, but with its planned rather than standard capital actions. As Federal Reserve staff explained, the line items relating to capital actions are the only difference between the stress tests used for DFAST and CCAR.
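Because only the capital-action line items differ, the relationship between a DFAST and a CCAR post-stress ratio can be sketched with invented figures. All numbers below are hypothetical, and the simplification of holding risk-weighted assets constant is ours:

```python
# Invented nine-quarter cumulative projections, in billions of dollars.
starting_tier1_capital = 45.0
projected_net_income = -8.0    # cumulative net income under the stress scenario
risk_weighted_assets = 500.0   # assumed constant here for simplicity

# DFAST applies standard, prescribed capital action assumptions;
# CCAR substitutes the firm's own proposed capital actions.
standard_capital_actions = -2.0    # e.g., continuation of recent dividend levels
proposed_capital_actions = -6.0    # e.g., planned dividends plus share repurchases

dfast_post_stress_ratio = 100 * (
    starting_tier1_capital + projected_net_income + standard_capital_actions
) / risk_weighted_assets           # 7.0 percent

ccar_post_stress_ratio = 100 * (
    starting_tier1_capital + projected_net_income + proposed_capital_actions
) / risk_weighted_assets           # 6.2 percent
```

Larger planned distributions leave less post-stress capital against the same minimum requirements, which is why the CCAR quantitative assessment evaluates a firm's proposed capital actions rather than the standardized DFAST assumptions.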
According to these staff, standardized capital assumptions are used to project capital ratios in DFAST because the purpose of the supervisory stress test under DFAST is to estimate and disclose comparable capital adequacy information, while proposed actions are used for CCAR, which also evaluates companies’ planning processes. In addition to the change in assumptions for capital actions, CCAR firms also conduct a stress test using at least one company-designed stress scenario that is specific to CCAR and not publicly disclosed. That is, companies still use the standard supervisory scenarios (baseline, adverse, and severely adverse) and develop another scenario that also would represent stressful conditions. The Federal Reserve requires CCAR firms to focus this additional scenario on the specific vulnerabilities of the company’s business activities and exposures. As such, company-run tests using the company-designed scenario can provide greater insight into firm-specific risks than company-run tests only using the standard supervisory scenarios. The Federal Reserve has said that it considers the results of the company-run tests—under both supervisory scenarios and the company-designed scenario—in its quantitative assessment.

Inclusion of Company-Run Test Results in the CCAR Quantitative Assessment May Weaken Incentives for Severe Stress Tests

The Federal Reserve has stated that its goals for CCAR include ensuring that companies have sound capital planning and risk-management processes. Related to these goals, the Federal Reserve’s capital plan rule requires companies to use the results of company-run tests—under both supervisory scenarios and at least one stress scenario they design (company-designed scenario)—to conduct internal capital adequacy assessments that support their capital plans.
Furthermore, CCAR requirements call for companies to use the company-designed scenario to stress the specific vulnerabilities of their risk profile and operations, including those related to the company’s capital adequacy and financial condition. The Federal Reserve uses the results of company-run tests in performing its CCAR quantitative assessments and has stated that a company will not receive an objection to its capital plan based on the assessment if it can meet minimum regulatory capital requirements under the company-run stress tests as well as the supervisory tests. However, this could weaken incentives for companies to create meaningful and severe stress tests that are useful for capital planning and risk management. Based on data we analyzed for CCAR 2013 through CCAR 2015, post-stress capital ratios for the company-run tests (under both the supervisory severely adverse scenario and the company-designed scenario) were higher than for the supervisory tests a substantial majority of the time (see fig. 2). That is, capital ratios declined less under the company-run tests than under the Federal Reserve’s, indicating that the companies’ results generally were further from breaching minimum capital requirements and thus less stressful. Federal Reserve staff acknowledged that the inclusion of the company-run tests—specifically the test using the company-designed scenario—in the CCAR quantitative assessment may provide an incentive for companies to create less severe stress tests that would not generate losses large enough to breach minimum regulatory capital requirements. But the staff did not believe these negative incentives warranted the elimination of the company-run stress test and added that the Federal Reserve could mitigate this risk through its evaluation of companies’ stress testing practices—including their scenarios and models—in the CCAR qualitative assessment.
However, stress test modeling and scenario design involve considerable judgment, and companies could make subtle changes that would not indicate manipulation or necessarily fail to meet Federal Reserve standards. Federal Reserve staff told us that the company-designed scenario provided an additional view on risk that increased the variety of stresses to which companies were subject under CCAR. In addition, Federal Reserve guidance for the largest and most complex companies indicates that the firms should have more sophisticated models for their most material portfolios than for portfolios that are less significant. However, because the company-run tests typically produced higher post-stress capital ratios, they may not have meaningfully contributed to the CCAR quantitative assessment. Furthermore, when we discussed with Federal Reserve staff why the Federal Reserve does not require disclosure of results based on company-designed scenarios, they explained that if firms were required to disclose such results, they might focus on producing positive results rather than using the scenario to genuinely identify their most salient risks. Using the company-run test results (from supervisory and company-designed scenarios) in the quantitative assessment creates similar conflicting incentives for companies, which could limit the benefits of the tests and the achievement of Federal Reserve goals.

CCAR Qualitative Assessments

The Federal Reserve also uses DFAST-related information in its CCAR qualitative assessments, which we discuss in additional detail later in this report. For CCAR firms, the Federal Reserve’s qualitative assessment represents a dedicated and wide-ranging assessment of their capital planning processes. A substantial part of the assessment involves examining how companies perform stress tests—including those required under DFAST—and incorporate them in overall risk management and capital planning.
As part of this effort, the Federal Reserve assesses different aspects of the processes companies use to generate the DFAST company-run and other stress tests. Among other areas, this includes examining whether companies meet Federal Reserve expectations related to risk identification, scenario design, and loss and revenue estimation. For example, the Federal Reserve considers how a company’s stress testing practices capture the potential increase in losses or decrease in revenue that could result from the firm’s risks, exposures, and activities under stressful scenarios.

Semiannual Tests

Based on Federal Reserve Dodd-Frank Act requirements, institutions subject to CCAR must conduct company-run stress tests semiannually. The semiannual and annual stress tests are similar, but for the semiannual tests the Federal Reserve requires companies to develop and use at least three scenarios appropriate for their own risk profile and operations. The Federal Reserve has not used these tests as part of its annual CCAR quantitative assessment, and their role in the qualitative assessment has been limited. According to Federal Reserve staff, the Federal Reserve has used the semiannual tests significantly less than other DFAST stress test components and has not done the type of rigorous assessment of the semiannual stress test that it has done for the annual tests. However, Federal Reserve staff and internal guidance that we reviewed indicated that information from the semiannual tests could be used to help identify areas requiring additional focus in future CCAR cycles. For example, Federal Reserve staff said they performed limited reviews of companies’ semiannual stress test submissions—including information on stress scenarios and test results—to identify anomalies and other insights such as structural portfolio, modeling, or scenario changes.

Uses and Disclosure

CCAR plays a larger and more direct role in the supervision of subject institutions than DFAST.
For example, in its publications of CCAR results the Federal Reserve has stated that it has made CCAR a cornerstone of its supervision of the largest and most complex financial institutions. The Federal Reserve uses the CCAR quantitative and qualitative assessments to determine whether to object or not object to an institution’s capital plan (including proposed capital actions such as dividend payments and share repurchases that affect the firm’s capital levels). The Federal Reserve can object to a company’s capital plan based on the quantitative or the qualitative assessment. Federal Reserve staff also stated that the Federal Reserve has moved the focal point of its supervisory process for the largest firms toward the promotion of strong capital adequacy and liquidity planning, in addition to assessing the firms’ preparedness for recovery and resolution in bankruptcy. The staff said that the CCAR qualitative assessment is a primary contributor to overall supervision and a focal point for evaluating a company’s risk-management and internal controls. They explained that strong capital and liquidity planning requires companies to identify and measure risks, understand how risks change in adverse economic scenarios, and have robust internal controls and governance, all of which are elements of the qualitative assessment under CCAR. The Federal Reserve also has been integrating the CCAR qualitative assessment into its regular, ongoing supervisory activities. According to the Federal Reserve’s instructions for the 2016 CCAR cycle, it is to conduct certain supervisory activities throughout the year that inform the annual CCAR qualitative assessment—which allows the Federal Reserve to consistently incorporate supervisory findings from all its examination work into the overall qualitative assessment. 
For example, Federal Reserve staff we interviewed said that if their supervision identified weaknesses in a company’s internal audit functions, these weaknesses would be relevant to the internal controls and governance assessments within CCAR. Federal Reserve staff explained that as part of the integration of CCAR into year-round supervision, the Federal Reserve has established teams of subject-matter experts from across the Federal Reserve System to link their work with CCAR. For example, the Federal Reserve has already gathered subject-matter experts in loss and revenue modeling to contribute to CCAR throughout the year. The Federal Reserve publishes firm-specific CCAR results separately from its DFAST disclosures. Since it initiated CCAR in 2011, the Federal Reserve has increased disclosure of firm-specific CCAR results, including its supervisory assessments. After the initial CCAR cycle, the Federal Reserve published only an overview of its objectives and methodology and did not reveal whether it had objected or not objected to an institution’s capital plan. The Federal Reserve currently discloses firm-specific capital ratio projections (based on supervisory stress test results and firms’ proposed capital actions) and its decision to object or not object based on the quantitative or qualitative assessment. The Federal Reserve also discloses the reasons for any qualitative objections.

Regulators Developed Comparable Stress Test Rules, but OCC Has Made Greater Use of Its Supervisory Flexibility

The Federal Reserve and other bank regulators issued similar company-run stress test rules but have used different degrees of supervisory flexibility to implement them, with OCC granting the most extensions or exemptions for firms.
In addition to the Federal Reserve, the Dodd-Frank Act called for certain financial regulatory agencies, including OCC and FDIC, to issue rules requiring the financial companies they supervise with more than $10 billion in total consolidated assets to conduct stress tests. OCC and FDIC rules apply to certain banks and savings associations, some of which may have a holding company subject to the Federal Reserve’s stress test rules. The act required the agencies to coordinate their stress test rules so that they are consistent and comparable. In 2012, the Federal Reserve, OCC, and FDIC each adopted substantively similar rules implementing this requirement. For example, the agencies’ rules have similar reporting time frames, public disclosure requirements, and stress test methodologies and practices. However, OCC has made greater use of supervisory flexibility in implementing the stress test requirements. Each agency’s rule allows for the acceleration or extension of time frame requirements, while OCC’s and FDIC’s rules also include provisions for them to modify any or all of the stress test requirements. Furthermore, OCC staff stated that the agency had interpreted the Dodd-Frank Act as not requiring all banking institutions within a bank holding company to conduct separate stress tests. But according to information from the Federal Reserve and FDIC, banking institutions covered by their rules are required to conduct stress tests regardless of their status within a holding company structure. Each of the agencies extended the time frames for a limited number of firms. As shown in table 4, the Federal Reserve provided five firms with extensions of time—ranging from 3 to 12 months—to conduct and report the results of required company-run stress tests. FDIC provided a 1-year extension to two firms for the first stress test cycle in 2013, as shown in table 5. 
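As a purely illustrative sketch of the applicability differences described above, the $10 billion threshold and the agencies' differing treatment of banks inside a tested holding company can be pictured as a simple check. The function, entity values, and the deterministic OCC branch below are assumptions invented for illustration; in practice OCC granted such exemptions case by case, not automatically.

```python
# Hypothetical illustration of the Dodd-Frank Act $10 billion company-run
# stress test threshold and the differing agency interpretations described
# above. This is a simplified sketch, not the regulators' actual logic.

ASSET_THRESHOLD = 10_000_000_000  # total consolidated assets, in dollars

def must_stress_test(total_assets, inside_tested_holding_company, regulator):
    """Return True if the institution must run its own company-run stress test.

    Under the Federal Reserve's and FDIC's reading, a covered bank tests
    regardless of its status within a holding company structure. OCC
    interpreted the act as not requiring separate tests for every banking
    institution within a tested bank holding company; here that is
    simplified to an automatic exemption, though OCC actually decided
    case by case.
    """
    if total_assets <= ASSET_THRESHOLD:
        return False  # below the statutory threshold
    if regulator == "OCC" and inside_tested_holding_company:
        return False  # simplified stand-in for OCC's case-by-case exemptions
    return True

# A $12 billion bank whose parent holding company already runs stress tests:
print(must_stress_test(12_000_000_000, True, "FDIC"))  # True
print(must_stress_test(12_000_000_000, True, "OCC"))   # False
```

The point of the sketch is only that identical institutions could face different requirements depending on their regulator, which is the consistency concern the report raises.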
OCC has granted more stress test extensions than FDIC and the Federal Reserve and is the only agency to have approved exemptions from stress test requirements. OCC issued 14 one-year extensions—including three consecutive extensions to one firm—and two shorter extensions for stress test cycles from 2013 to 2016 (see table 6). Three firms were granted stress test exemptions for one or more of the stress test cycles from 2013 to 2016, including a firm that had been exempted for three cycles. The firms OCC exempted were part of a larger banking organization in which both the parent holding company and affiliated bank were required to conduct stress tests. The reasons cited for granting extensions and exemptions varied among the regulators. The time extensions issued by the Federal Reserve and FDIC generally occurred in the initial stress test cycles to allow additional time for firms to implement effective stress testing systems and compile necessary data. Most of the Federal Reserve’s extensions were technical in nature. One firm reduced its total consolidated assets below the $10 billion threshold and represented that it intended to maintain its assets below the threshold for the foreseeable future. Others involved mergers, including an institution that had made a business transaction based on the timing set forth in the Federal Reserve’s original stress test rules, which were subsequently revised, resulting in the institution becoming subject to the requirements sooner than expected. Federal Reserve and FDIC staff stated that they based their determinations for granting extensions on applicable regulatory provisions and the relevant facts and circumstances of each case. OCC also issued extensions to provide institutions that had not previously been subject to regulatory stress test requirements with additional time to construct or enhance their stress testing frameworks.
OCC staff noted that such institutions lacked the necessary infrastructure to complete a stress test submission at the time its final rule took effect. OCC granted extensions and exemptions from stress test requirements for different reasons. For example, according to OCC staff and documents we reviewed, OCC granted extensions and exemptions based on the extent of an institution’s activities, its relative size within a large bank holding company, and whether the larger entities in the holding company structure—such as the lead bank and parent holding company—were subject to stress test requirements. The bases for the extensions and exemptions also included OCC’s conclusions about the satisfactory nature of an institution’s financial condition and capital levels and the nature of its business activities and strategies. Our prior work on financial regulatory reform identified an important characteristic of consistent financial oversight—that similar institutions and risks, among other areas, should be subject to consistent regulation, oversight, and transparency—to help minimize negative competitive outcomes while harmonizing oversight. Without a consistent approach to implementing the stress test rules, regulators may not be regulating financial institutions that pose similar risks in a similar manner, which could contribute to competitive disadvantages between institutions and inconsistent oversight of risk management. Additionally, financial institutions often have options for how to structure their business, which can affect which agency regulates them. For instance, banks can change their charters if such a change will allow them to have a regulator perceived as having less stringent regulations.

Firms Generally Indicated That Stress Tests Offered Important Benefits but Also Required Substantial Resources

Several companies subject to DFAST and CCAR that we interviewed identified a range of benefits but also described significant costs related to these exercises.
We discussed the Federal Reserve’s stress test exercises with 19 bank holding companies, including 13 of the 31 companies that participated in CCAR in 2015. We also reviewed Federal Reserve statements and interviewed staff about costs and benefits.

Companies’ Views on Benefits

The company officials we interviewed generally identified overall improvements in risk management and capital planning attributable to DFAST and CCAR, as the following examples illustrate. Several firms said that their prior stress test efforts were fragmented, with different business units across the firm assessing risks independently. The firms said that Federal Reserve stress test requirements have led to more comprehensive, enterprise-wide, and forward-looking capital adequacy assessments, including the identification and measurement of risks. One firm said the consolidation of the firm’s stress testing and its integration with capital adequacy assessments had helped the firm quantify its risk appetite, something it had not accomplished prior to DFAST and CCAR. Some firms also identified key benefits from improved data quality and capabilities, including enhanced data collection, analysis, and reporting. Some firms said that the stress test exercises have led to a stronger focus on the governance of capital adequacy processes and increased involvement of senior management and the board of directors in capital planning decisions. Firms that distinguished between the benefits of different stress test components generally said that CCAR has been more beneficial than DFAST and that company-run tests have been more useful than the supervisory tests. Furthermore, several companies explained that the risk management and capital planning improvements have provided additional benefits for their business operations.
Several firms’ officials said that the stress tests have helped improve their business decisions, including by taking a more strategic approach to capital, developing tools to analyze portfolio risks, and facilitating business planning and budgeting. For example, according to officials from one CCAR firm, the stress tests have led to better pricing decisions, strategic and investment focus, and optimization of investor returns. In addition, one smaller company that has been subject to DFAST but not CCAR said that it has used stress test information in its strategic decisions about different markets and geographical locations and incorporated information about risks from asset portfolios in its pricing decisions. Finally, several companies also identified broader, system-wide benefits related to the stress tests. Some firms said that the stress tests have led to higher capital levels and improved risk management that have contributed to the stability of the financial system. Other firms noted that comparable stress test results provide an industry-wide view of capital adequacy and comparisons across companies, which can offer a broader view of relative risks than individual firm assessments.

Companies’ Views on Costs

While companies we interviewed generally recognized benefits, they also cited costs in complying with stress test requirements. Officials from many of the companies we interviewed indicated that DFAST and CCAR have resulted in significant costs, including for staff resources and other expenditures. Most firms said that stress test-related costs have increased from year to year or were expected to continue increasing, although some firms noted that costs have stabilized or declined. Several firms cited what they viewed as the Federal Reserve’s continually increasing supervisory expectations—in particular those related to CCAR qualitative requirements—as a main reason to expect continued growth in stress test-related costs.
At the same time, the firms stated that they have not collected information on the specific costs directly attributable to DFAST and CCAR. Firms generally said that measuring stress test-specific costs is difficult because the tests involve many employees from around the company who have responsibilities beyond DFAST and CCAR. About half of the companies we interviewed provided estimates of their stress test-related costs or staff resources, which varied widely. For example, for the six CCAR firms that provided cost estimates, recurring annual costs related to both DFAST and CCAR ranged from $4 million–$7 million at the low end to more than $90 million at the high end. Half of the estimates were for $15 million to $30 million in annual costs. Some CCAR firms provided estimates of the amount of staff resources rather than costs. These estimates varied from about 100 staff for one company to approximately 500 employees with part-time responsibility for the stress tests, plus an additional 2,000-plus employees spending part of their time supporting the stress tests, for another company. In addition, more than a third of the companies said that they used consultants—often to a significant extent—to help complete the work required for DFAST and CCAR. For the non-CCAR firms that we interviewed, cost estimates ranged from around $250,000 to $2 million. Many of the companies we interviewed identified particular factors behind what they viewed as the substantial costs required for the stress test exercises. These included requirements related to the CCAR qualitative assessment, such as documentation of processes and controls and supervisory expectations for model development and validation. Several companies noted costs from integrating or upgrading stress test-related technology and risk-management systems and from the vast amounts of detailed data needed to complete the stress tests. Many firms pointed to expenditures for consultants to assist with their stress testing efforts.
Officials from one CCAR firm said that they hire consultants for many tasks, including modeling, documentation, and technical writing; another firm cited the risk-identification process and model risk management as two main drivers of consulting expenditures. Some firms also stressed that competition among companies for qualified staff to perform stress testing had increased the cost of individuals with quantitative skills and modeling expertise. Two firms said that the supply of qualified employees has been low compared to the demand created by the Federal Reserve’s stress test requirements. In addition to direct costs, some firms indicated that they also faced costs related to holding excess capital to ensure they did not receive an objection from the CCAR quantitative assessment. In the CCAR quantitative assessment, the Federal Reserve can object to a company’s capital distribution plan if stress test results show the company’s post-stress capital ratios falling below required minimum levels. Some of the companies said that they have held more capital than they otherwise would to account for differences between their stress test results and the Federal Reserve’s supervisory test results. The firms stated that limited transparency about the Federal Reserve’s supervisory stress test models leads to uncertainty about exactly how much capital they need to hold to avoid an objection. The companies’ estimates of the amount of additional capital they have held to avoid a CCAR objection ranged from about $500 million for one firm to $15 billion for another firm.

Federal Reserve Views on Benefits and Costs

The Federal Reserve has stated that the stress tests have provided important benefits to the financial system and subject institutions. In issuing the CCAR results for 2016, the Federal Reserve stated that the increased capital levels of large bank holding companies since the financial crisis have been at least in part due to the stress test exercises.
In addition to helping strengthen large firms’ capital positions, Federal Reserve officials have noted other substantial contributions that DFAST and CCAR have made to financial supervision in several key areas. First, officials identified improved risk management, internal controls, and governance practices at institutions subject to DFAST and CCAR. Second, officials noted that the stress test-related exercises have led to a more forward-looking and stress-based approach to assessing capital adequacy by the Federal Reserve and institutions, which represents an improvement over previous practices focused on traditional regulatory capital ratios (which only reflect past performance). Third, officials observed that the horizontal nature of the stress tests—with a simultaneous review across multiple firms—has provided the Federal Reserve with a more consistent and industry-wide perspective on potential risks and vulnerabilities. Fourth, some officials identified greater supervisory transparency associated with DFAST and CCAR, including disclosure of firm-specific information that could lead to greater market discipline and information on the Federal Reserve’s framework and methodology that could contribute to supervisory accountability. According to Federal Reserve staff we interviewed, the recent financial crisis revealed that banking institutions—including the largest and most complex firms—had significant shortcomings and gaps in their risk-measurement and risk-management systems. These deficiencies included limitations in the collection and use of data for risk identification and management and in the ability to assess potential risks to the company during periods of stress, such as a lack of data on firm risks and exposures.
The staff noted that these banking institutions needed to make investments to improve these fundamental risk-management capabilities because the largest banking institutions’ financial stability has implications for the financial system and economy. The staff indicated that costs to the firms will normalize at some point and that the Federal Reserve did not expect costs to continue increasing. They explained that initial costs to establish the necessary data, risk management, internal controls, governance, and stress testing capacity can be high, but future costs will be lower once firms establish the needed capabilities and processes and required data are available. They noted that although some of the new costs come from the stress test exercises alone, even without the explicit DFAST and CCAR requirements, firms still would have to take these actions and incur costs because of broader supervisory expectations for enhancing risk management, internal controls, and governance processes, including around capital adequacy assessments.

CCAR Qualitative Assessments Include Multiple Levels of Review, but Communication of Methodology and Expectations Was Limited

The Federal Reserve has identified capital adequacy principles and established an organizational and oversight structure for assessing qualitative CCAR submissions. The assessment framework includes processes to help ensure consistency across evaluations. In the qualitative assessment, the Federal Reserve uses ratings and rankings to compare firms’ capital planning practices against supervisory expectations. However, it has not disclosed sufficient information to allow for a clear understanding of its methodology.
The Federal Reserve has provided companies with information on supervisory expectations and peer practices related to the qualitative assessment, but the infrequent timing of these communications and evolving peer practices can pose challenges to companies that must meet the expectations annually.

The Federal Reserve Identified Principles and Established an Organizational Structure for Qualitative Assessments

As discussed previously, CCAR qualitative assessments are comprehensive reviews of the capital planning processes and capital policies of large bank holding companies. The Federal Reserve has structured the evaluations for its qualitative assessments around seven principles of an effective capital adequacy process (see table 7), which it has identified in public guidance documents. The principles cover different assessment topics, including risk management, stress testing practices, capital policies, internal controls, and overall governance of capital planning. The seven principles each represent distinct aspects of a CCAR evaluation and, according to Federal Reserve staff, each principle could influence others (with a deficiency in one principle often highlighting a deficiency in another principle). For example, weaknesses across different principles can signal a weakness in effective governance.

Organizational Structure for Qualitative Assessments

The Federal Reserve has established a tiered organizational structure for its CCAR qualitative assessments, with roles and responsibilities assigned throughout the CCAR program. Federal internal control standards state the importance of establishing an organizational structure and clearly assigning responsibility for key roles. According to interviews with Federal Reserve staff and our review of internal agency documents, the Federal Reserve’s structure for completing CCAR qualitative assessments is headed by the Director of the Division of Banking Supervision and Regulation.
The entities within the Federal Reserve that have roles in the process range from Reserve Bank examiners to the Board of Governors.

Evaluation teams. Teams of designated staff from across the Federal Reserve System initially review and evaluate companies’ capital plan submissions for the qualitative assessment. The two types of staff teams involved are (1) on-site examination teams (supervisory on-site teams), which consist of staff from the Reserve Bank that oversees the firm; and (2) subject-matter experts (horizontal evaluation teams) assigned to assess specific aspects of capital planning and stress testing for each CCAR firm.

CCAR Executive Committee. The Executive Committee manages the CCAR program and holds ultimate responsibility for the program’s design and execution. The Executive Committee is chaired by a senior officer from the Board of Governors staff and comprises senior staff from across the Federal Reserve System, including senior staff from the Large Institution Supervision Coordinating Committee (LISCC) Operating Committee, the Large and Foreign Banking Organizations (LFBO) Management Group, and the Division of Financial Stability, which monitors financial markets and analyzes potential threats to financial stability. The Executive Committee reviews the evaluation teams’ assessments and provides final ratings for each principle at each company and consolidated rankings of all companies to the LISCC Operating Committee and LFBO Management Group. While the Executive Committee is ultimately responsible for the CCAR program, it delegates execution and administration to the CCAR Program Oversight Group. The CCAR Program Oversight Group works with evaluation teams during their assessments to maintain consistency before providing conclusions to the Executive Committee.

LISCC Operating Committee.
The LISCC Operating Committee is responsible for setting priorities for and overseeing the execution of the LISCC supervisory program, which covers the largest and most systemically important financial institutions subject to Federal Reserve oversight. The Operating Committee is chaired by a senior officer from the Board of Governors staff and includes senior officials from various divisions at the Board of Governors and Reserve Banks. The LISCC Operating Committee chair reports to the Director of Banking Supervision and Regulation. For the CCAR qualitative assessment, the Operating Committee provides final recommendations to the Director of the Division of Banking Supervision and Regulation to object or not object to the capital plans of companies in the LISCC portfolio.

LFBO Management Group. The Management Group oversees the supervision of large institutions ($50 billion or more in total assets)—including foreign banking organizations—not included in the LISCC portfolio. In CCAR, the LFBO Management Group reviews and provides feedback to supervisory on-site teams on their company-specific object or non-object recommendations for these firms. The LFBO Management Group does not provide final recommendations to the Director of the Division of Banking Supervision and Regulation. Instead, the Reserve Bank responsible for each non-LISCC firm determines final recommendations in consultation with Board staff.

Director of Banking Supervision and Regulation. The Division of Banking Supervision and Regulation oversees and develops regulations for Federal Reserve-supervised banking institutions. For CCAR qualitative assessments, the Director of Banking Supervision and Regulation makes the final recommendations to the Board of Governors to object or not object to each firm’s capital plan.

Board of Governors. The Board of Governors has ultimate decision-making authority for CCAR qualitative assessment determinations.
Federal Reserve CCAR staff brief the Board of Governors on the final recommendations approved by the Director of Banking Supervision and Regulation. According to Federal Reserve staff, the Board reviews all assessments but to date has only voted on whether to implement recommendations to object or conditionally not object to a company’s capital plan.

Scope of Qualitative Assessments

Federal Reserve procedures call for staff to adjust the scope of the qualitative assessment based on a firm’s size, characteristics, and the materiality of risks it poses to the financial system. According to these procedures, the Federal Reserve varies the scope of its reviews based on a company’s size and complexity, so that not all companies are assessed on every aspect of the qualitative assessment each year. However, for LISCC companies, the Federal Reserve reviews all key aspects of their capital planning and capital adequacy processes over the course of an annual CCAR cycle. For non-LISCC companies, the Federal Reserve procedures dictate a risk-focused approach to identify significant aspects of a company’s capital adequacy process that it will assess in that year’s CCAR cycle. According to Federal Reserve staff, this approach places priority on reviews of companies with larger risks and systemic importance by assigning additional resources based on company characteristics. These staff also said that risk-focused reviews help to efficiently allocate Federal Reserve staff resources and that the focus can change from year to year based on the Federal Reserve’s views on certain risk areas. For example, in 2016 the Federal Reserve horizontal evaluation teams reviewed only certain areas of their assigned principles for non-LISCC companies. Staff further explained that, when applying this approach, the Federal Reserve instructs its staff to avoid focusing on risks that may be immaterial.
Federal Reserve staff said that the risk-focused approach influences the staffing of evaluation teams for the qualitative assessment. For example, horizontal evaluation teams involved in the qualitative assessment include subject-matter experts from across the Federal Reserve System, and the risk-focused approach helps ensure an appropriate number of subject-matter experts can be assigned to the teams.

Federal Reserve Has Established Processes to Help Ensure Consistency across Qualitative Assessments

According to our examination of Federal Reserve documentation, the process for qualitative assessments includes procedures, documentation, and training intended to help ensure consistency across the reviews, as well as multiple levels of review and oversight. A number of teams are involved in CCAR evaluations (see fig. 3). As previously discussed, the Federal Reserve assigns two types of teams to review qualitative submissions: supervisory on-site teams and horizontal evaluation teams. In CCAR, supervisory on-site teams are to assess company submissions based on evaluation principles 1, 4, 5, 6, and 7. Supervisory on-site teams are also responsible for documenting Federal Reserve staff recommendations as their assigned company’s submission progresses through the assessment process and for making their own recommendations to the LISCC Operating Committee and the LFBO Management Group.

Cross-Firm Evaluations

To help ensure a comprehensive and consistent evaluation across companies, the Federal Reserve has divided specific responsibilities among horizontal evaluation teams. A designated team or set of teams is to assess areas corresponding to a specific evaluation principle for all the CCAR submissions. While both firm-specific and horizontal evaluation teams are to perform principle-based assessments, in some cases the horizontal evaluation teams are to provide the assessment in relation to a specific principle and in other cases to supplement the work of on-site teams.
The horizontal evaluation teams are broken out into risk evaluation teams, capital and revenue assessment teams, and the capital adequacy process review team.

Risk evaluation teams evaluate a company’s ability to measure its risk exposures under stress and create estimates of potential losses from these risks. The teams assess and propose ratings for elements covered by principle 2, which encompasses six different areas of loss-estimation practices.

Capital and revenue assessment teams evaluate and propose ratings for elements covered by principle 3, which involve a company’s ability to effectively forecast available capital resources—including revenues and expenses—during stress periods.

The capital adequacy process review team supplements the work of the supervisory on-site teams. Specifically, the process review team provides a cross-firm assessment of supervisory on-site teams’ evaluations of certain areas of capital planning—such as risk management, capital policy, internal controls, and governance—while also supporting the on-site teams in making their assessments. According to CCAR program documents, the process review team performs an oversight function by providing objective assessments of some elements of companies’ capital plans and by evaluating the reasonableness, consistency, and completeness of the supervisory on-site teams’ assessments. The process review team produces its own summary assessments of principles 1, 4, 5, 6, and 7 to provide the Executive Committee and on-site teams with a horizontal perspective (for instance, the range of practice in key areas of focus across all CCAR companies).

Another team—the scenario evaluation team—assists supervisory on-site teams by assessing the completeness and severity of companies’ internally developed stress scenarios for all CCAR firms.
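The division of assessment responsibilities described above can be summarized as a simple mapping of the seven principles to the teams that evaluate them. The structure below is paraphrased from the report's description; the team labels and the idea of a lookup table are illustrative, not an official Federal Reserve assignment scheme.

```python
# Illustrative mapping of the seven capital adequacy process principles to
# the evaluation teams described above. On-site teams cover principles 1, 4,
# 5, 6, and 7; risk evaluation teams cover principle 2; capital and revenue
# assessment teams cover principle 3; and the process review team provides a
# cross-firm check of the on-site teams' principles.

PRINCIPLES = range(1, 8)  # principles 1 through 7

TEAM_ASSIGNMENTS = {
    "supervisory on-site teams": {1, 4, 5, 6, 7},
    "risk evaluation teams": {2},                      # loss estimation under stress
    "capital and revenue assessment teams": {3},       # forecasting capital resources
    "capital adequacy process review team": {1, 4, 5, 6, 7},  # cross-firm oversight
}

def teams_for(principle):
    """List every team that assesses a given principle."""
    return sorted(team for team, covered in TEAM_ASSIGNMENTS.items()
                  if principle in covered)

print(teams_for(2))  # ['risk evaluation teams']
print(teams_for(5))  # the on-site teams plus the process review team
```

The mapping makes the overlap visible: principles 1, 4, 5, 6, and 7 are assessed twice, which is the oversight redundancy the report attributes to the process review team.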
Internal CCAR program documents also establish expectations and mechanisms for teams to resolve any differences that arise during their evaluations through communication and collaboration. The program documents direct teams to send unresolved differences to the Executive Committee for further and final deliberation, if needed.

Training

According to CCAR procedures, the CCAR Executive Committee oversees centralized training for staff participating in the CCAR program, which helps support a consistent approach to evaluations. Federal Reserve staff said that all staff involved in developing and executing CCAR must take annual training and that staff involved in qualitative assessments participate in additional training throughout the course of each CCAR cycle. Training materials have included overviews of the CCAR assessment framework, decision-making process, review processes, and documentation requirements. Horizontal evaluation team leads and subject-matter experts participate in additional technical training on topics such as new modeling techniques, modeling strengths and weaknesses, and leveraging information from past CCAR cycles, according to Federal Reserve staff. In addition, on-site teams and other relevant staff may be required to participate in training on principles relevant to their work as well as lessons learned from past CCAR cycles.

Documentation

Both types of evaluation teams are to record their findings and conclusions using template forms with instructions that further promote consistency across different teams and evaluations. The supervisory on-site teams and horizontal evaluation teams each are to produce memorandums detailing conclusions related to their assigned capital adequacy process principles. The conclusion memorandum contains a summary of a company’s practices related to the principle being evaluated and describes identified weaknesses and shortcomings.
For each principle, the relevant teams are to record a proposed rating (on a scale of one to four), trend (such as stable, improving, or declining), and an overall assessment of the team's conclusions. In addition, the teams' conclusion memorandums are to highlight trends in a company's practices compared to prior CCAR exercises and the observed range of current industry practice. Supervisory on-site teams are also to produce recommendation memorandums for object and non-object decisions, which are to provide overall conclusions and support for a team's recommendation. The memorandums are to describe potential new supervisory findings, including actions the company should take to remediate issues. They also are to provide a summary and status of outstanding CCAR-related supervisory issues (including matters requiring attention and matters requiring immediate attention).

Multiple Levels of Review

The review process is to start after the two groups of evaluation teams have completed their assessments of each CCAR company's capital plan and capital adequacy processes and formally documented their conclusions. According to CCAR procedures, the teams are to submit their evaluations and conclusions to successive groups of senior management for additional review, including the CCAR Executive Committee (see fig. 4). As part of the assessment process, supervisory on-site teams are to propose object or non-object recommendations based on the CCAR Executive Committee's review and conclusions. CCAR procedures call for Executive Committee staff to lead sessions with evaluation teams to review team findings and ensure evaluations are conducted in a consistent and comparative manner across all companies. The Executive Committee is to review findings from these sessions and develop overall assessments for each company.
These overall assessments are to include findings from the sessions with evaluation teams and other CCAR information and may differ from evaluation teams’ original conclusions. In developing its overall assessments, the Executive Committee is to make adjustments to reflect different supervisory expectations for companies of various sizes and levels of complexity, according to the procedures and Federal Reserve staff. The supervisory on-site teams are to develop internal recommendation memorandums based on the review sessions and Executive Committee assessments. CCAR procedures call for supervisory on-site teams to provide the recommendation memorandums to the LISCC Operating Committee or the LFBO Management Group and to the Reserve Bank responsible for the company’s supervision. According to CCAR procedures, the LISCC Operating Committee (for firms in the LISCC portfolio), LFBO Management Group, and responsible Reserve Banks (for firms not in the LISCC portfolio) are to review the supervisory on-site teams’ recommendations, assessment information provided by the Executive Committee, and other information from throughout the assessment. The LISCC Operating Committee and responsible Reserve Banks then are to make final recommendations for the companies for which they are responsible and provide the recommendations to the Director of Banking Supervision and Regulation for approval. Upon approval by the Director of Banking Supervision and Regulation, the Board of Governors is to be briefed on all CCAR recommendations. However, the Board has only voted on whether to approve objection or conditional non-objection recommendations, according to Federal Reserve staff and internal documents we reviewed. 
Qualitative Assessments Use Ratings and Rankings to Reflect Evaluation of Firms' Capital Planning Practices

In evaluating a company's capital plan and completing the CCAR qualitative assessment, the Federal Reserve produces measurements—ratings and rankings—of the extent to which company practices meet supervisory expectations. As discussed previously, evaluation teams are to structure their assessments around the Federal Reserve's seven principles of an effective capital adequacy process. During the assessments, teams are to evaluate companies' current practices and assign each individual company a numerical rating for each principle and applicable subcomponents. The ratings are intended to measure the extent to which a company's capital adequacy process meets supervisory expectations. According to Federal Reserve program documentation, evaluation teams base their rating assessments on established supervisory guidance and supervisory expectations specific to capital planning. For example, when evaluating a company's modeling practices, Federal Reserve teams may use prior supervisory guidance on model risk management, which is also incorporated into the 2015 supervisory and regulation letters. According to Federal Reserve staff, CCAR reviews have incorporated CCAR-specific expectations and also include other long-standing guidance on internal controls, risk management, and corporate governance, particularly where such guidance is applicable to practices that support capital planning. For example, these staff said that supervisory guidance and expectations for internal controls existed before the implementation of CCAR and that this type of guidance, which was relevant before the current stress testing regime, is now enhanced by the CCAR qualitative assessment.
Ratings

The Federal Reserve uses evaluation ratings to summarize findings related to the different review components (organized by principle and subcomponents) and to develop its overall qualitative assessment for each company. Federal Reserve program guidance also states that ratings are intended to help facilitate internal discussions around deficiencies in a company's capital adequacy process and serve as the basis for making qualitative assessment determinations. Supervisory on-site teams develop overall ratings for their assigned capital adequacy process principles. In contrast, risk evaluation teams and capital and revenue assessment teams develop ratings for both subcomponents of their assigned principles and a consolidated rating for the overall principle. The Federal Reserve's rating system comprises four numerical scores: 1 - strong, 2 - satisfactory, 3 - fair, and 4 - unsatisfactory. The Federal Reserve defines each score by the degree to which a company's practices meet supervisory expectations. The top score reflects company practices that meet expectations and include sound, transparent, and repeatable processes. Intermediate scores represent practices that either generally meet or generally do not meet expectations. According to Federal Reserve staff, practices rated below satisfactory may not warrant an objection but would require remediation to avoid future objections. An unsatisfactory rating is used for practices that do not meet expectations, have critical deficiencies, and will require significant corrective action. In addition, Federal Reserve evaluation teams can use plus or minus modifiers to further differentiate the intermediate scores. Horizontal evaluation teams can develop further detailed guidance on rating modifiers but must adhere to general ratings guidelines.
Federal Reserve program guidance instructs evaluation teams to consider the trend in a company's practices relative to other CCAR companies and whether its practices were above, consistent with, or below peer practices. Federal Reserve internal guidance also instructs evaluation teams to consider progress toward remediating previously identified weaknesses when assigning ratings.

Rankings

The Federal Reserve also develops rankings to compare capital adequacy practices across CCAR companies and help ensure consistency across evaluations. According to CCAR procedures, rankings are developed at various levels of the qualitative assessment (capital adequacy planning subcomponents, principles, and overall). The procedures call for horizontal evaluation teams to develop preliminary rankings directly from assigned ratings by grouping companies for each principle, and the Executive Committee assigns the firms to cohorts based on their ratings and rankings. According to Federal Reserve staff, companies within cohorts are considered more similar in the overall quality of their practices than they are to companies in any other cohort. Federal Reserve staff also said that firms within a cohort are not ranked, but are generally listed in alphabetical order. The procedures also state that the rankings' relative comparisons allow the Federal Reserve to differentiate among companies that might have the same ratings but also exhibit differences in their processes that would allow for meaningful distinctions (that is, different rankings). Figure 5 provides a hypothetical example of the rating and ranking process. Federal Reserve staff explained that evaluation teams establish rankings by identifying relative strengths and weaknesses in each company's processes. CCAR procedures call for evaluation teams to provide proposed ratings and rankings to the Executive Committee based on their respective evaluations.
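The basic mechanics of this grouping—firms sharing a rating fall into the same cohort, cohorts are ordered from strongest to weakest, and firms within a cohort are listed alphabetically rather than ranked—can be illustrated with a minimal sketch. This is not Federal Reserve code; the firm names and ratings below are hypothetical, and the sketch omits the plus/minus modifiers and Executive Committee adjustments described in the surrounding text.

```python
# Illustrative sketch only: grouping hypothetical firms into cohorts
# from their proposed ratings on the published 1 (strong) to
# 4 (unsatisfactory) scale. Modifiers and committee review are omitted.
from collections import defaultdict

def build_cohorts(ratings):
    """Group firms by rating; better (lower) ratings come first,
    and firms within a cohort are alphabetized, not ranked."""
    cohorts = defaultdict(list)
    for firm, rating in ratings.items():
        cohorts[rating].append(firm)
    return [sorted(cohorts[r]) for r in sorted(cohorts)]

# Hypothetical proposed ratings for four firms.
proposed = {"Firm D": 2, "Firm A": 2, "Firm C": 3, "Firm B": 1}
print(build_cohorts(proposed))
# → [['Firm B'], ['Firm A', 'Firm D'], ['Firm C']]
```

As in the Federal Reserve's description, Firms A and D share a rating and therefore a cohort, so the grouping alone cannot distinguish them; the relative comparisons performed by the evaluation teams supply that finer differentiation.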
The Executive Committee is to review and approve the proposed rankings (in the form of cohorts), including an overall ranking for each CCAR firm that is to include consideration of the company's size, complexity, and systemic importance. According to Federal Reserve staff, the most important role of the rankings and cohorts is to help ensure consistency in the Federal Reserve's execution of the qualitative assessment. In addition, Federal Reserve staff stated that cohorts help ensure expectations are properly tailored for LISCC and non-LISCC firms. The staff noted that the comparative analysis and rankings help the Federal Reserve to identify what distinctions exist among individual CCAR firms. The staff explained that the Executive Committee reviews evaluation teams' findings and supporting analyses to help determine which companies have stronger and weaker practices. Because of the small number of ratings categories, the staff said that a relative comparison allows Federal Reserve staff to distinguish which firms have the best practices among those that might have the same rating. The staff said that while rankings assist the Federal Reserve in ensuring consistency across CCAR assessments, objection determinations are made based on an evaluation of a company's practices against supervisory expectations—and not in relation to how they compare to other CCAR firms. According to Federal Reserve staff, even the lowest-ranked companies may not necessarily have weaknesses in their capital planning processes that would warrant an objection. The evaluation teams' ratings and rankings may change after deliberations between the evaluation teams and Executive Committee staff.

Differentiated Expectations

The Federal Reserve has applied different supervisory expectations to CCAR companies based on firm characteristics.
Specifically, the Federal Reserve has identified expectations for capital planning and capital adequacy processes that reflect differences in a company's size, scope of operations, activities, and systemic importance. For example, in December 2015 the Federal Reserve published supervisory letters explaining how its supervisory expectations differ for two groups of CCAR companies—the largest and most complex firms, and large and non-complex firms. In these and other documents, the Federal Reserve has stated that it has heightened supervisory expectations for the largest and most complex companies in all areas of capital planning and that it expects such companies to have leading practices (in terms of sophistication, comprehensiveness, and robustness) for all of their portfolios and activities compared to other CCAR companies. It has declared that smaller and less complex companies will not be held to the same standard. CCAR documentation we reviewed indicated that the Federal Reserve has used differentiated expectations for larger and more complex companies. For example, the Federal Reserve expects complex companies to have a formal risk-identification process with quarterly updates and use quantitative approaches for risk management. In contrast, the Federal Reserve expects non-complex companies to have a less formal risk-identification process and use either qualitative or quantitative approaches for risk management. Moreover, qualitative assessment results reflected better relative rankings for non-LISCC companies than for LISCC companies, which the Federal Reserve attributed in part to its higher expectations for LISCC companies. Federal Reserve staff also explained that successful capital adequacy planning is more difficult for LISCC companies simply because of their large size and complexity, which together with the heightened supervisory expectations explains much of the differences in qualitative assessment results.
According to Federal Reserve staff, similar to the Dodd-Frank Act requirement for enhanced prudential standards, the Federal Reserve has higher expectations for the largest and most complex companies because problems at such firms are more likely to have negative consequences for the financial system and economy. It has also indicated that new CCAR companies may need time to build and implement internal systems necessary to meet CCAR requirements.

Other Factors Influencing Determinations

Multiple factors influence the Federal Reserve's final qualitative assessment determinations. According to Federal Reserve staff, evaluation teams propose recommendations based on multiple factors related to (1) weaknesses identified in the qualitative assessment and (2) the severity of the weaknesses and the likelihood that a company can remediate weaknesses before the next CCAR review cycle. In particular, Federal Reserve procedures instruct staff to evaluate the severity, materiality, quantity, pervasiveness, and duration of identified process deficiencies when considering whether supervisory findings warrant an objection determination. In addition, objections may be due to deficiencies identified in multiple areas or based on a single fundamental weakness if that weakness is in an area critical to the company's operations or if the severity of the deficiency places the reliability of the company's overall capital plan into question. Furthermore, program guidance indicates that, while all capital adequacy principles are important, some principles reflect foundational practices, and deficiencies in those areas may be more likely than others to result in an objection recommendation.
Federal Reserve staff said that the Federal Reserve may decide to apply a conditional non-objection for various reasons, including that a company has significant deficiencies in certain areas of its capital planning processes but would be able to address the identified deficiencies relatively quickly, depending on the nature of the deficiency. If a company has a significant weakness in a critical area that is not addressed over time, Federal Reserve staff noted that the company may receive an objection even if it has tried to address the deficiency.

Federal Reserve Has Disclosed Limited Information about Its CCAR Qualitative Methodology and Has Not Provided Timely Guidance on Leading Practices

While the Federal Reserve has communicated certain information about CCAR qualitative assessments to participating companies and the public, it has not disclosed more detailed information that would allow for an understanding of the methodology for the assessments or updated guidance to firms about leading practices.

Limited Disclosures of Information about Methodology and Objections

While the Federal Reserve has communicated some CCAR-related information to the public and directly to CCAR companies, it has not provided the level of information necessary for a clear understanding of its qualitative assessment methodology, including its framework for evaluating the extent to which companies have met supervisory expectations, or of the reasons for its objection determinations. Federal Reserve communications about the qualitative assessments have included information about supervisory expectations and other topics. For example, the Federal Reserve has issued public documents that include an annual CCAR instructions and guidance document, which is published at the beginning of each CCAR cycle.
In 2013, the Federal Reserve also issued a separate document describing its capital planning expectations and examples of the observed range of practices among CCAR companies. In December 2015, the Federal Reserve issued supervisory letters to consolidate previously issued capital planning expectations and to clarify differences in expectations based on firm size and complexity. In addition, at the end of each CCAR cycle, the Federal Reserve has provided CCAR companies with confidential letters describing assessment findings and information on any supervisory matters requiring attention. The Federal Reserve also has publicly released the identity of firms that have received an objection or conditional non-objection based on the qualitative assessment and a general description of the reasons for the objection or conditional non-objection in its annual CCAR results document. For example, the Federal Reserve's published results have described the capital planning areas in which deficiencies were found, such as risk management and internal controls. While these documents are helpful in providing some information about the CCAR assessment and its results, they do not provide information about the assessment framework, such as the role of various evaluation teams and descriptions of ratings and rankings and other associated processes. The Federal Reserve also has not publicly disclosed information about the nature of the deficiencies in capital planning areas leading to its objection determinations, such as why a firm's risk management or internal controls were inadequate. For instance, the Federal Reserve's 2016 CCAR results disclosure states that the reasons for qualitative objections for one firm were based on deficiencies in the risk management and control infrastructure, including risk-measurement processes, stress testing processes, and data infrastructure.
It also stated that these deficiencies limited the reliability of the firm's capital planning process and its ability to conduct a comprehensive capital adequacy assessment. The document did not discuss why, or in what ways, these areas did not meet Federal Reserve expectations. For example, it did not explain what elements or characteristics of the company's risk measurement and stress testing processes and data infrastructure were not reasonable or appropriate. According to an Office of Management and Budget (OMB) directive on open government, transparency promotes accountability by providing the public with information about government activities. While the Federal Reserve is not required to follow OMB's guidance, the directive identifies a set of actions for all agencies to take to increase transparency in government operations. Similarly, our prior work has recognized that transparency—balanced with the need to maintain sensitive regulatory information—is a key feature of accountability. Federal Reserve staff told us that they have publicly described some aspects of the qualitative assessment framework, including factors that influence objection determinations, but have not published a description of the evaluation teams and ratings because they consider such aspects to be internal processes. According to these staff, the Federal Reserve may consider providing additional information about the qualitative assessment process in the future through public CCAR documentation or other communication with companies. However, the Federal Reserve has published information about its methodology for completing the supervisory stress test and CCAR quantitative assessment, which also involves internal supervisory processes, as illustrated in the following examples.
Although the Federal Reserve has not disclosed specific details about its models, its DFAST results document describes the supervisory stress test methodology used to produce the post-stress capital ratios that underlie the quantitative assessment. The disclosed information includes the analytical framework for the supervisory stress test, modeling approach, model methodology and validation, data inputs, and general descriptions of specific models used to project stressed capital ratios.

Publicly available Federal Reserve supervision manuals disclose detailed information about the policies and procedures used in examinations of banking institutions as part of the Federal Reserve's normal, ongoing supervision.

Without disclosing additional information that would allow for a better understanding of the Federal Reserve's methodology for completing qualitative assessments and the reasons for objection determinations, financial markets and the public may have a limited understanding of this critical aspect of the CCAR program. The limited transparency could hinder public and market confidence in the Federal Reserve's CCAR assessments and the extent to which the Federal Reserve can be held accountable for its decisions.

Federal Reserve Guidance about Practices Not Updated Since 2014 and Companies Cited Concerns about the Communication of Expectations

The Federal Reserve has not updated guidance on current and leading capital planning practices used by CCAR companies since 2014, and companies also cited concerns about how expectations were explained. The Federal Reserve has periodically issued guidance on current and leading capital planning practices used by CCAR companies. For example, it included the information in an appendix to the CCAR instructions for the 2015 cycle. The appendix described common themes in company practices that the Federal Reserve observed during the prior year's review.
In the appendix, the Federal Reserve stated that it included the information to build on expectations outlined in previous guidance and to provide additional clarification for specific areas in which companies continued to experience challenges. However, the Federal Reserve has not provided information on the range of current capital planning practices (including those it views as leading and lagging practices) since it issued the instructions document in October 2014. Although the Federal Reserve’s recently published supervisory letters consolidated its capital planning expectations, they did not include information on observed industry practices or the Federal Reserve’s views on what constituted leading or lagging practices. In contrast, the expectations and range of practice document that the Federal Reserve issued in 2013 identified practices among CCAR companies that the Federal Reserve considered to be stronger (leading practices) as well as those that it deemed to be weaker for capital planning purposes. The 2013 document also clarified that practices identified as leading or industry best practices should not be considered a safe harbor and stated that the Federal Reserve anticipated that leading practices would continue to evolve as new data became available, techniques advanced, economic conditions shifted, and new risks emerged. The Federal Reserve also has communicated its expectations for capital planning directly to companies through various channels. According to Federal Reserve staff, the Federal Reserve has communicated specific expectations to companies through its other supervisory activities and through discussions with the CCAR Executive Committee during CCAR reviews. For example, the Federal Reserve has communicated with companies throughout the year about their progress toward remediating any supervisory issues identified during the CCAR qualitative assessment. 
Staff also said that at the conclusion of a CCAR cycle, the Federal Reserve has directly discussed CCAR findings with companies and sent feedback letters describing the results of CCAR reviews and any related supervisory findings and matters requiring company attention. Federal Reserve staff further explained that these conversations have established what actions the company would take to address the contents of the feedback letter and that additional meetings have been held to obtain updates on progress toward remediating identified weaknesses and matters requiring attention. The Federal Reserve has stated that it generally expects identified weaknesses to be remediated before the next annual capital plan submission, where appropriate, but recognized that some efforts may require additional time. However, many CCAR companies we interviewed expressed concerns about what they viewed as limited or unclear communication of capital planning expectations for the qualitative assessment by the Federal Reserve. While several companies indicated that the Federal Reserve's communication and guidance had improved over time, most of the companies said that a lack of clarity about supervisory expectations posed challenges for them. Specifically:

- Nearly all of the companies we spoke to said that at times feedback was inconsistent across Federal Reserve teams or lacked clarity. One company said that ambiguity around Federal Reserve expectations for the qualitative assessment has made it difficult to implement necessary changes.
- Most CCAR companies we spoke to raised concerns that Federal Reserve expectations have increased annually while guidance has sometimes been unclear, insufficient, or outdated. Several of these companies suggested that the qualitative assessment process could benefit from increased transparency and more granular and timely guidance.
Several companies we spoke to noted that previously provided Federal Reserve guidance on industry leading and lagging practices—in particular, the 2013 document highlighting the Federal Reserve's views on the range of company practices—has been helpful in understanding supervisory expectations. Federal internal control standards state the importance of relevant, reliable, and timely communications, including with external stakeholders. Federal Reserve guidance also indicates that the Federal Reserve expects the largest and most complex CCAR firms to use leading capital planning practices—those that are the most sophisticated, comprehensive, and robust—and that these leading practices are expected to evolve over time. According to Federal Reserve staff, the Federal Reserve has not decided whether it will issue additional guidance on company practices or supervisory expectations beyond the recently published supervisory letters. However, Federal Reserve staff stated that the Federal Reserve does not intend to update its 2013 guidance on the observed range of CCAR companies' capital planning practices or publish another "common themes" appendix or similar guidance documents because its recently issued supervisory letters include consolidated guidance on supervisory expectations relating to firms' capital planning processes. The letters also have technical appendixes containing specific expectations but do not include information on leading practices. The staff said that the Federal Reserve intends to use these letters as the primary mechanism for clarifying expectations. Federal Reserve staff also told us that communicating company-specific expectations occurs through direct communication with CCAR companies, including confidential feedback letters.
Yet, without periodically updated guidance on observed capital planning practices and those that the Federal Reserve considers to be leading ones—especially as industry practices evolve—some CCAR companies may have difficulty determining what is necessary to meet Federal Reserve expectations, which could impede the achievement of CCAR goals.

Federal Reserve Has an Official CCAR Communications Channel but Has Not Specified Response Times for Questions from Companies

The Federal Reserve has designated an official communications channel for CCAR companies to ask questions related to CCAR, but it has not communicated time frames for responding to questions. The Federal Reserve has designated a general e-mail address—referred to as the CCAR Communications Mailbox—to field all questions from CCAR companies and provide all responses and other official communications on behalf of the Federal Reserve. The Federal Reserve has instructed firms to submit CCAR-related questions to the mailbox and stated that only responses received through the mailbox will be considered official Federal Reserve responses, although meetings and other discussions with Federal Reserve staff may be arranged. According to Federal Reserve procedures, staff identify questions as being either broadly relevant to all CCAR companies or company-specific. The Federal Reserve distributes broadly relevant questions and responses to all CCAR firms through a frequently-asked-questions (FAQ) report, while the procedures call for company-specific questions to receive a direct response. The Federal Reserve has established an internal target for response times, but it has not communicated this or other time frames to CCAR companies. Federal Reserve procedures for the CCAR mailbox identify an internal goal of providing companies with a response to submitted questions within 7 business days of receipt.
Federal Reserve staff indicated that the time frame target is primarily intended to help process the questions internally and prepare responses. According to Federal Reserve procedures, communication with companies submitting questions consists only of acknowledging receipt of questions and providing a direct response after the completion of the internal review and response process. However, most companies we interviewed identified limitations in receiving timely or helpful responses from the CCAR mailbox. For example, some companies said that the Federal Reserve's mailbox responses tended to be general and standardized to apply to all companies rather than tailored to a company's specific circumstances. In addition, two companies said that answers to some questions simply reiterated the stress test and capital plan rule, which was of limited usefulness. Several companies also explained that it could take multiple weeks or even months to receive responses to mailbox questions, which represented a considerable amount of time in the context of CCAR time frames. Officials from two companies noted that while 2 to 3 weeks may not seem like an excessive amount of time, the delays prevent companies from addressing the capital planning topics covered by the question as they await guidance from the Federal Reserve. The officials also stated that they understand that it takes time to research and review answers before they are sent to companies and that the Federal Reserve wants to provide considered and consistent responses. Several companies also identified improvements in communication with the Federal Reserve's on-site teams, including their responsiveness to company questions. Federal internal control standards state the importance of relevant, reliable, and timely communications, including with external stakeholders.
The Federal Reserve has stated that it designed the CCAR mailbox and FAQ process to help ensure that CCAR companies received timely and consistent responses to all submitted questions. Federal Reserve staff also explained that many questions require further research and internal deliberation before a response can be provided, which makes it difficult to commit to a specific response time frame. For example, according to Federal Reserve procedures, questions and responses that set new guidance or involve broad policy implications may require additional review, including discussions with the CCAR oversight committees. Internal process documentation calls for Federal Reserve staff with subject-matter expertise in areas such as capital policy, balance sheet items, or model risk management to draft responses to questions, while other subject-matter experts, a legal reviewer, and management review and approve the draft responses. However, failure to establish and communicate time frames for responding to company inquiries may complicate companies' management and planning of their CCAR submissions and hinder their ability to address supervisory concerns in a timely fashion. For example, due to CCAR deadlines, a company awaiting a response to a question the Federal Reserve has deemed to involve broad policy implications may have to proceed with developing its CCAR submission without receiving the guidance it needs from the Federal Reserve.

Supervisory Scenario Design Process Integrates Historical Data and Current Risks but Some Limitations Exist

Federal Reserve staff design supervisory scenarios for the supervisory stress tests in DFAST and CCAR by integrating data from historical recessions and the recent financial crisis with an assessment of current risks to financial stability.
But limitations exist with some aspects of the scenario design, including consideration of trade-offs related to the choice of severity and assessment of the sufficiency of a single severe supervisory scenario.

Scenario Design Process Integrates Historical Data and Current Risks

Federal Reserve staff design supervisory scenarios for the supervisory stress tests in DFAST and CCAR by integrating data from historical recessions and the recent financial crisis with an assessment of current risks to financial stability. Federal Reserve staff design the scenarios according to Federal Reserve policy outlined in a November 2013 policy statement. As previously discussed, the Federal Reserve annually creates multiple supervisory scenarios for DFAST and CCAR:

Baseline scenario. Generally reflects economic conditions expected by economic forecasters.

Adverse scenario. Features mild to moderate economic and financial stress driven by selected potential risk factors.

Severely adverse scenario. Features severe economic and financial stress, generally driven by a different set of risk factors than the adverse scenario.

Global market shock and counterparty default components (applicable to companies with large trading or custodial operations). These two components, applicable to a subset of companies, are designed to stress the trading and private equity (in the case of the global market shock), or counterparty positions (in the case of the counterparty default component) of bank holding companies with significant trading activities. These components are supplemental to both the adverse and severely adverse scenarios.

Overlapping teams of staff from across the Federal Reserve System simultaneously develop the macroeconomic scenarios (baseline, adverse, and severely adverse) and the market shock component.
The macroeconomic scenarios feature stress events that evolve over 13 quarters, while the global market shock and counterparty default components take place at a single point in time. At the start of the process, the macroeconomic scenario design group meets to discuss salient risks that might be included in the scenarios, drawing on the Federal Reserve’s quarterly assessments of risks to financial stability and input from FDIC and OCC staff. Macroeconomic modelers at the Federal Reserve translate these identified risks (such as a decline in housing prices) into projections for each of the 28 economic and financial variables included in the scenarios. The projections (over 13 quarters) represent the quantitative output of the scenario design process. The economic and financial variables include measures of the unemployment rate, gross domestic product, housing and equity prices, interest rates, and financial market volatility. A separate scenario design group develops the global market shock component, which results in a set of instantaneous shocks to a broad range of financial market risk factors. These shocks involve large and sudden changes in asset prices, interest rates, and measurements of market risk. Price changes in the market shock scenario generally have been comparable to financial market developments in the second half of 2008 (the height of the financial crisis) and have featured larger declines when market valuations have been more elevated or when the Federal Reserve sought to emphasize salient risks it had identified. Federal Reserve staff present the proposed scenarios to stress test oversight groups and division directors in the Federal Reserve and to officials from FDIC and OCC. After considering feedback, the scenario design group provides options for final scenarios to the Board of Governors chair, vice chair, and other governors involved with bank supervision for their input and preferences.
Finally, the scenario design group completes the scenarios based on the principals’ choices, including adding a narrative description of the key factors driving the scenario, and releases the final scenarios on the Federal Reserve website.

Limitations Exist with Some Aspects of the Scenario Design Process

While the Federal Reserve has implemented a framework for designing supervisory scenarios, some aspects of the process have limitations, in particular regarding the choice of severity and the sensitivity of results to alternative severe scenarios.

Federal Reserve Has Not Considered Severity outside Postwar U.S. History

The Federal Reserve largely has relied on historical experience to establish the severity of the severely adverse scenario, operationalized primarily through the unemployment rate. More severe scenarios (such as those with higher unemployment rates) generally would have an adverse impact on loans and other assets, increasing losses, reducing income and profitability, and hence reducing the projected post-stress capital ratios for participating companies. IMF principles for supervisory stress testing suggest that supervisors should focus on tail risks—those risks associated with very unlikely but extreme events, including events that have not occurred in the past—and highlight the risks of basing scenario design decisions solely on historical experience. The Federal Reserve’s decisions about the severity of its scenarios have been driven primarily by U.S. postwar historical experience. In designing stress test scenarios, the Federal Reserve primarily has used the change in and level of the unemployment rate to determine the severity of the scenario (with higher unemployment rates associated with more severe scenarios). According to Federal Reserve policy, the unemployment rate should rise by 3–5 percentage points and must reach a minimum of 10 percent in the severely adverse scenario.
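The policy constraint just described can be illustrated with a short sketch; the function and figures below are purely illustrative and are not the Federal Reserve's actual procedure.

```python
def peak_unemployment(current_rate: float, increase: float, floor: float = 10.0) -> float:
    """Peak unemployment rate in a hypothetical severely adverse scenario.

    Per the policy described above, the rate should rise by 3-5 percentage
    points and must reach at least 10 percent; the floor binds when actual
    unemployment is low.
    """
    if not 3.0 <= increase <= 5.0:
        raise ValueError("policy calls for a 3-5 percentage point increase")
    return max(current_rate + increase, floor)

# With a 4-point increase (the 2013-2015 practice), a 6 percent starting
# rate yields a 10 percent peak; with a 5 percent starting rate, reaching
# the 10 percent floor requires the full 5-point increase (as in 2016).
```

Note that because the 10 percent floor binds only when actual unemployment is low, the rule also embodies the countercyclical intent discussed later in this report.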
In practice, the Federal Reserve has increased the unemployment rate in the severely adverse scenario by 4 percentage points in each year from 2013 to 2015 and by 5 percentage points to reach the 10 percent minimum in the 2016 scenario. Federal Reserve staff stated that a 3–5 percentage point increase and a 10 percent unemployment rate minimum were consistent with postwar historical recessions and provided a reasonable basis for determining the overall severity of the severely adverse scenario. In discussing the 2015 severely adverse scenario, which had a peak unemployment rate of 10 percent, staff indicated that a higher or lower unemployment rate—for example, 9 percent or 12 percent—would be difficult to justify based on historical precedent. In particular, staff noted that a 12 percent unemployment rate has never been seen in postwar U.S. history and would have required an unprecedented 7 percentage point increase in the unemployment rate. Similarly, staff noted that a 9 percent unemployment rate would be relatively low compared to severe U.S. recessions. In other aspects of scenario design, in contrast, Federal Reserve policy is cognizant of the possibility that scenarios may produce risks that appear together in ways that could be outside of historical experience—and Federal Reserve staff noted that supervisory scenarios have featured combinations of risks that have not occurred historically. Federal Reserve staff also noted that the cumulative severity associated with multiple, simultaneously stressed scenario variables could exceed historical postwar severity. However, the Federal Reserve’s scenario design policy and process are focused on selecting economic conditions that reflect the severity of postwar U.S. recessions. Without proactively considering levels of severity outside postwar U.S. historical experience, the Federal Reserve could miss opportunities to assess and guard against relevant but unprecedented risks to the banking system.
For example, if scenario severity decisions had been made in the pre-crisis period based solely on historical conditions that had prevailed prior to 2006, any associated stress tests would have dramatically underestimated subsequent events.

Federal Reserve Has Not Assessed Certain Trade-offs Associated with Scenario Severity

According to Federal Reserve staff and our review of internal documents, the Federal Reserve has not explicitly analyzed how to balance the choice of the severity of the severely adverse scenario—and its influence on the resiliency of the banking system—with any impact on the cost and availability of credit. The overall severity of a stress scenario will affect how much capital participating bank holding companies would need to hold to avoid an objection from the CCAR quantitative assessment and make planned capital distributions. Furthermore, a more severe scenario might induce companies to raise additional capital in the short term—and potentially pass costs on to borrowers—but increase the resiliency of the banking system over the long term. OMB guidance states that regulatory analysis—a tool regulatory agencies use to anticipate and evaluate the likely consequences of rules—should be based on the best available scientific, technical, and economic information. While the Federal Reserve is not legally required to follow this guidance, it provides a strong set of analytical practices relevant to significant supervisory and regulatory exercises such as CCAR—and scenario design is a key element of CCAR. Research by the Basel Committee on Banking Supervision (Basel Committee)—an international standard-setting organization for bank regulation—on the potential economic impact of capital requirements provides an example of how the Federal Reserve could use available information to help assess the appropriate degree of severity in stress tests.
In its assessments of post-crisis reforms to strengthen bank regulation, the Basel Committee assessed how changes to the level of required capital and liquidity would influence economic growth, the cost and availability of credit, and the likelihood and severity of future financial crises. Federal Reserve staff noted that they were aware of a significant amount of academic literature on the relationship between bank capital and the economy, including the work of the Basel Committee. Moreover, Federal Reserve staff said that the scenarios were designed to match the severity of historical recessions, an approach that they said assesses the resilience of the banking system and would allow companies to function through a severe recession. However, without more careful assessment of scenario severity, the Federal Reserve cannot be reasonably assured that the scenario design process balances any improvements in the resiliency of the banking system with any impact on the cost and availability of credit. These factors could be especially important when considering a level of severity without a postwar historical precedent—for example, by helping ensure that scenarios that might exceed postwar historical severity do not have undesired effects on the cost and availability of credit.

Federal Reserve Has Not Assessed Sufficiency of a Single Severe Supervisory Scenario

The Federal Reserve has not conducted analysis to determine if a single severe supervisory scenario (that is, the severely adverse scenario) is sufficiently robust and reliable to promote the resilience of the banking system against a range of potential crises. The Federal Reserve’s policy statement on scenario design suggests that at times the stress tests may require additional supervisory scenarios to capture a large number of unrelated but significant risks. The CCAR quantitative assessment is based on more than one severe scenario, but the Federal Reserve designs only one of them.
Participating institutions design an additional severe scenario, which is intended to reflect particular risks that might differ from the supervisory scenario in substantive ways. The company-designed scenario is implemented with a different capital action assumption, which limits its comparability to other CCAR stress tests. We discussed potential incentive problems associated with company-run CCAR stress tests earlier in the report. There are advantages and disadvantages associated with reliance on a single severe supervisory scenario, as the Federal Reserve does with the supervisory stress tests. Advantages could include simplicity, transparency, and resource use—that is, using a single severe supervisory scenario simplifies communication and limits the resources required to design the scenarios and execute and analyze the supervisory stress tests. While it may be appropriate to use a single severe supervisory scenario, there are also potential disadvantages or risks associated with making the CCAR quantitative assessment based on a single severe supervisory scenario. For example, many different types of financial crises are possible, and the single selected scenario does not reflect a fuller range of possible outcomes. Similarly, IMF principles for supervisory stress tests note that future stress periods are uncertain and could be represented by a range of stress factors, each with a different likelihood of occurrence. Staff at IMF and Bank for International Settlements with whom we spoke also identified trade-offs associated with using one or multiple scenarios. For example, Bank for International Settlements staff noted that firms might hedge against the primary risks in a single scenario but not others that also might be relevant. Similarly, IMF staff said that ideally stress tests would use a large and diverse set of scenarios but also noted that this would increase cost and complexity.
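To illustrate this trade-off, a minimal comparison of post-stress outcomes across several candidate severe scenarios might look like the following sketch; the scenario names, loss rates, loss model, and balance-sheet figures are all hypothetical, not the Federal Reserve's models or data.

```python
# Toy comparison of post-stress capital ratios under several hypothetical
# severe scenarios. A wide spread in outcomes would suggest that judgments
# based on any single scenario are sensitive to the scenario chosen.

def post_stress_ratio(capital, rwa, loss_rate):
    """Capital ratio after applying scenario losses to a loan book."""
    losses = rwa * loss_rate           # toy model: losses scale with risk-weighted assets
    return (capital - losses) / rwa

scenarios = {                          # hypothetical peak-stress loss rates
    "housing-led recession": 0.055,
    "corporate-debt shock":  0.048,
    "global downturn":       0.062,
}

capital, rwa = 120.0, 1000.0           # hypothetical starting position
results = {name: post_stress_ratio(capital, rwa, rate)
           for name, rate in scenarios.items()}

worst = min(results, key=results.get)  # scenario producing the lowest ratio
spread = max(results.values()) - min(results.values())
```

In this toy example, a supervisor looking only at one of the milder scenarios would draw a more favorable conclusion about resiliency than the full set of scenarios supports, which is the kind of sensitivity the sentences below describe.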
Moreover, it is usually necessary to conduct sensitivity analysis to reveal whether, and to what extent, the results of an analysis are sensitive to plausible changes in the main assumptions. For the supervisory stress tests and CCAR quantitative assessment, the design and number of severely adverse scenarios represent key assumptions affecting results. However, the Federal Reserve has not conducted sensitivity analysis to determine whether its single severe scenario is sufficient to accomplish DFAST and CCAR goals. For example, it has not assessed how a range of severe scenarios may lead to different judgments about the overall resiliency of the banking system. Federal Reserve officials asserted that because the severely adverse scenario has matched the severity of historical U.S. recessions, it would protect against a range of potential crises. Federal Reserve officials also noted that they perform multiple stress tests using alternative scenarios outside of DFAST and CCAR, and also conduct a separate stress test of liquidity. Yet, without assessing the sufficiency of a single severe scenario in the context of DFAST and CCAR—such as by performing sensitivity analysis involving multiple scenarios—the Federal Reserve is making CCAR decisions that may not reflect the range of potential crises against which the banking system would be resilient or the magnitude of the range of outcomes that might result from different scenarios. The Federal Reserve therefore may be limited in its ability to understand, communicate, and manage uncertainty associated with its use of the supervisory stress test results.

Federal Reserve Has Not Prospectively Assessed Whether Changes in Scenarios Might Inadvertently Amplify Economic Cycles

The Federal Reserve assesses whether the year-to-year changes it makes to the supervisory scenarios over an economic cycle could inadvertently amplify those cycles only after it has completed and published the scenarios.
Because the supervisory scenarios used in the stress tests influence a company’s post-stress capital ratios, changes to the scenarios will affect how much capital participating companies need to hold to help ensure they do not receive a CCAR objection. Federal Reserve policy states that supervisory scenarios should feature stressful outcomes that do not induce greater procyclicality—that is, scenarios should not amplify cycles (swings in economic activity between expansion and contraction) in the financial system and economy. Procyclicality could, for example, result in firms needing to raise additional capital or reduce lending during a downturn. After the disclosure of 2015 stress test results, Federal Reserve staff reported to the Board that the supervisory stress tests produced some procyclical results. Specifically, losses on portfolios of loans to consumers (such as credit cards or residential mortgages) in the test were procyclical partly because scenarios had caused the projected losses to shrink as actual economic conditions improved. Based on analysis conducted after the scenarios were finalized, Federal Reserve staff said that most of the decrease in projected losses (compared with the prior year’s stress test) resulted from improvements in bank balance sheets from the previous year, although changes to the scenario also contributed to lower projected losses. Because the scenario contributed to smaller losses as the economy improved, the scenario could produce larger losses as the economy deteriorates, lowering post-stress capital ratios and increasing the amount of capital required to avoid a CCAR objection. Moreover, because the analysis of the impact of annual scenario changes on losses occurs only after scenarios have been developed and made public, the Federal Reserve might learn of procyclical effects too late to take effective preventive steps—for example, by adjusting relevant scenario variables before scenarios are made final.
As a result, Federal Reserve stress tests could exacerbate future financial stress by increasing requirements on stress test participants while economic conditions are deteriorating. Such an unintended impact of the supervisory scenario on losses could emerge because the complexity of the system of models (discussed in the following section) used by the Federal Reserve makes it difficult to anticipate the combined effects of changes to 28 scenario variables that influence the results of multiple supervisory stress test models. Federal Reserve staff stated that the Federal Reserve’s scenario design policy attempted to avert procyclicality by instituting the 10 percent unemployment rate minimum, which prevents the unemployment rate from falling too far in the scenario when the real economy improves, and by otherwise allowing the unemployment rate to increase by 3 to 5 percentage points, raising the rate more in the scenario when actual unemployment is low and less when actual unemployment is high. However, the complexity of the system of models the Federal Reserve has used in the supervisory stress test implies that without additional, supporting analysis the Federal Reserve cannot be reasonably assured that small adjustments to the unemployment rate and other variables would produce outcomes that neither amplify nor dampen economic cycles.

Federal Reserve Management of Model Risk Has Not Focused on the System of Models

The Federal Reserve’s modeling process for the stress tests includes an oversight structure and internal reviews, but it has not focused its model risk management on the system of models that produce stress test results. To estimate the effect of stress test scenarios on companies’ ability to maintain capital, the Federal Reserve has developed individual component models that predict companies’ financial performance in the scenarios.
The results of these component models are combined with assumed or planned capital actions of companies and form the system of models used by the Federal Reserve. The Federal Reserve has issued model risk-management guidance that defines model risk as the potential for adverse consequences from decisions based on incorrect or misused model outputs and states that it increases with factors such as greater model complexity and larger potential impact. However, the Federal Reserve has not focused its model risk-management efforts on the system of models, including not conducting sensitivity and uncertainty analyses of how its modeling choices affected the model risk associated with the overall stress test results (post-stress capital ratios).

Federal Reserve Has a Process for Development and Oversight of Models Used for Supervisory Stress Tests

The Federal Reserve has a development process for the models it uses to predict each institution’s post-stress capital ratios and has an oversight structure for the process.

Overview of Supervisory Stress Test Models

To estimate the effect of supervisory scenarios on companies’ regulatory capital ratios, the Federal Reserve has developed numerous empirical models that each predict a component of a company’s balance sheet, risk-weighted assets, or income statement (component models) for each of the 9 quarters of the stress test planning horizon. The Federal Reserve then combines the results of the component models with assumed or planned capital actions (for the companies) to produce the post-stress capital ratios. We refer to the combination of the component models that produces the post-stress capital ratios as the system of models. The component models are either predictive or accounting models. The predictive models use historical data to estimate how economic stress events might affect an element of an institution’s financial performance, such as loan losses or revenues.
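In rough outline, combining component-model projections with capital actions over the 9-quarter horizon resembles the following sketch. All figures are hypothetical, risk-weighted assets are held constant for simplicity (the actual system projects them as well), and the real component models are far more granular.

```python
# Minimal sketch of rolling capital forward over the 9-quarter planning
# horizon: each quarter, projected revenues and losses (stand-ins for
# component-model outputs) and planned capital distributions adjust the
# capital level, from which a quarterly capital ratio is computed.

def roll_forward(capital, rwa, revenues, losses, distributions):
    """Return the quarterly capital ratios over the planning horizon."""
    ratios = []
    for rev, loss, dist in zip(revenues, losses, distributions):
        capital += rev - loss - dist   # accounting identity for the quarter
        ratios.append(capital / rwa)   # rwa held constant in this toy version
    return ratios

quarters = 9
revenues      = [4.0] * quarters       # hypothetical pre-provision revenue
losses        = [9.0] * quarters       # hypothetical stressed losses
distributions = [1.0] * quarters       # hypothetical planned dividends

ratios = roll_forward(100.0, 1000.0, revenues, losses, distributions)
post_stress_minimum = min(ratios)      # the lowest projected ratio over the horizon
```

Because losses exceed revenues in every quarter of this toy stress path, the ratio declines steadily and the minimum occurs in the final quarter; in the actual exercises, the paths of the component-model outputs determine where the minimum falls.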
Accounting models apply various accounting rules to an institution’s financial data or to outputs from the predictive models to construct aggregate accounting measures, such as allowances for loan and lease losses or pre-tax net income. Most component models use estimates produced by other component models as a source of data to make their projections.

Overview of Modeling Cycle

The Federal Reserve has implemented a 2-year development cycle for supervisory stress test models (see fig. 6). According to Federal Reserve staff and documentation, the overall goal of the development cycle is to continue refining and developing the models, while simultaneously producing reliable estimates for the annual DFAST and CCAR supervisory exercises. The Federal Reserve’s 2-year model development cycle (which is described in more detail in the following sections) involves the use of production and development models. Production models are used to produce annual estimates for the DFAST and CCAR exercises. Development models allow for more time to validate and evaluate major model changes before they are incorporated into the actual stress test exercises. Once a development model has been fully reviewed and approved, it replaces the corresponding production model. The Federal Reserve annually refines and continues development of its models. Modeling approaches and variables can change over time. The Federal Reserve has stated that revisions to its models generally have reflected advances in modeling techniques, more detailed data, and longer histories of performance in different economic environments. In addition, changes in market or industry risk characteristics and regulatory or policy changes also may be a reason to make changes to existing models.

Responsibilities and Oversight by Process Phase

The Federal Reserve has implemented an organizational structure focused on model development, oversight, and review (see fig. 6).
For example, according to Federal Reserve staff and program documents, the Model Oversight Group—a cross-functional group of senior Federal Reserve managers and stress test experts—coordinates model development policy and has overall responsibility for the model development process. According to Federal Reserve staff, the Model Oversight Group exercises close oversight of the planning and execution of the supervisory stress tests, including changes to stress test models, and the Director of Banking Supervision and Regulation has final authority to approve models prior to stress test production. The Model Coordination and Advisory Team assists the oversight group by serving as the first line of oversight over the modeling teams. According to its charter, the coordination team is tasked with initial reviews of model documentation, model changes, and model assessment presentations. Economists, senior quantitative analysts, and other technical specialists from across the Federal Reserve System staff the coordination team.

Development. The responsibility for executing model development lies with 11 supervisory modeling teams staffed with subject-matter experts. Each of the teams is responsible for developing models to predict elements of a company’s balance sheet or income statement, which are ultimately combined to predict post-stress capital ratios. For example, one modeling team is responsible for 22 separate models that predict components of net revenue before adjustments for loan loss provisions. Another is responsible for models that combine output from the loss, revenue, and other models to produce the regulatory capital ratios. The Model Oversight Group oversees the development process with assistance from the Model Coordination and Advisory Team. The Model Oversight Group also reviews and determines whether to approve development models to replace prior production models.

Preliminary assessment.
According to Federal Reserve documentation, supervisory modeling teams then conduct a preliminary assessment of the model production process after the model development phase is complete. The main purpose of the assessment is to test that all models and processes perform as expected and to remediate problems before the final production of the annual stress test results. The modeling teams use loan portfolio and other financial data submitted to the Federal Reserve by institutions subject to the supervisory stress test as inputs to their models. They apply the current portfolio data and scenarios to the production models—including the development models awaiting approval to become production models—to test the models and provide information to the Model Oversight Group about the effects on component model outcomes. The modeling teams analyze any differences from the previous year’s results, which involves estimating how much of the overall change in a component model’s result can be attributed to model changes, scenario changes, company portfolio changes, or other factors. Teams then present the results to the Model Oversight Group and identify any production or modeling problems, including any problems with the quality of company-provided data. The modeling teams revise models or data inputs as necessary in response to problems identified by the Model Oversight Group or the Model Coordination and Advisory Team during the preliminary assessment presentations and finalize their model documentation. Senior Federal Reserve staff approve or reject major modeling decisions made by the Model Oversight Group.

Validation. In the model validation phase, reviewers in the Model Validation Unit are to examine the models to identify potential problems, ranking them by level of concern and threat to model validity. According to Federal Reserve policy, staff undertaking validation activities are independent of the model development process for the models they review.
Before 2016, validation reviewers volunteered to leave their primary duties in the Federal Reserve System to assist with the stress test model review process. For example, for the CCAR 2015 exercise, they spent approximately 8 weeks reviewing the models and returned for an additional 3 weeks to validate model changes made in response to the most severe findings. According to Federal Reserve staff and program documents, the Federal Reserve has been transitioning to a validation program that consists primarily of permanent, full-time staff. For each modeling team, the Model Validation Unit employs staff who are to review (1) model soundness and performance and (2) model change and implementation controls. Economists and other subject-matter experts from across the Federal Reserve are to evaluate the model design for conceptual soundness and performance (model soundness and performance review). Federal Reserve internal control experts are to evaluate the control processes surrounding model development and implementation (model change and implementation control review). Each group of reviewers in the validation unit is to write a report that summarizes any problems they identified. The reviewers are to rate problems according to their assessment of the problems’ severity, materiality, and risk posed to the reliability of the model.

Model finalization. To complete the models, the Federal Reserve has a policy that defines how the Model Oversight Group oversees the supervisory modeling teams’ implementation of responses to the findings of the Model Validation Unit. The oversight group is to review and approve model changes and the validation unit is to validate them. According to Federal Reserve staff, modeling teams generally will address at least those problems identified by validation unit reviewers that pose the highest risk to the validity of the model.
With the approval of the Director of the Division of Banking Supervision and Regulation, the oversight group may defer some changes—even those in response to problems rated in the highest-risk category—because (1) the changes require structural modifications to models that the group views as better implemented through the 2-year development cycle, (2) data are not available, or (3) other priorities take precedence. For example, in the 2015 stress test cycle, 24 high-risk findings of the Model Validation Unit were unaddressed or had associated model changes that had not yet been validated by the time the models went into production. Validation unit documentation also indicated 71 repeat findings (at a lower-risk level) that modeling teams had not addressed for at least a year. Even if a modeling team is able to address a finding, stress test model changes may or may not be validated by the Model Validation Unit during the same cycle in which modeling teams made the change. For example, in the past some model changes occurring at the end of the annual modeling process have not been reviewed by the Model Validation Unit until the following cycle. According to Federal Reserve staff, the new, permanent staff approach for the Model Validation Unit is designed in part to increase the unit’s involvement in validating last-minute model changes. As described later and in appendix II, in a 2015 report the Federal Reserve Office of Inspector General (OIG) made a recommendation related to late-stage model changes, among other things.

Final review and approval and production of results. Supervisory modeling teams are then to conduct a second run of the production models to generate results for the final model assessment presentations. The second run is to use the final company data submissions as well as the final versions of the models and stress scenarios developed by the scenario design group.
Each modeling team is to present its results to a group that consists of the Model Oversight Group, co-leaders of the Model Coordination and Advisory Team, and the deputy director of the Large Institution Supervision Coordinating Committee (LISCC) for final review and approval. The modeling teams may make certain model adjustments after the presentation in response to feedback. After addressing any concerns, the modeling teams are to calculate final results for all of the models. Federal Reserve staff then are to brief the Board of Governors on the results.

Disclosure of Model Methodology

The Federal Reserve has disclosed some information about the models underlying the supervisory stress test, including in an appendix to its annual publication of DFAST results. In these documents, the Federal Reserve has described in broad terms its analytical framework for the supervisory stress test as well as its modeling approach and some features of specific models. However, the Federal Reserve has not disclosed other information about the models it uses to execute the supervisory stress tests. For example, it has not disclosed a level of information about the models that would easily allow an external party to replicate the results of the supervisory stress tests. Officials from several CCAR companies we interviewed said that limited transparency about the Federal Reserve’s models impaired their firms’ capital planning efforts. For example, the company officials explained that without more detailed information on the Federal Reserve’s model specifications, they were unable to understand the factors behind the supervisory stress test outcomes or reconcile them with the results of their own company-run tests. Several companies’ officials said that this prevented them from identifying the cause of poor stress test results and taking appropriate actions in response.
These officials said that this limited transparency could result in companies holding additional capital as a precaution to better ensure that they do not receive an objection from the CCAR quantitative assessment. However, the limited disclosure by the Federal Reserve reflects its concern about the potential for model information to influence the actions of covered companies in ways that undermine the purpose of the stress test exercises, among other potential adverse consequences. Federal Reserve staff said that more detailed disclosure of the underlying models would make it easier for companies to manage their capital and asset decisions in relation to the supervisory stress test (in other words, “game” the models) without necessarily limiting risk, thus resulting in the potential for a form of regulatory arbitrage (firms’ circumvention of regulation). In addition, Federal Reserve staff have noted that fuller disclosure could reduce the diversity of models used by companies—a problem termed model monoculture—with companies using models that imitated the Federal Reserve’s rather than developing internal models that best reflected their own risks. As Federal Reserve staff explained, companies need to develop models that they believe are best suited for their unique business activities and portfolios. Some company officials with whom we spoke acknowledged these trade-offs and said that they understood the Federal Reserve’s decision to limit disclosure of model details in light of such considerations. In our prior work on international standards for regulatory capital requirements—a regulatory setting analogous to the supervisory stress test—we also have noted that banks can arbitrage certain capital requirements by managing their portfolios specifically to reduce required capital levels without reducing risk. 
Federal Reserve Reviews of Model Risk Management
While the current supervisory stress test modeling process has an oversight and review system in place as described above, both the Model Validation Unit and the Federal Reserve OIG have conducted reviews of the process and identified areas that would benefit from improvement. According to the Federal Reserve, it is committed to continuous assessment and enhancement of the supervisory models used in the stress testing program. As part of this commitment, the validation unit has completed multiple internal assessments, including an evaluation of the Federal Reserve’s governance of its model risk management activities for supervisory stress testing completed in December 2014. As described in the OIG report, the review by the Model Validation Unit determined that certain governance practices did not fully conform to the Federal Reserve’s supervisory guidance for banking institutions on model risk-management practices and exhibited fundamental weaknesses in key areas. The OIG review, issued in October 2015, examined the Federal Reserve’s model risk-management practices for supervisory stress tests, with a particular focus on the model validation process. The OIG report found that reviews by the Model Validation Unit to assess the Federal Reserve’s validation and governance activities had identified opportunities for improvement, but that additional actions could further enhance model risk-management practices. Both reviews used the Federal Reserve’s supervisory guidance for banking institutions on model risk-management practices as the primary criteria for evaluating the Federal Reserve’s own processes. See appendix II for more information on these reviews.
Federal Reserve Does Not Address Cumulative Risk and Uncertainty from the System of Models That Produce Stress Test Results
The Federal Reserve has not focused its risk-management efforts (including those relating to model development guidance, documentation, sensitivity and uncertainty analyses, and risk tolerances) on the system of models that produce the stress test results—the post-stress capital ratios. As mentioned previously, the Federal Reserve’s model risk-management guidance defines model risk as the potential for adverse consequences from decisions based on incorrect or misused model outputs, which increases with factors such as greater model complexity and larger potential impact. The guidance states that organizations should manage model risk both from individual models and in the aggregate and establishes a definition of models that encompasses both component models and a system of models. The guidance also notes that aggregate model risk is affected by interaction and dependencies among models; reliance on common assumptions, data, or methodologies; and any other factors that could adversely affect several models and their outputs at the same time. However, the Federal Reserve’s organizational structure for the stress tests does not include a formal process through which model development or risk management at the aggregate—or system-of-models—level is implemented. The connections and relationships between the individual component models combine to create a system of models that produces the post-stress capital ratios. Figure 7 provides a high-level overview of the interactions among the component models used in the supervisory stress tests. The Federal Reserve uses component models to project a company’s balance sheet assets—the “Asset Balances Models” and “Trading Assets Models” illustrated in the figure—and risk-weighted assets.
It then calculates changes in the company’s net income using separate component models to project different parts of an institution’s revenue, expenses, and losses, as well as changes in its loan loss allowance. Next, other component models project changes in equity and regulatory capital by combining projected net income and capital actions. The final step estimates post-stress capital ratios by joining equity and regulatory capital projections with total assets and risk-weighted assets projections. Based on our review of Federal Reserve documentation and interviews with staff, the Federal Reserve has not assessed its entire system of models in relation to the principles that it has applied to individual component models. The Model Oversight Group has developed a set of modeling principles to assist in managing the design of the Federal Reserve’s supervisory stress tests, including managing its risks. According to its procedures, the oversight group intends for these principles to guide design and other decision-making in the model development process. The level at which the Federal Reserve applies the model development principles is important because it is the combined system of models rather than any individual model that generates the relevant stress test results—the post-stress capital ratios. As with all models, the Federal Reserve’s models used to produce supervisory stress test results reflect some amount of uncertainty and are sensitive to the assumptions and modeling decisions made when developing each model. For example, model developers must make assumptions about how model inputs interact, which inputs are relevant, and how or if historical data are relevant to the outcome that the model seeks to predict. Each of those decisions can affect a component model’s results. 
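The chain of component models described above can be illustrated with a simplified, hypothetical sketch. The function names and all figures below are ours for illustration only; they are not the Federal Reserve's actual models or data:

```python
# Hypothetical sketch of how component model outputs chain into a
# post-stress capital ratio. All names and figures are illustrative.

def net_income(revenue, expenses, losses, allowance_change):
    """Combine separate component projections into projected net income."""
    return revenue - expenses - losses - allowance_change

def post_stress_ratio(starting_capital, income, capital_actions, rwa):
    """Join capital projections with risk-weighted asset projections."""
    ending_capital = starting_capital + income - capital_actions
    return ending_capital / rwa

# One illustrative stress-scenario quarter ($ billions)
income = net_income(revenue=12.0, expenses=8.0, losses=6.5, allowance_change=1.5)
ratio = post_stress_ratio(starting_capital=160.0, income=income,
                          capital_actions=2.0, rwa=1500.0)
print(f"projected net income: {income:+.1f}")    # a loss quarter under stress
print(f"post-stress capital ratio: {ratio:.2%}")
```

Because every downstream quantity depends on upstream projections, an error in any one component propagates into the final ratio, which is the system-level concern discussed in this section.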
As such, each component model contributes elements of uncertainty to the overall result of the system of models—the final post-stress capital ratios in the case of the supervisory stress tests. The extent and nature of any interaction among component models in the supervisory stress tests also will introduce risk to the post-stress capital ratio estimates. In addition, the overall effects on post-stress capital ratios of choices about how component models interact may be unclear. For example, poor decisions about component model interactions could result in post-stress capital ratios that are insufficiently responsive to economic changes included in a stress scenario or that respond to economic stress very differently from the way they would in reality. In the Federal Reserve model documentation we reviewed, supervisory modeling teams applied model design principles at the component model level. For example, the modeling teams justified various component modeling choices using the principles. When selecting from two versions of a component model, both of which appeared to perform well, a team cited the principle of simplicity in selecting the version with fewer variables. Although Model Oversight Group reviews and oversight are applied across the system of models, the documentation we reviewed did not discuss key aspects of the interactions between the component models. For example, it did not consider how the component models are combined into the system of models and how and to what extent those choices may introduce statistical uncertainty to the post-stress capital ratios. Federal Reserve staff indicated that model development decisions are closely overseen and approved by the oversight group. The staff explained that the close oversight provided by the group provides an adequate assessment of the effect of component model design decisions on the post-stress capital ratios produced by the system of models. 
They noted that the application of the principles at the component level combined with the role of the oversight group means that the principles are applied at the system-of-models level. As an example of how that oversight operates, the Federal Reserve staff provided documentation of what they described as the consideration of a cross-model issue. However, the documentation did not discuss the potential effects of design decisions at the component level on post-stress capital ratios, including any effects from the interaction of component models. In addition, the Federal Reserve documentation we reviewed on model development, implementation, and oversight by the Model Oversight Group did not demonstrate that the Federal Reserve has conducted systems-level analyses of the effect of modeling decisions on the post-stress capital ratios. By largely focusing the modeling principles on the component models and not applying those principles to the system of models, the Federal Reserve has limited its ability to manage the extent to which model risk is introduced into the supervisory stress test models.
Lack of Appropriate Documentation for System of Models
The Federal Reserve has not developed appropriate documentation of the system of models that would allow for effective management of the risks posed by component model interactions. Appropriate model documentation is necessary for assessing, managing, and communicating model risk, and the Federal Reserve’s supervisory guidance identifies appropriate documentation as one of the elements of a sound model risk-management process. According to Federal Reserve procedures, each supervisory modeling team is responsible for maintaining its own model documentation and each review team in the Model Validation Unit is in charge of documenting its findings related to model limitations and other areas.
The procedures call for documentation to follow guidance from the Model Oversight Group and the Model Validation Unit in both format and content. The procedures and guidance indicate that appropriate documentation for component models includes descriptions of model design, data, and methodology; a quality control plan; and review reports. The model description documentation is expected to give third parties the ability to understand the model, evaluate it, monitor its development, and replicate it as necessary. The quality control plan provides documentation of modeling team processes to help ensure that the models have been implemented as intended by the design specification and to mitigate the potential for model error or misapplication. The plan specifies roles and procedures for checking or processing model data, documenting and approving changes, testing the model, and other control activities. The review reports record the independent assessment of modeling teams’ models. The reports document the areas the Model Validation Unit reviewed, its evaluation of the models, the issues uncovered and an assessment of the risk they pose to the reliability and performance of the model, and an assessment of the modeling teams’ responses to the previous year’s findings. The Federal Reserve procedures and guidance as well as best practices suggest that similar documentation would be appropriate for the system of models. However, the Federal Reserve does not have a similar set of documentation for the system of models. Instead, Federal Reserve staff stated that they have used a single document to serve this purpose. The document has recorded Model Oversight Group decisions, including some that apply to multiple models—which are controlled by more than one modeling team—over the course of a stress test cycle.
Yet, the Model Oversight Group decisions document that we reviewed did not include descriptions of model design, data, and methodology sufficient to give third parties the ability to understand the system of models, evaluate it, or replicate it as necessary. It also lacked information that might constitute a quality control plan or a model validation review report. The lack of appropriate documentation of the system of models limits the Federal Reserve’s ability to effectively identify and manage model risk from the entire system of models used for the supervisory stress tests. For example, without such documentation, Federal Reserve staff may miss important connections between elements of component models, which in turn may limit understanding of risks inherent in their modeling choices. The absence of system-level documentation also impedes the ability of independent parties, both internal and external, to review, understand, or evaluate the system of models. Additionally, it increases risks associated with staff turnover (which could cause the Federal Reserve to lose important knowledge about the design and functioning of the system of models).
Sensitivity and Uncertainty Analyses Have Not Considered Effects on Post-Stress Capital Ratios
Based on our review of Federal Reserve documentation and interviews with staff, the Federal Reserve has not conducted sensitivity and uncertainty analyses of how its modeling affects the post-stress capital ratios produced by the entire system of models. As previously described, all models reflect some amount of uncertainty and are sensitive to the assumptions and modeling decisions made during model development.
According to the Federal Reserve’s model risk-management guidance, an integral element of model development is evaluating model features and overall functioning to determine whether the model is performing as intended, including by demonstrating that a model is accurate, robust, and stable and by assessing potential limitations. The guidance states that such an evaluation should include assessing the model’s behavior over a range of input values and evaluating the impact of assumptions—a type of assessment known as sensitivity analysis. It also notes the importance of understanding the extent of model uncertainty and inaccuracy—either quantitatively, such as with the confidence interval around a statistical estimate, or qualitatively—and accounting for them appropriately. This type of assessment is known as uncertainty analysis. The two assessments are related: sensitivity analysis tests the effects of different sources of uncertainty on a model’s output and can help to identify the greatest sources of uncertainty. Model documentation we reviewed included some sensitivity and uncertainty analyses at the component model level, but we did not find that any such sensitivity or uncertainty analyses fully considered effects on the extent of model risk associated with the final post-stress capital ratios. For example, the analyses did not consider these effects for both the numerator and denominator of the ratios. As described earlier, a post-stress capital ratio consists of a numerator that reflects the capital held by the company and a denominator that measures the assets held by a company (either on a total or risk-weighted basis), both projected for each quarter of the stress scenario.
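The ratio structure just described can be made concrete with a hypothetical sketch (our numbers, not the Federal Reserve's): when a single loss estimate enters both the numerator and the denominator, a sensitivity analysis that holds the denominator fixed misstates the ratio's response to a change in that estimate.

```python
# Hypothetical illustration: a loss estimate that feeds both the numerator
# (capital falls with losses) and the denominator (the risk-weighted asset
# calculation uses the same loss estimate). All figures are illustrative.

def capital_ratio(capital, losses, rwa_base, rwa_loss_weight):
    """Post-stress ratio when the loss estimate enters both sides."""
    numerator = capital - losses
    denominator = rwa_base - rwa_loss_weight * losses
    return numerator / denominator

base = capital_ratio(capital=150.0, losses=10.0, rwa_base=1500.0, rwa_loss_weight=0.5)
shocked = capital_ratio(capital=150.0, losses=15.0, rwa_base=1500.0, rwa_loss_weight=0.5)

# A numerator-only view spreads the 5.0 of extra losses over fixed assets
numerator_only = -5.0 / 1500.0
print(f"numerator-only estimate of the ratio change: {numerator_only:.4%}")
print(f"change when both sides move:                 {shocked - base:.4%}")
```

The two printed changes differ because the denominator also shifts with the loss estimate, which is the point the report makes about component models that affect both sides of the ratio.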
When considering changes to component model results (which the Federal Reserve uses as one of its primary forms of sensitivity analysis), the Federal Reserve identifies the major inputs that have changed for that model and analyzes the extent to which changes in the model output can be attributed to each input. One way that the Federal Reserve seeks to put the changes in individual component models in context with each other and with overall capital ratios is to divide the component model output (for example, losses or revenues) by the risk-weighted assets of the company prior to the assets being stressed in the tests. This approach allows the Federal Reserve to understand the component models’ effects on the numerator of the post-stress capital ratio and to put individual component model results on a common scale. But some component models also affect the denominator in the post-stress capital ratios. For example, the accrual loan loss models (see fig. 8) that estimate losses on different portfolios of loans (such as automobile or commercial loans) also provide an estimate used in some risk-weighted asset calculations. Thus, the Federal Reserve’s approach to sensitivity analysis does not reflect a full consideration of the effects of a component model’s risk and uncertainty on that of the post-stress capital ratios. Model documentation standards from the Model Oversight Group indicate that modeling teams should document the empirical performance of a model to support its validity in projecting stress test losses, including addressing—such as through sensitivity analysis—how a model’s output responds to changes in key inputs and parameters. The oversight group also has issued a guidance memorandum to modeling teams on the types of sensitivity and uncertainty analysis they are encouraged to conduct during model development.
However, these guidance documents do not address conducting sensitivity and uncertainty analyses that examine how modeling decisions affect the overall stress test results. Federal Reserve staff told us that performing sensitivity and uncertainty analyses of the system of models was unnecessary because the system was largely additive—a mathematical feature that ensures that tests on a component model will fully capture its effects on the system of models—so that system-wide assessments would be redundant. However, as discussed above, key interactions exist between component models such that their joint outcome would not simply be a sum of the component model outcomes. Federal Reserve staff also explained that performing sensitivity and uncertainty analyses of the system of models—such as calculating statistical confidence intervals for the post-stress capital ratios—would be a complex and resource-intensive undertaking, which may not provide information with a clear use. Although the guidance does not address conducting sensitivity and uncertainty analyses in relation to effects on stress test results, reviewers from the Model Validation Unit who worked on the 2014 soundness review of a stress test model used to project asset balances (balances model) recommended that the modeling team assess the sensitivity of the post-stress capital ratios to the balances model assumptions. The balances model is the root model for the entire system of models—it is a direct or indirect input for almost all other models—and therefore its design is of particular importance to the accuracy, robustness, and stability of the supervisory stress test process (see fig. 9). This model makes assumptions related to market share and portfolio mix for firms subject to the stress tests.
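A system-level uncertainty analysis of the kind mentioned above, a statistical interval around the post-stress ratio, can be sketched with a toy Monte Carlo simulation. The two-model system, its distributions, and the interaction below are our assumptions for illustration; they are not the Federal Reserve's models. The sketch also shows why a non-additive system needs system-level assessment: the loss projection depends on the balances projection, so the two sources of uncertainty do not simply add.

```python
import random

random.seed(42)  # reproducible toy example

def simulate_ratio():
    """One draw through a toy two-model system with an interaction:
    projected losses depend on the balances model's output, so the
    system is not simply additive in its components."""
    balances = random.gauss(1500.0, 50.0)    # toy balances model
    loss_rate = random.gauss(0.010, 0.002)   # toy loss-rate model
    losses = loss_rate * balances            # interaction between the two
    capital = 150.0 - losses
    rwa = 0.9 * balances
    return capital / rwa

draws = sorted(simulate_ratio() for _ in range(10_000))
low, high = draws[250], draws[-251]          # approximate empirical 95% interval
print(f"95% interval for the post-stress ratio: [{low:.2%}, {high:.2%}]")
```

Propagating draws through the combined calculation, rather than testing each component in isolation, yields an interval that reflects both component uncertainty and the interaction between components.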
As the Model Validation Unit recommended in its review, one way to assess the appropriateness of the assumptions made in the balances modeling approach would be to test the effects of alternative assumptions on the post-stress capital ratios and gauge the potential consequences of any differences. Federal Reserve staff indicated that the supervisory modeling team had not tested alternatives to these specific assumptions because (1) the assumptions implement a policy decision by the Model Oversight Group, and (2) the assumptions accomplish the policy decision while remaining consistent with the oversight group principle of simplicity and transparency. However, basing modeling assumptions on policy goals does not preclude also assessing their effectiveness in accomplishing the policy goals or the risk of unintended consequences through testing the potential effects of alternative assumptions on the stress test output. As of July 2016, the modeling team had not yet addressed the recommendation of the validation unit. Without assessing risks to the post-stress capital ratios posed by the Federal Reserve’s approach to modeling, the Federal Reserve limits its ability to understand, communicate, and manage the risks and reliability of its supervisory stress test results. For example, the Federal Reserve’s model risk-management guidance states that a company’s senior management is responsible for regularly reporting to its board of directors on significant model risks from individual component models and in the aggregate. However, Federal Reserve staff are unable to communicate to the Board of Governors the range and sources of uncertainty surrounding the post-stress capital ratio estimates produced by the system of models because the Federal Reserve has not conducted the analyses necessary to do so.
Furthermore, sensitivity and uncertainty analysis can result in changes to models, and even small differences in model estimates can be the difference between the Federal Reserve objecting or not objecting to an institution’s capital plan.
Overall Risk Tolerances Not Articulated
The Federal Reserve has not articulated overall model risk tolerances—the amount of uncertainty or error margins that it would be willing to accept around the post-stress capital ratios. The Federal Reserve’s model risk-management guidance states that members of an institution’s board of directors should ensure that the level of model risk is within their tolerance. It also states that model risk-management policies approved by the board or its delegates should promote the development of targets for model accuracy and standards for acceptable levels of discrepancies. However, neither the Board of Governors nor high-level management in the Division of Banking Supervision and Regulation have identified model risk tolerances for component model output or for the overall stress test results. Instead, Federal Reserve staff said that Model Oversight Group reviews helped to ensure a consistent approach to model risk. In addition to approving decisions included in model documentation, the Model Oversight Group has reviewed options that modeling teams have developed for resolving identified problems with their models and have presented to the oversight group. The presentation includes a discussion of the advantages and disadvantages associated with each option. According to Federal Reserve documentation, the oversight group will direct the modeling team to pursue one of the options or continue developing alternatives. Federal Reserve staff told us that decisions of the Model Oversight Group were based on the principles they had developed and that they weighed a modeling team’s options against how well they met the principles.
But it was not always evident from the documentation what criteria the oversight group used to make its determination about which option to pursue. Even if one principle was cited to support a decision, it was not clear if the option was consistent with other principles, making it difficult to evaluate the consistency of Model Oversight Group decisions or their application of predetermined risk tolerances. In the absence of explicit direction about risk tolerances, supervisory modeling teams may make decisions that have consequences for model risk without evaluating the model risk against set criteria. For instance, it was not clear from the documentation we reviewed how much the Model Oversight Group had learned about the options modeling teams had considered and rejected before the presentations. In some cases we reviewed, the modeling teams had fully developed the choices for resolving model problems and were able to quantitatively compare differences between the options—which represents a form of sensitivity analysis and allows for an assessment of the model relative to an explicit risk tolerance (although it did not appear from the documentation that risk tolerances were applied in the decision process). In other cases, the team’s work was at a more preliminary stage and did not include a quantitative evaluation of the consequences of the options under consideration. Our review of model documentation also suggests that modeling teams have made model risk tolerance decisions at the individual model level with no documented reference to the impact of those decisions on other models or consistency with other modeling decisions. Furthermore, according to documents we reviewed and discussions with Federal Reserve staff, the Federal Reserve has not made any efforts to determine statistical or other thresholds at which each individual model will produce results within tolerable uncertainty ranges in relation to the post-stress capital ratio estimates.
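A threshold of the kind just discussed can be stated explicitly and checked mechanically. The sketch below is a hypothetical construction of ours, not a Federal Reserve practice, and the tolerance value is assumed:

```python
# Hypothetical sketch of an articulated model risk tolerance: a predeclared
# bound on backtest error, checked mechanically rather than judged ad hoc.

TOLERANCE_PP = 0.05  # maximum acceptable gap in percentage points (assumed)

def within_tolerance(predicted_pp, actual_pp, tolerance_pp=TOLERANCE_PP):
    """True if the predicted-versus-actual gap is inside the stated band."""
    return abs(predicted_pp - actual_pp) <= tolerance_pp

# Backtest pairs of (predicted, actual) default rates in percentage points
backtest = [(0.80, 0.78), (0.85, 0.82), (0.90, 1.02)]
for predicted, actual in backtest:
    verdict = "within" if within_tolerance(predicted, actual) else "BREACHES"
    print(f"predicted {predicted:.2f} vs actual {actual:.2f}: {verdict} tolerance")
```

Whatever the chosen bound, the point is that a predeclared tolerance converts a subjective judgment about "acceptably small" variation into a documented, repeatable test.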
Instead, some modeling teams appeared to be implicitly determining what constituted acceptably small variation between their chosen model’s predictions and the historical data. For example, one team assessed its model with statistical tests and deemed it reasonable, providing supporting evidence in charts. There was no discussion of the criteria the team used to determine its model’s soundness. Based on the documentation we reviewed, this team did not appear to have made any calculation of the practical magnitude of the consequences of these statistical tests. The charts presented to demonstrate the model’s success show a 0.1 percentage point difference between actual and predicted values in the data, in this case default rates of a loan portfolio. The supervisory modeling team asserted that this was a small difference between predicted and actual default rates. However, such a difference could have material consequences for the post-stress capital ratios of companies, as it represented 12 percent of the total predicted default rate of the portfolio (implying a total predicted default rate of roughly 0.83 percentage points). Without an articulated risk tolerance, it is not clear whether this is a large or small difference for the portfolio or the post-stress capital ratios. As for all component models, this model required the Model Oversight Group’s approval prior to going into production. However, the oversight group’s review and approval is not a substitute for an identified risk tolerance as required by the Federal Reserve’s standards. In addition, the model documentation we reviewed indicated that some supervisory modeling teams tested other individual models in addition to those they reported in the documentation. The undocumented models generally failed to meet statistical or other tests for determining which models would be included in the documentation.
But this may lead modeling teams to reject models with attributes that would be desirable at the systems level, and the decisions might not be transparent to the Model Oversight Group due to lack of documentation. According to Federal Reserve staff, the newly formed Supervisory Stress Test Model Governance Committee has plans to expand communication to the Board of Governors to provide Governors with more insight into model development, model risk, and other outstanding concerns about models. The expanded communication also may allow Governors to communicate their model risk tolerances to the Federal Reserve staff performing the supervisory stress tests and CCAR quantitative assessment. However, Federal Reserve staff told us that model risk tolerances cannot be set prior to the completion of the models. But this is not consistent with the Federal Reserve’s model risk-management guidance, which requires company management to set predetermined thresholds of acceptability and for senior management to ensure that the level of model risk is within their tolerance. In the same manner as for other major areas of risk, tolerances can be articulated in a number of ways, including a combination of quantitative and qualitative approaches. Without systematically identifying and communicating acceptable levels of risk in its supervisory stress test models, the Federal Reserve may be limited in its ability to effectively evaluate and manage its model risk.
Conclusions
The stress test programs implemented by the Federal Reserve during and since the financial crisis of 2007–2009 have played a key role in supervisory efforts to evaluate and maintain the stability of the U.S. financial system. Overall, they represent important advances that augment supervisory approaches to capital adequacy and planning that were in place before the crisis.
The Federal Reserve and other bank regulators (i.e., FDIC and OCC) have issued similar stress test rules, but OCC has made greater use of supervisory flexibility—granting extensions to and exemptions from the requirements’ application—in implementing them. This inconsistent approach to implementation could contribute to competitive disadvantages between institutions and inconsistent oversight of risk management by the regulators. The Federal Reserve has integrated DFAST and CCAR into its supervision of large banking organizations and has made changes to the programs in recent years, at least partly in response to concerns raised by the industry and market observers; among other things, the changes adjusted the timing of the exercises, consolidated guidance on supervisory expectations for capital planning, and modified certain technical aspects of capital distribution restrictions and capital action assumptions. The Federal Reserve has established an organizational structure for its CCAR assessments that is guided by core principles and some best practices, and it continues to annually refine and develop its stress test models. However, limitations in analytical approaches and in disclosure present challenges to risk assessment by the Federal Reserve and to transparency. In some cases, the Federal Reserve has not always followed its own guidance or principles.
Quantitative assessment. The Federal Reserve has based its determinations on the results of both the supervisory and company-run stress tests. However, this creates tension between companies’ desire to avoid failing the CCAR quantitative assessment and the robustness of their stress test decisions. By including company-run tests in the CCAR quantitative assessment, the Federal Reserve limits the risk-management and capital planning benefits for participating companies—one of the Federal Reserve’s goals for CCAR—without significantly increasing the effectiveness of the quantitative assessment.
Qualitative assessment disclosure and communication. Although it uses a decision-making framework to assess qualitative CCAR submissions, the Federal Reserve has not publicly disclosed information that would allow for a better understanding of its assessment methodology or the reasons for objection determinations. Transparency is a key feature of accountability, and this limited disclosure may hinder understanding of the CCAR program and limit public and market confidence in the program and the extent to which the Federal Reserve can be held accountable for its decisions. The Federal Reserve also has not regularly updated guidance to firms about supervisory expectations and peer practices related to the qualitative assessment. Companies that must meet these expectations annually may face challenges from the irregular timing of communications, which could limit the Federal Reserve’s achievement of its CCAR goals. In addition, the Federal Reserve has not communicated time frames for responding to questions it receives through the CCAR communications mailbox, which could hinder companies’ management and planning of their CCAR submissions and limit their ability to address supervisory concerns in a timely fashion.
Scenario design. The Federal Reserve has conducted limited analysis of some decisions that are important to designing stress test scenarios. IMF principles for supervisory stress testing highlight the risks of basing scenario design decisions solely on historical experience, but the Federal Reserve’s decisions about the severity of its scenarios have been driven by U.S. postwar historical experience. Without broader consideration, the Federal Reserve could miss opportunities to assess and guard against relevant but unprecedented risks to the banking system. In addition, the Federal Reserve has not explicitly analyzed how to balance the influence of scenario severity choices on banking system resiliency against their potential economic effects.
Without more careful assessment of the trade-offs associated with scenario severity, the Federal Reserve cannot be reasonably assured that the scenario design process balances any improvements in the resiliency of the banking system with any impact on the cost and availability of credit. The Federal Reserve also has not conducted analyses to determine whether its single severe supervisory scenario is sufficiently robust and reliable to promote the resilience of the banking system against a range of potential crises. Such analyses—including performing sensitivity analysis involving multiple scenarios—could help the Federal Reserve understand the range of outcomes that might result from different scenarios and explore trade-offs associated with reliance on a single severe supervisory scenario. Additionally, the Federal Reserve has not assessed whether or how changes to the supervisory scenarios could inadvertently amplify economic cycles (procyclicality)—which its scenario design policy aims to avoid—until after it has finalized the scenarios. Without additional analysis prior to completing and publishing its scenarios, the Federal Reserve cannot be reasonably assured that small adjustments to the scenario variables would produce outcomes that neither amplify nor dampen economic cycles. Model risk management. The Federal Reserve’s model risk-management efforts have not focused on the system of stress test models and how component modeling choices affected overall stress test results. In this sense, the Federal Reserve has limited its perspective, and it has not always followed its own guidance for banking institutions on model risk-management practices. The Federal Reserve has not assessed its entire system of models in relation to the model development principles that it has applied to individual component models. 
By not applying those principles to the system of models, the Federal Reserve has limited its ability to manage the extent to which model risk is introduced into the supervisory stress test models. It has not developed appropriate documentation of the system of models that would allow for effective management of the risks posed by component model interactions. Without such documentation, the Federal Reserve’s ability to effectively identify and manage model risk from the entire system of models is limited, and staff may miss important connections between elements of component models, which in turn may limit understanding of risks inherent in their modeling choices. The Federal Reserve has not conducted sensitivity and uncertainty analyses of how its modeling affects the post-stress capital ratios. Without such assessments, the Federal Reserve limits its ability to understand, communicate, and manage the risks and reliability of its supervisory stress test results. Furthermore, sensitivity and uncertainty analysis can result in changes to models, and even small differences in model estimates can be the difference between the Federal Reserve approving or objecting to an institution’s capital plan. Staff have been unable to communicate information about the range and sources of uncertainty surrounding the post-stress capital ratio estimates to the Board because the Federal Reserve has not conducted the necessary analyses. Unless staff communicate such information, the Board may not be fully informed of significant model risks from individual component models and in the aggregate, including when making decisions based on stress test results. Neither the Board of Governors nor senior staff have identified risk tolerances for model output or overall stress test results. 
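The report does not describe the Federal Reserve's actual models, but the kind of uncertainty analysis discussed above can be illustrated with a toy Monte Carlo sketch. Everything below—the model structure, the parameter values, and the regulatory minimum—is a hypothetical assumption chosen only to show the technique:

```python
import random

def post_stress_ratio(start_ratio, loss_rate, ppnr_rate):
    """Toy post-stress capital ratio: starting ratio minus projected
    losses plus pre-provision net revenue, all as shares of assets.
    (Hypothetical; real supervisory models are far more complex.)"""
    return start_ratio - loss_rate + ppnr_rate

def uncertainty_analysis(n_draws=10000, seed=7):
    """Perturb key model parameters around their point estimates and
    summarize the spread of the resulting post-stress ratios."""
    rng = random.Random(seed)
    ratios = sorted(
        post_stress_ratio(
            start_ratio=0.120,
            loss_rate=rng.gauss(0.040, 0.005),   # losses: mean 4.0%, sd 0.5%
            ppnr_rate=rng.gauss(0.015, 0.003),   # PPNR: mean 1.5%, sd 0.3%
        )
        for _ in range(n_draws)
    )
    return {
        "p5": ratios[int(0.05 * n_draws)],
        "median": ratios[n_draws // 2],
        "p95": ratios[int(0.95 * n_draws)],
    }

if __name__ == "__main__":
    summary = uncertainty_analysis()
    # A decision maker can then see whether a regulatory minimum
    # (here, a hypothetical 4.5 percent) lies inside the uncertainty band.
    print(summary)
```

A summary like this—an uncertainty band around the point estimate rather than a single ratio—is the kind of information the report argues staff should be able to communicate to the Board during deliberations.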
Without systematically identifying and communicating acceptable levels of risk in its supervisory stress test models, the Federal Reserve may be limited in its ability to effectively evaluate and manage its model risk. Successfully managing model risk is a key objective because the Federal Reserve treats the system’s overall stress test results as precise when making CCAR determinations. A more holistic approach can help ensure that it makes these determinations with a more complete understanding of the uncertainty in stress test results and their sensitivity to component model decisions, and can help it account for them appropriately. Recommendations for Executive Action We are making the following 15 recommendations: To help improve the consistency of federal banking regulators’ stress test requirements and help ensure that institutions overseen by different regulators receive consistent regulatory treatment, the heads of the Federal Reserve, FDIC, and OCC should harmonize their agencies’ approach to granting extensions and exemptions from stress test requirements. To help provide stronger incentives for companies to perform company-run stress tests in a manner consistent with Federal Reserve goals, the Federal Reserve should remove company-run stress tests from the CCAR quantitative assessment. To increase transparency and improve CCAR effectiveness, the Federal Reserve should take the following four actions: Publicly disclose additional information that would allow for a better understanding of the methodology for completing qualitative assessments, such as the role of ratings and rankings and the extent to which they affect final determination decisions. For future determinations to object or conditionally not object to a company’s capital plan on qualitative grounds, disclose additional information about the reasons for the determinations. 
Publicly disclose, on a periodic basis, information on capital planning practices observed during CCAR qualitative assessments, including practices the Federal Reserve considers stronger or leading practices. Improve policies for official responses to CCAR companies by establishing procedures for notifying companies about time frames relating to Federal Reserve responses to company inquiries. To strengthen the scenario design process, the Federal Reserve should assess—and adjust as necessary—the overall level of severity of its severely adverse scenario by taking the following two actions: establish a process to facilitate proactive consideration of levels of severity that may fall outside U.S. postwar historical experience, and expand consideration of the trade-offs associated with different degrees of severity. To improve understanding of the range of potential crises against which the banking system would be resilient and the outcomes that might result from different scenarios, the Federal Reserve should assess whether a single severe supervisory scenario is sufficient to inform CCAR decisions and promote the resilience of the banking system. Such an assessment could include conducting sensitivity analysis involving multiple severe supervisory scenarios—potentially using CCAR data for a cycle that is already complete, to avoid concerns about tailoring the scenario to achieve a particular outcome. To help ensure that Federal Reserve stress tests do not amplify future economic cycles, the Federal Reserve should develop a process to test its proposed severely adverse scenario for procyclicality annually before finalizing and publicly releasing the supervisory scenarios. 
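One source of the procyclicality concern above is how scenario severity depends on the economy's starting point. The sketch below illustrates a countercyclical severity rule of the kind the report discusses (limiting the unemployment increase in a downturn); the rule's constants and exact form are illustrative assumptions, loosely patterned on features of the Federal Reserve's published scenario design policy rather than taken from it:

```python
def severe_unemployment_peak(current_rate, floor=10.0, typical_increase=4.0):
    """Simplified countercyclical rule: the severely adverse scenario's
    peak unemployment rate is the starting rate plus a typical increase,
    but never below a fixed floor. When the starting rate is already
    high (a downturn), the implied *increase* is smaller, which damps
    procyclicality. All constants here are illustrative."""
    return max(current_rate + typical_increase, floor)

# In an expansion (low starting rate), the scenario adds more stress:
print(severe_unemployment_peak(4.0))   # 10.0 -> an increase of 6 points
# In a downturn (high starting rate), it adds less:
print(severe_unemployment_peak(9.0))   # 13.0 -> an increase of 4 points
```

Testing such a rule against hypothetical starting conditions before finalizing the scenarios is the kind of ex ante procyclicality check the recommendation calls for.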
Finally, to improve the Federal Reserve’s ability to manage model risk and ensure that decisions based on supervisory stress test results are informed by an understanding of model risk, the Federal Reserve should take the following five actions: Apply its model development principles to the combined system of models used in the supervisory stress tests. Create an appropriate set of system-level model documentation, including an overview of how component models interact and key assumptions made in the design of model interactions. Design and implement a process to test and document the sensitivity and uncertainty of the model system’s output—the post-stress capital ratios used to make CCAR quantitative assessment determinations—including, at a minimum, the cumulative uncertainty surrounding the capital ratios and their sensitivity to key model parameters, specifications, and assumptions from across the system of models. Design and implement a process to communicate information about the range and sources of uncertainty surrounding the post-stress capital ratio estimates to the Board during CCAR deliberations. Design and implement a process for the Board and senior staff to articulate tolerance levels for key risks identified through sensitivity testing and for the degree of uncertainty in the projected capital ratios. Agency Comments and Our Evaluation We provided a draft of this report to the Federal Reserve, FDIC, and OCC for review and comment. The Federal Reserve, FDIC, and OCC provided written comments, which we have reprinted in appendixes III, IV, and V, respectively. The Federal Reserve, FDIC, and OCC also provided technical comments that we have incorporated, as appropriate. In their written comments, the Federal Reserve, FDIC, and OCC generally agreed with the recommendation that the heads of the Federal Reserve, FDIC, and OCC should harmonize their agencies’ approach to granting extensions and exemptions from stress test requirements. 
The FDIC agreed that a consistent approach to extensions and exemptions was important and noted its commitment to coordinating closely with the Federal Reserve and OCC. The Federal Reserve, FDIC, and OCC noted that the agencies already coordinate closely in administering their stress testing programs; going forward, each stated that it would coordinate with the other agencies at least annually, and more frequently if appropriate, to discuss any planned extensions and exemptions prior to any action. In its written comments, the Federal Reserve generally agreed with the report’s other 14 recommendations and offered responses in the following areas: Regarding our recommendation to exclude company-run stress tests from the CCAR quantitative assessment, the Federal Reserve noted in its letter that the agency was already considering a proposal that would set post-stress capital requirements for covered institutions based solely on the supervisory stress tests. It noted that this proposal was consistent with our recommendation. While we have not yet had the opportunity to assess this proposal in detail, excluding the company-run tests from such a capital requirement could improve incentives for the company-run stress tests. Regarding our recommendation to strengthen the scenario design process by considering levels of severity that fall outside U.S. postwar history, the Federal Reserve noted in its letter that the 2012 and 2013 severely adverse scenarios featured unemployment rates that were above what has been experienced in postwar U.S. history. However, the level of the unemployment rate for these years does not imply an established process designed to facilitate a consistent consideration of severely adverse scenarios outside of the postwar historical experience. 
In response to the Federal Reserve’s written comments, we modified the language in our recommendation to clarify that we are recommending an established process for a broader consideration of severity, given that the current scenario design policy and process are focused on selecting economic conditions that reflect the severity of postwar U.S. recessions. Without consistently and proactively considering levels of severity outside postwar U.S. historical experience, the Federal Reserve could miss opportunities to assess and guard against relevant but unprecedented risks to the banking system. Regarding our recommendation to expand consideration of the trade-offs associated with different degrees of severity, the Federal Reserve noted in its letter that the scenario design framework was not designed to generate the most severe potential outcomes since that might impinge on credit availability. However, our recommendation does not call for the Federal Reserve to generate scenarios that represent the most severe potential outcomes. Our recommendation calls for the Federal Reserve to assess whether more severe, or less severe, scenarios might better balance changes in resiliency against the need to extend credit. As we noted in our report, without a more careful assessment of scenario severity, the Federal Reserve cannot be reasonably assured that the scenario design process balances any improvements in the resiliency of the banking system with any impact on the cost and availability of credit. Regarding our recommendation to assess the sufficiency of a single severe scenario for the supervisory stress tests, the Federal Reserve noted in its letter that expanding the number of scenarios would be costly and burdensome. As we noted in the report, using a single severe scenario could limit the resources required to design and execute the stress tests. 
Although the Federal Reserve incorrectly states in its letter that we describe these costs as “substantial,” we did not assess these potential costs in this report. More importantly, our recommendation does not call for the Federal Reserve to increase the number of severe supervisory scenarios. Our recommendation calls for the Federal Reserve to assess the sufficiency of a single severe supervisory scenario. Absent such an assessment—which could be supported by sensitivity analysis using more than one severe supervisory scenario—CCAR decisions may not reflect the uncertainty in stress test outcomes that might result from different scenarios. Regarding our recommendation to test the severely adverse scenario for procyclicality before finalization and public release, the Federal Reserve noted in its letter that the scenario design process had a feature designed to counteract procyclicality. It also noted that additional changes were under consideration to further reduce procyclicality (i.e., further limiting the increase in the unemployment rate during a downturn). However, as we noted in the report, given the complexity of the system of models, without conducting additional testing before releasing scenarios, the Federal Reserve cannot be reasonably assured that small adjustments to the unemployment rate would produce outcomes that neither amplify nor dampen economic cycles. Regarding the recommendations to increase transparency and improve CCAR effectiveness, the Federal Reserve stated that it had taken steps to enhance transparency and highlighted guidance released since 2011, including supervisory letters released in 2015 and additional details in the CCAR 2016 results disclosure. Importantly, the Federal Reserve stated that it will continue to enhance transparency in the areas recommended in our report. 
In addition, the Federal Reserve stated that it will continue to enhance the process for responding to firms’ inquiries while noting that complex questions may take longer to resolve. Regarding our recommendation to improve documentation of the system of models, the Federal Reserve asserts that it already maintains comprehensive documentation of the development, assessment, validation, and finalization of its system of models. While the Federal Reserve does maintain extensive documentation of each element of the system of models, comprehensive documentation of the system of models as a whole requires documentation of how component models interact and key assumptions made in the design of model interactions, which the Federal Reserve was not able to provide to us. Without this additional documentation, the Federal Reserve’s ability to effectively identify and manage model risk from the entire system of models is limited, which may limit understanding of the risks inherent in its modeling choices. Regarding our recommendation to test and document the sensitivity and uncertainty of the model system’s output used to make its quantitative determinations, the Federal Reserve notes that it already assesses how model assumptions impact post-stress capital ratios. However, the Federal Reserve did not provide us with documentation that demonstrated any comprehensive assessments that tested the mathematical and statistical implications of the design of its system of models. Lack of such testing exposes the Federal Reserve to model risk and limits its ability to direct model development resources to the areas that introduce the most uncertainty and risk to estimates of the final post-stress capital ratios. 
Regarding our recommendations to improve communication of the range and sources of uncertainty surrounding the post-stress capital ratio estimates to the Board during CCAR deliberations and to articulate tolerance levels for key risks, the Federal Reserve notes that it established the Supervisory Stress Test Model Governance Committee in 2015 and plans to advise the Board on the state of model risk in the future. The Committee was established too recently for us to meaningfully assess its implementation during the bulk of our audit work. However, we plan to continue to monitor the Committee to determine whether its activities ultimately address our recommendations. We are sending copies of this report to the House Committee on Financial Services, the Federal Reserve, FDIC, and OCC. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or EvansL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology The Board of Governors of the Federal Reserve System (Federal Reserve) conducts two stress test exercises: the Dodd-Frank Stress Tests (DFAST) and the Comprehensive Capital Analysis and Review (CCAR). This report (1) compares the DFAST and CCAR exercises and discusses company and Federal Reserve views about the exercises’ costs and benefits; (2) examines the CCAR qualitative assessment, including the extent of communication and disclosure; (3) examines how the Federal Reserve designs the supervisory scenarios for the stress tests; and (4) examines the Federal Reserve’s modeling process for the stress tests. 
To compare DFAST and CCAR, we reviewed Section 165(i) of the Dodd-Frank Act, the Federal Reserve’s final and amended capital plan and stress test rules, and Federal Reserve policies and procedures about how it has implemented and used DFAST and CCAR in its supervision of banking institutions. We analyzed internal guidance documents and instructions, methodology, and results publications related to DFAST and CCAR; supervisory letters on stress testing and capital planning; public statements by Federal Reserve officials; and other Federal Reserve documentation about the programs. We interviewed staff from the offices of the Federal Reserve that are responsible for DFAST and CCAR, including the Federal Reserve’s Division of Banking Supervision and Regulation, regarding the scope, goals, and utilization of each program. We analyzed information and documentation on stress test extensions and exemptions from the Federal Reserve, the Office of the Comptroller of the Currency (OCC), and the Federal Deposit Insurance Corporation (FDIC). We also interviewed staff from the Federal Reserve, OCC, and FDIC about the use of extensions and exemptions. To obtain views on the stress tests and their costs and benefits, we judgmentally selected and interviewed 13 companies that participated in CCAR in 2015 and 6 companies that were subject only to DFAST. To select CCAR companies to interview, we used information on CCAR firms collected from the Federal Reserve and SNL Financial, a private provider of data on the financial services industry. We used the 31 bank holding companies that participated in the 2015 CCAR cycle as our selection pool and selected companies based on their size, industry type, organization type, prior stress test participation, and history of CCAR results. To identify and select companies that were subject to DFAST but not CCAR, we used data from the Federal Reserve and its National Information Center. 
To expand the coverage and information from each interview, we selected bank holding companies subject only to DFAST that also had a subsidiary depository institution subject to stress test requirements (including firms subject to OCC or FDIC rules). We grouped the depository institutions by charter type—(1) state-chartered banks that were members of the Federal Reserve System, (2) state-chartered banks that were not members of the Federal Reserve System, and (3) nationally chartered banks—and ordered them by total asset size. From each of the three groups, we selected the institution with the largest amount of total assets and the one with the smallest amount, provided each had a holding company subject to DFAST. If we were unable to schedule interviews with selected companies, we chose additional companies based on the same selection criteria. We also reviewed Federal Reserve statements on benefits and costs. To characterize companies’ views throughout the report, we consistently defined modifiers (e.g., “nearly all”) to quantify each group of interviewees’ views as follows: “all” represents 100 percent of the group, “nearly all” represents 80 percent to 99 percent of the group, “most” represents 60 percent to 79 percent of the group, “several” represents 40 percent to 59 percent of the group, and “some” represents 20 percent to 39 percent of the group. While the percentage of the group of interviews remains consistent, the number of interviews each modifier represents differs based on the number of interviews in that grouping: 19 total CCAR and DFAST firms, 13 CCAR companies, and 6 DFAST-only companies. Table 8 provides the number of interviews in each modifier for each group of interviews. 
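The modifier definitions above amount to a simple quantitative rule. The following small sketch (a hypothetical helper, not GAO code) makes the mapping explicit:

```python
def modifier(agreeing, total):
    """Map the share of interviewees expressing a view to the modifier
    defined in the report: all (100%), nearly all (80-99%), most
    (60-79%), several (40-59%), some (20-39%)."""
    pct = 100.0 * agreeing / total
    if pct == 100:
        return "all"
    if pct >= 80:
        return "nearly all"
    if pct >= 60:
        return "most"
    if pct >= 40:
        return "several"
    if pct >= 20:
        return "some"
    return None  # below 20 percent, no modifier is defined

# For the 13 CCAR companies interviewed:
print(modifier(13, 13))  # all
print(modifier(11, 13))  # nearly all (about 85 percent)
print(modifier(5, 13))   # some (about 38 percent)
```

As the report notes, the same modifier represents different head counts depending on the group size (19, 13, or 6 interviews).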
To examine the process used by the Federal Reserve to conduct the CCAR qualitative assessment, we reviewed the Federal Reserve’s stress test and capital plan rules and other publicly available documents including annual stress test instructions and results, periodically released guidance on supervisory expectations and other topics, and supervisory letters regarding stress testing and bank supervision. We also examined internal policies and procedures, training documents, and other program documentation related to the CCAR qualitative assessments including communication with companies and documentation of Board decision making. The policy and procedure documents included CCAR program manuals and project plans that described roles and responsibilities of staff teams and oversight groups and identified the content and timing of key tasks, among other things. Program documentation we analyzed included evaluation memorandums from four assessment teams covering 8 companies from CCAR 2014 and 2015. We judgmentally selected company-specific workpapers based on companies we interviewed and the involvement of different staff teams from across the Federal Reserve System. We also reviewed conclusion and recommendation memorandums used in making objection or non-objection determinations. To examine communication with companies, we reviewed communication procedures and company-specific feedback provided to companies including questions and responses provided through the Federal Reserve’s communication mailbox. We interviewed Federal Reserve staff about how they conduct the qualitative assessment including their policies, procedures, and decision-making process as well as their communication with companies about the assessment and the Federal Reserve’s supervisory expectations. 
We also interviewed officials from 13 CCAR companies about their experience with the qualitative assessment and interaction with the Federal Reserve including the clarity of supervisory expectations, program guidance, and feedback. We used criteria from Standards for Internal Control in the Federal Government and transparency principles, including directives issued by the Office of Management and Budget, to evaluate the Federal Reserve’s qualitative assessment process and communication with companies. To examine how the Federal Reserve designs the supervisory scenarios for the stress tests, we conducted interviews and reviewed public and nonpublic documentation related to the scenario design process. We interviewed Federal Reserve officials about the scenario design process, including key considerations and rationales for scenario design policy decisions. We interviewed officials from the International Monetary Fund (IMF) and the Bank for International Settlements regarding their own research and experience conducting stress tests. We reviewed public Federal Reserve documentation, including the Policy Statement, which governs the scenario design process; CCAR instructions; and the CCAR assessment framework. We analyzed public data from the supervisory quantitative scenarios from 2013 to 2016. We also reviewed nonpublic Federal Reserve documents including internal presentations related to proposed scenarios and CCAR results. To understand relevant standards for complex analyses and stress testing, we reviewed IMF principles for supervisory stress tests and Office of Management and Budget standards for assessing the impact of regulations. Finally, we reviewed Basel Committee on Banking Supervision analyses of the potential impact of post-crisis reforms to strengthen bank capital and liquidity regulations and IMF’s 2015 U.S. Financial Sector Assessment Program. 
To examine the Federal Reserve’s supervisory stress test modeling process, we collected and reviewed public and nonpublic Federal Reserve documentation including DFAST- and CCAR-related publications, internal guidance and procedures, policy statements, model documentation, model validation reports, and internal presentations. For model-specific documentation, we reviewed model documentation and validation reports from the DFAST/CCAR 2015 stress test cycle (the most recent available at the time of our examination) for a judgmentally selected sample of component models. After reviewing publicly available model documentation and examples of nonpublic documentation provided by the Federal Reserve, we requested and analyzed the documentation and validation reports for four supervisory modeling teams, which we selected based on our assessment of their likely importance to the system of models or their potential for presenting analytical challenges. We interviewed Federal Reserve staff from across the Federal Reserve System, including staff involved with supervisory stress test model development and validation, about the process for executing the supervisory stress tests and the Federal Reserve’s model risk management practices. We assessed the Federal Reserve’s supervisory stress test practices using the Federal Reserve’s guidance to bank holding companies on their stress test model risk management activities. To provide additional context for the Federal Reserve guidance, we reviewed publications of the National Research Council, whose members are drawn from the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine, on best practices in complex modeling and model risk management. We conducted this performance audit from December 2014 to November 2016 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Reviews by the Federal Reserve’s Model Validation Unit and the Office of Inspector General Board of Governors of the Federal Reserve System (Federal Reserve) Model Validation Unit review. The December 2014 review findings included a shortcoming in policies and procedures, insufficient model testing, insufficient planning and procedures to address the risks posed by potential key personnel departures, and incomplete structures and information flows to ensure proper oversight of model risk management. The review resulted in six recommendations to address the identified findings. Office of Inspector General (OIG) report. The OIG report identified continuing risks related to model validation and broader governance practices in four areas. Risks related to validation staffing and performance management existed that may not be mitigated by the implementation of a new staffing approach. These risks included insufficient performance feedback to supplemental reviewers, dependence on key personnel, and inadequate scrutiny of models. Risks associated with model changes that occur late in the supervisory stress testing cycle remained despite Federal Reserve steps to address these risks. The Federal Reserve did not maintain an accurate, complete, and updated inventory of models as required by the Federal Reserve’s model risk-management guidance. In reviewing a sample of validation reports, we found that limitations encountered by reviewers during model validation were not always clearly identified in the reports submitted to management. 
To address these risks, the OIG report made eight recommendations to the Division of Banking Supervision and Regulation, including to establish processes for assessing the materiality of late-stage changes to models that would clarify what changes required independent validation and would leverage reviewer resources to validate such changes. According to Federal Reserve staff and program documents, the Federal Reserve has been implementing changes to address concerns raised by both reviews. For example, in response to the Model Validation Unit review, Federal Reserve officials said the agency created the Supervisory Stress Test Model Governance Committee to coordinate and oversee its model risk-management efforts. Federal Reserve staff said that the committee met for the first time in May 2015 and explained that its agenda largely has been driven by responding to the findings of the validation unit’s governance review, in particular around model risk. One of the review’s findings was that incomplete governance structures and information flows did not ensure proper oversight of model risk management. The staff noted that the committee was formed to introduce more structure and discipline to the model governance role, including by clarifying reporting lines to the Director of the Federal Reserve’s Division of Banking Supervision and Regulation, who oversees the Model Oversight Group and the Model Validation Unit. Federal Reserve staff said that the new committee provides a formal venue for discussing differences of opinion and advising the director, some of which was done informally in the past. The staff also noted that they have been exploring opportunities to expand communication of information about model risk with the Board of Governors, including allowing Governors to communicate their preferences regarding modeling decisions and levels of risk. 
In its response to the OIG review, the Federal Reserve said that it had already made improvements to address a number of the recommendations and was taking actions in response to others. According to OIG staff, all of the report’s recommendations remained open as of July 2016. Appendix III: Comments from the Board of Governors of the Federal Reserve System Appendix IV: Comments from the Federal Deposit Insurance Corporation Appendix V: Comments from the Office of the Comptroller of the Currency Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Lawrance L. Evans, Jr., (202) 512-8678, EvansL@gao.gov. Staff Acknowledgments In addition to the contact named above, Andrew Pauline (Assistant Director), Kevin Averyt (Analyst-in-Charge), Nancy Barry, Abby Brown, A. Nicole Clowers, Aaron Colsher, Michael Hoffman, Risto Laboski, Marc Molino, Barbara Roesmann, and Jessica Sandler made key contributions to this report. Other assistance was provided by Vida Awumey, Don Brown, David Dornisch, Janet Eackloff, Nathan J Gottfried, Nicholas John, Rob Letzler, Joseph O’Neill, Nadine Garrick Raidbard, Anne Stevens, Karen Tremba, and Jason Wildhagen.
The Federal Reserve has two stress test programs for certain banking institutions it supervises. DFAST encompasses stress tests required by the Dodd-Frank Act. CCAR comprises a qualitative assessment of firms' capital planning processes and a quantitative assessment of firms' ability to maintain sufficient capital to continue operations under stress. Questions have been raised about the effectiveness and burden of requiring two stress test programs. GAO was asked to review these programs and their effectiveness. This report examines how the stress test programs compare, the CCAR qualitative assessment, and the design of the stress test scenarios and models. GAO analyzed Federal Reserve documents including rules, guidance, and internal policies and procedures on DFAST and CCAR implementation and assessed practices against federal internal control standards and other criteria. GAO also interviewed Federal Reserve staff and officials of 19 banking institutions selected based on characteristics such as their size, prior stress test participation, and history of CCAR results. The Board of Governors of the Federal Reserve System (Federal Reserve) has two related supervisory programs that involve stress testing but serve different purposes. Stress tests are hypothetical exercises that assess the potential impact of economic, financial, or other scenarios on the financial performance of a company. Stress tests of banking institutions typically evaluate whether the institutions have sufficient capital to remain solvent under stressful economic conditions. The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) established statutory stress test requirements, known as the Dodd-Frank Act Stress Tests (DFAST), for Federal Reserve-supervised banking institutions with more than $10 billion in total consolidated assets. DFAST projects how banking institutions' capital levels would fare in hypothetical stressful economic and financial scenarios. 
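The arithmetic behind such a capital-level projection can be made concrete with a short sketch. This is a hypothetical illustration only: the dollar figures, the 4.5 percent minimum used here, and the simple ratio function are invented for the example and do not represent the Federal Reserve's actual supervisory models.

```python
# Hypothetical sketch of a post-stress capital check; all figures are invented.

def post_stress_capital_ratio(capital, projected_stress_losses, risk_weighted_assets):
    """Capital remaining after projected stress losses, as a share of risk-weighted assets."""
    return (capital - projected_stress_losses) / risk_weighted_assets

REQUIRED_MINIMUM = 0.045  # illustrative regulatory minimum, not an official figure

ratio = post_stress_capital_ratio(
    capital=70.0,                  # pre-stress capital, $ billions (invented)
    projected_stress_losses=45.0,  # model-projected losses under the severe scenario (invented)
    risk_weighted_assets=500.0,    # invented
)
falls_below_minimum = ratio < REQUIRED_MINIMUM
```

In this toy case the post-stress ratio is 5 percent, which stays above the illustrative minimum; a projection that drove the ratio below the minimum would be quantitative grounds for supervisory concern of the kind the report describes.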
It applies to a broad range of banking institutions and consists of supervisory- and company-run stress tests that produce capital adequacy information for firms' internal use and for public disclosure. The Federal Reserve also conducts a Comprehensive Capital Analysis and Review (CCAR), which uses DFAST information to assess the capital adequacy (a quantitative assessment) and capital planning processes (a qualitative assessment) for bank holding companies with total consolidated assets of $50 billion or more. CCAR generally does not require additional stress tests and uses the same data, models, and projections used for DFAST. While the primary purpose of DFAST is to produce and disclose comparable information on the financial condition of banking institutions (the stress test results), the Federal Reserve uses CCAR to make supervisory assessments and decisions about the capital adequacy plans (including proposed capital actions such as dividend payments) of large bank holding companies. For example, the Federal Reserve may object to a company's plan if stress test results show the company's post-stress capital ratios (regulatory measures that indicate how much capital is available to cover unexpected losses) falling below required minimum levels or if the Federal Reserve's qualitative assessment deems the firm's capital planning and related processes inadequate. An objection can result in restrictions on a firm's capital distributions. Several of the companies GAO interviewed that are subject to Federal Reserve stress tests identified benefits from the tests (such as overall improvements in risk management and capital planning) and also identified costs (including for staff resources and other expenditures necessary to conduct the tests and meet the Federal Reserve's supervisory expectations). GAO found limitations in the Federal Reserve's stress test programs that could hinder their effectiveness. Qualitative assessment disclosure and communication. 
The Federal Reserve uses a framework with multiple levels of review that helps ensure consistency in assessing qualitative CCAR submissions, but it has not disclosed information needed to fully understand its assessment approach or the reasons for decisions to object to a company's capital plan. Transparency is a key feature of accountability, and such incomplete disclosure may limit understanding of the CCAR assessments and hinder public and market confidence in the program and the extent to which the Federal Reserve can be held accountable for its decisions. Federal internal control standards state the importance of relevant and timely communications with external stakeholders. The Federal Reserve has not regularly updated guidance to firms about supervisory expectations and peer practices related to the qualitative assessment. For example, it has not published observations of leading capital planning practices used in CCAR since 2014. The limited communication can pose challenges to companies that must meet these expectations annually and could hinder the achievement of CCAR goals. Scenario design. The Federal Reserve has a framework for designing stress test scenarios, but its analysis of some key design decisions has been limited. For example, the Federal Reserve has not conducted analyses to determine whether the single severe scenario it uses for the supervisory stress test is sufficient to accomplish DFAST and CCAR goals. While there are advantages to using one scenario—including simplicity and transparency—many different types of financial crises are possible, and the single selected scenario does not reflect a fuller range of possible outcomes. Without additional analysis, the Federal Reserve cannot be reasonably assured that banks are resilient against a range of future risks. 
The Federal Reserve also has not explicitly analyzed how to balance the choice of severity—and its influence on the resiliency of the banking system—with any impact on the cost and availability of credit, which could limit its ability to avoid undesired economic effects from scenario design choices. Model risk management. Federal Reserve supervisory guidance for banking institutions states that risk from individual models and also from the aggregate system of models should be managed. The Federal Reserve makes supervisory decisions based on the results of its own stress test models, but its management of model risk—the potential for adverse consequences from decisions based on incorrect or misused model outputs—has not focused on its system of models that produce stress test results. To estimate the effect of stress test scenarios on companies' ability to maintain capital, the Federal Reserve has developed individual component models that predict a company's financial performance in the scenarios. The results of these component models are combined with assumed or planned capital actions of companies and form the system of models used by the Federal Reserve. The Federal Reserve has an oversight structure for developing and using models in the supervisory stress tests but its own risk-management efforts have not targeted the system of models. For example, it has not conducted sensitivity and uncertainty analyses—important elements in the Federal Reserve's model risk management guidance—of how its modeling decisions affected overall results. Without such a focus, the Federal Reserve's ability to effectively evaluate and manage model risk and uncertainty from the entire system of stress test models will be limited. Understanding and communicating this uncertainty is critical because the outcome of the CCAR assessment can have significant implications for a company, including limiting its capital actions (such as dividend payments and share repurchases).
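The sensitivity analysis described above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical—two toy component models, their coefficients, and the aggregation step are invented for the example—and is meant only to show how perturbing one modeling input reveals its effect on the combined result of a system of models.

```python
# Hypothetical sketch of sensitivity analysis on a *system* of models.
# The component models and all numbers are invented for illustration.

def loan_loss_model(unemployment_rate):
    # Toy component model: projected loan losses, $ billions
    return 10.0 + 400.0 * unemployment_rate

def trading_loss_model(equity_price_decline):
    # Toy component model: projected trading losses, $ billions
    return 60.0 * equity_price_decline

def system_losses(unemployment_rate, equity_price_decline):
    # The aggregate result combines the component models' outputs,
    # so errors or choices in any component flow through to the whole.
    return loan_loss_model(unemployment_rate) + trading_loss_model(equity_price_decline)

baseline = system_losses(0.10, 0.30)   # severe-scenario inputs (invented)

# Sensitivity: perturb one input and observe the change in the aggregate output.
perturbed = system_losses(0.11, 0.30)  # unemployment +1 percentage point
sensitivity_to_unemployment = perturbed - baseline
```

Repeating this perturbation across each input and each component model is what allows uncertainty in the overall stress test result, rather than in any one model, to be quantified.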
Background VA is required by law to provide hospital care and medical services to certain veterans and may provide care to other veterans. In general, veterans must enroll in VA health care to receive VA’s medical benefits package, which includes a range of services such as preventive health care services and inpatient hospital services. Veterans may receive certain other health care services, such as dental care, without enrolling. VA provides these services at various types of facilities, including VAMCs. In providing these services to veterans, clinicians at VAMCs use expendable medical supplies and RME. VA has established roles and responsibilities within its system for purchasing, tracking, and reprocessing these items, as well as policies that VAMCs are required to follow when purchasing and tracking these items at their facilities. VA also has policies that VAMCs are required to follow regarding the reprocessing of RME. VA Roles and Responsibilities for Purchasing, Tracking, and Reprocessing VA headquarters is responsible for the development of policies related to purchasing, tracking, and reprocessing and is ultimately responsible for ensuring that VISNs and VAMCs are in compliance with these policies. Within VA headquarters, the Office of Acquisition, Logistics, and Construction and the Procurement and Logistics Office are responsible for policies related to the purchasing and tracking of expendable medical supplies and RME, while the Sterile Processing Department is responsible for policies related to the reprocessing of RME. Each of the 21 regional VISNs is responsible for ensuring compliance with VA’s policies at the VAMCs within its region. VISNs report to the Deputy Under Secretary for Health for Operations and Management within VA headquarters. In turn, each of the 153 VAMCs is responsible for implementing VA’s policies. 
Within each VAMC, the Acquisition Department is responsible for purchasing expendable medical supplies and RME, the Logistics Department is responsible for tracking these items, and the Sterile Processing Department is responsible for reprocessing RME. (See fig. 1 for an overview of VA’s organizational structure.) VA Policies for Purchasing, Tracking, and Reprocessing VA policies specify how VAMCs can purchase expendable medical supplies and RME. VAMCs can purchase expendable medical supplies and RME through their acquisition departments or through their clinical departments, such as the radiology department. VA’s policies include the following requirements related to veterans’ safety that VAMCs must follow when purchasing expendable medical supplies and RME: Committee review and approval. A designated VAMC committee must review and approve proposed purchases of any expendable medical supplies or RME that have not been previously purchased by the VAMC. The committee, which typically includes administrative staff and clinicians from various departments, reviews the proposed purchases to evaluate the cost of the purchase as well as its likely effect on veterans’ care. For example, the committee that reviews and approves proposed RME purchases often includes a representative from the department responsible for reprocessing RME in order to determine whether the VAMC has the capability to reprocess the item correctly and to ensure that staff are appropriately trained to do so. Proper reprocessing of RME is important to ensure that RME is safe to use and that veterans are not exposed to infectious diseases, such as Human Immunodeficiency Virus (HIV), during treatment. Signatures from two officials. All approvals for purchases of expendable medical supplies and RME must be signed by two officials, the official placing the order and the official responsible for approving the purchase. 
This process helps ensure that purchases of expendable medical supplies and RME are appropriate to use when providing care to veterans. VA has two inventory management systems that it requires VAMCs to use to track the type and quantity of expendable medical supplies and RME used in its facilities. VAMCs use information about the items in their facilities for a variety of purposes, for example to readily determine whether they have expendable medical supplies or RME that are the subject of a manufacturer recall or a patient safety alert. VA policy requires that each VAMC enter information about certain expendable medical supplies and RME in their facilities into the appropriate system. Specifically, VA policies include two key requirements related to veterans’ safety that VAMCs must follow for tracking expendable medical supplies and RME: Tracking of expendable medical supplies. VAMCs must enter information on all expendable medical supplies that are ordered on a recurring basis into the Generic Inventory Package (GIP). Tracking of RME. VAMCs must enter information on all RME that is classified as nonexpendable equipment by VA’s Office of Acquisition, Logistics, and Construction into the Automated Engineering Management System / Medical Equipment Reporting System (AEMS/MERS). VA policies include requirements designed to help ensure that VAMCs reprocess RME correctly, in order to help ensure that RME is safe for use when providing care to veterans. VA’s reprocessing policies include two key types of requirements: Training requirements. To ensure that RME is reprocessed in accordance with manufacturers’ guidelines, VA requires that each VAMC develop device-specific training for reprocessing RME. To develop this training, VA requires VAMCs to create device-specific standard operating procedures (SOP), which provide step-by-step instructions for reprocessing. 
VA also requires VAMCs to assess staff annually on their competence to reprocess RME in accordance with these SOPs. Operational requirements. To ensure that reprocessing activities are performed safely and that RME is reprocessed correctly, VA policies establish operational requirements for VAMCs, which include that VAMC staff must monitor sterilizers to ensure that they are functioning properly, use personal protective equipment when performing reprocessing activities, and segregate dirty and clean RME. Selected VA Requirements for Tracking and Reprocessing Are Inadequate to Help Ensure Veterans’ Safety We found that both the tracking and reprocessing requirements we reviewed are inadequate to help ensure the safety of veterans who receive care at VAMCs. These inadequacies create potential risks to the safety of veterans who receive care at VAMCs. However, we did not identify any inadequacies in the purchasing requirements we selected for review that may create potential risks to veterans’ safety. VAMCs Are Not Required to Track Certain Expendable Medical Supplies and RME VA does not require VAMCs to enter information about certain expendable medical supplies and RME into their inventory management systems, and therefore, VAMC inventories have incomplete information on these items. Specifically, VAMCs are not required to enter into GIP information on expendable medical supplies purchased on a nonrecurring basis. Furthermore, VAMCs are not required to enter into AEMS/MERS information on RME that VA’s Office of Acquisition, Logistics, and Construction does not classify as nonexpendable equipment. RME that is not classified as nonexpendable equipment includes certain surgical and dental instruments. As a result, none of the six VAMCs we visited had complete inventories of all of the expendable medical supplies or RME in their facilities. Incomplete inventories of these items at VAMCs can pose potential risks to veterans’ safety. 
At all six of the VAMCs we visited, we identified examples of potential risks to veterans’ safety that may result from these inadequacies in VA’s tracking requirements. For example: Limited ability to identify items on which there are alerts or recalls. In the event of a manufacturer recall or patient safety alert related to an expendable medical supply item or RME, VAMCs may be unable to use their inventory management systems to systematically determine whether the affected item is in their facilities and should therefore be removed so that it is not used when providing care to veterans. Rather, VAMC officials would have to rely on a physical search for the item throughout their facilities—and a physical search could miss items. As we reported in our 2010 testimony, VAMC officials and officials from the VA OIG told us that in response to a patient safety alert in December 2008 regarding an auxiliary water tube—a type of RME that is used with a colonoscope— VAMC officials checked their inventory management systems and concluded—incorrectly—that the tube was not used in the facility. However, in March 2009, the VAMC discovered that the tube was in use in the facility and was not being reprocessed correctly, potentially exposing 2,526 veterans to infectious diseases such as HIV, hepatitis B, and hepatitis C. Difficulty maintaining appropriate inventories. Because GIP helps VAMCs to ensure that they maintain appropriate quantities of supply items in their facilities, VAMCs with incomplete information in GIP about the supplies in their facilities may have difficulty ensuring that they maintain appropriate quantities of these items. This may result in expendable medical supplies being unavailable for veterans’ care if needed or, alternatively, excess supplies accumulating and expiring before they can be used. 
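The recall-check risk described above reduces to a simple set-membership problem: a systematic query of an inventory system can only surface items that were actually entered into it. A minimal sketch (with invented item names) of how an unentered item escapes such a check:

```python
# Hypothetical illustration (item names and data are invented): a recall
# check against an inventory system can only find items recorded in it.

inventory_system = {          # items a facility entered into its tracking system
    "suture kit",
    "dialysis tubing",
}
# An item purchased on a nonrecurring basis was never entered into the system.
items_in_use = inventory_system | {"auxiliary water tube"}

recalled_items = {"auxiliary water tube"}   # subject of a patient safety alert

# A systematic check of the inventory system misses the recalled item...
found_by_system_check = recalled_items & inventory_system   # empty set
# ...even though the item is actually in use in the facility.
actually_present = recalled_items & items_in_use
```

The gap between the two results is exactly the scenario the auxiliary water tube incident illustrates: the system-based check returns nothing, so officials must fall back on a physical search that can miss items.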
For example, in 2009 and 2010, VA headquarters officials identified expired expendable medical supplies, which were not being properly tracked in GIP, at three of the six VAMCs we visited. Had these VAMCs been properly tracking these supply items in GIP, they may have been able to maintain appropriate quantities of items and therefore avoid unavailable or expired supplies. Challenges developing required training. VAMCs with incomplete information about the RME in their inventories face challenges identifying the equipment for which they must develop device-specific reprocessing training. None of the six VAMCs we visited relied on their inventory management systems to systematically determine which types of RME they had in their facilities. In fact, officials at all six VAMCs told us that they had to use alternate methods, such as contacting individual staff members or conducting searches in each clinical department, to determine if the facility had a specific type of RME. These methods of searching for RME make it difficult for VAMCs to ensure that they identify all of the RME in their facilities for which they must develop device-specific reprocessing training—without inadvertently missing items—and may have contributed to delays in developing this training. Approximately 1 year after VA instituted the requirement for developing device-specific training for reprocessing, three of the six VAMCs we visited had not yet fully developed this training. Without appropriate training for reprocessing RME, VAMCs cannot ensure that staff in their facilities are reprocessing RME correctly so that these items are safe for use when caring for veterans. At the time of our review, VA did not have plans to immediately address the inadequacies we identified in the tracking requirements by requiring VAMCs to enter information about all expendable medical supplies and RME into VA’s inventory management systems. 
VA headquarters officials told us that they plan to address the inadequacies we identified in the tracking requirements following implementation of a new inventory management system—Strategic Asset Management. However, VA had suspended the implementation of this system as of March 2011. Although VA did not plan on revising its tracking requirements immediately, officials from two of the six VAMCs we visited told us that they have taken steps to improve the information they maintain on the expendable medical supplies at their facilities. Officials told us that they are requiring staff to enter information about all expendable medical supplies at these VAMCs into GIP, including those that are purchased on a nonrecurring basis. Selected VA Reprocessing Requirements Are Inadequate The VA reprocessing requirements we selected for review are inadequate to help ensure veterans’ safety in two respects: (1) they do not specify the types of RME for which VAMCs must develop device-specific training, and (2) VA has provided VAMCs with conflicting guidance on how to develop this training. Lack of specificity about types of RME that require device-specific training. The VA reprocessing requirements we reviewed do not specify the types of RME for which VAMCs must develop device-specific training. This inadequacy has caused confusion among VAMCs and contributed to inconsistent implementation of training for RME reprocessing. While VA headquarters officials told us that the training requirement is intended to apply to RME classified as critical—such as surgical instruments—and semi-critical—such as certain endoscopes, officials from five of the six VAMCs we visited told us that they were unclear about the RME for which they were required to develop device-specific training. 
Officials at one VAMC we visited told us that they did not develop all of the required reprocessing training for critical RME—such as surgical instruments—because they did not understand that they were required to do so. Officials at another VAMC we visited also told us that they had begun to develop device-specific training for reprocessing non-critical RME, such as wheelchairs, even though they had not yet fully completed device-specific training for more critical RME. Because these two VAMCs had not developed the appropriate device-specific training for reprocessing critical and semi-critical RME, staff at these VAMCs may not have been reprocessing all RME properly, which potentially put the safety of veterans receiving care at these facilities at risk. Conflicting guidance on the development of RME reprocessing training. While VA requires VAMCs to develop device-specific training on reprocessing RME, VA headquarters officials provided VAMCs with conflicting guidance on how they should develop this training. For example, officials at three VAMCs we visited told us that certain VA headquarters or VISN officials stated that this device-specific training should very closely match manufacturer guidelines—in one case verbatim—while other VA headquarters or VISN officials stated that this training should be written in a way that could be easily understood by the personnel responsible for reprocessing the RME. This distinction is important, since VAMC officials told us that some of the staff responsible for reprocessing the RME may have difficulty following the more technical manufacturers’ guidelines. In part because of VA’s conflicting guidance, VAMC officials told us that they had difficulty developing the required device-specific training and had to rewrite the training materials multiple times for RME at their facilities. 
Officials at five of the six VAMCs also told us that developing the device-specific training for reprocessing RME was both time consuming and resource intensive. VA’s lack of specificity and conflicting guidance regarding its requirement to develop device-specific training for reprocessing RME may have contributed to delays in developing this training at several of the VAMCs we visited. Officials from three of the six VAMCs told us that they had not completed the development of device-specific training for RME since VA established the training requirement in July 2009. As of October 2010, 15 months after VA issued the policy containing this requirement, officials at one of the VAMCs we visited told us that device-specific training on reprocessing had not been developed for about 80 percent of the critical and semi-critical RME in use at the facility. VA headquarters officials told us that they are aware of the lack of specificity and conflicting guidance provided to VAMCs regarding the development of training for reprocessing RME, and were also aware of inefficiencies resulting from each VAMC developing its own training for reprocessing types of RME that are used in multiple VAMCs. In response, VA headquarters officials told us that they have made available to all VAMCs a database of standardized device-specific training developed by RME manufacturers for approximately 1,000 types of RME and plan to require VAMCs to implement this training by June 2011. The officials also told us that VA headquarters is planning to develop device-specific training, available to all VAMCs, for certain critical and semi-critical RME for which manufacturers have not developed this training, such as dental instruments. However, as of February 2011, VA headquarters had not completed device-specific training for these RME and had not established plans or corresponding timelines for completing this training. 
VA’s Oversight of VAMCs’ Compliance with Selected Purchasing and Reprocessing Requirements Has Weaknesses VA’s oversight of VAMCs’ compliance with selected purchasing and reprocessing requirements has weaknesses, which result in VA not being able to systematically identify and address noncompliance. We did not identify any weaknesses in VA’s oversight of the tracking requirements we selected for review. Oversight of VAMCs’ compliance with the selected purchasing, tracking, and reprocessing requirements is important because, at each of the six VAMCs we visited, we identified examples of noncompliance, which may result in risks to veterans’ safety. VA headquarters officials told us that VA intends to improve oversight over the selected purchasing requirements but has not yet developed a plan for doing so. In addition, VA recently made changes to its oversight of VAMCs’ compliance with selected reprocessing requirements; however, this oversight continues to have weaknesses. VA Has Limited Oversight of VAMCs’ Compliance with Selected Purchasing Requirements We found that, in general, VA does not oversee VAMCs’ compliance with the purchasing requirements we selected for review. Specifically, neither VA headquarters nor the six VISNs that oversee the VAMCs we visited provided oversight for the committee review and approval requirement, and only one of the six VISNs provided oversight of the two-signature requirement. Consistent with the federal internal control for monitoring, which is applicable to all federal agencies, we would expect VA to oversee VAMCs’ compliance with the requirements we selected, assess the risk of VAMCs’ noncompliance with these requirements, and ensure that noncompliance is addressed. Without such oversight, VA is unable to identify and address VAMCs’ noncompliance with the selected purchasing requirements. 
During our site visits to six VAMCs, we identified examples of noncompliance with these requirements that created potential risks to veterans’ safety. VAMC committee review and approval. Officials from four of the six VAMCs we visited told us that certain expendable medical supplies—for example, those used in a limited number of clinical departments—were sometimes purchased without the required VAMC committee review and approval. Furthermore, officials from one of those four VAMCs told us that none of the expendable medical supplies it purchased were reviewed and approved by a VAMC committee. Without obtaining the required review and approval, these VAMCs may have purchased expendable medical supplies without evaluating their cost-effectiveness or likely effect on veterans’ care. Signatures of purchasing and approving officials. At one of the six VAMCs we visited, VAMC officials discovered that one staff member working in a dialysis department purchased expendable medical supplies without obtaining the required signature of an appropriate approving official. That staff member ordered the wrong supplies, which incorrectly allowed blood to pass into dialysis machines. Those supplies were used for 83 veterans, resulting in potential cross-contamination of these veterans’ blood, which may have exposed them to infectious diseases, such as HIV, hepatitis B, and hepatitis C. In January 2011, VA headquarters officials told us that they intend to develop an approach to oversee VAMCs’ compliance with the selected purchasing requirements, although VA has not yet established a timeline for developing and implementing this oversight. In addition, an official from one VISN told us in January 2011 that the VISN planned to begin overseeing VAMCs’ compliance with VA’s requirement that two signatures be obtained for purchases of expendable medical supplies and RME. 
However, the official told us that the VISN had not yet established a timeline for developing and implementing this oversight. Despite Changes Intended to Improve Its Oversight of VAMCs’ Compliance with Selected Reprocessing Requirements, VA’s Oversight Has Weaknesses Beginning in fiscal year 2011, VA headquarters directed VISNs to make three changes intended to improve its oversight of VAMCs’ compliance with the selected reprocessing requirements at VAMCs. VA headquarters recently required VISNs to increase the frequency of site visits to VAMCs—from one to three unannounced site visits per year—as a way to more quickly identify and address areas of noncompliance with selected VA reprocessing requirements. VA headquarters also recently required VISNs to begin using a standardized assessment tool to guide their oversight activities. According to VA headquarters officials, requiring VISNs to use this assessment tool will enable the VISNs to collect consistent information on VAMCs’ compliance with VA’s reprocessing requirements. Before VA established this requirement, the six VISNs that oversee the VAMCs we visited often used different assessment tools to guide their oversight activities. As a result, they reviewed and collected different types of information on VAMCs’ compliance with these requirements. VISNs are now required to report to VA headquarters information from their site visits. Specifically, following each unannounced site visit to each VAMC, VISNs are required to provide VA headquarters with information on VAMCs’ noncompliance with VA’s reprocessing requirements and VAMCs’ corrective action plans to address areas of noncompliance. Prior to fiscal year 2011, VISNs were generally not required to report this information to VA headquarters. Despite the recent changes, VA’s oversight of VAMCs’ compliance with its reprocessing requirements, including those we selected for review, has weaknesses in the context of the federal internal control for monitoring. 
Consistent with the internal control for monitoring, we would expect VA to analyze this information to assess the risk of noncompliance and ensure that noncompliance is addressed. However, VA headquarters does not analyze information to identify the extent of noncompliance across all VAMCs, including noncompliance that occurs frequently or poses high risks to veterans’ safety. As a result, VA headquarters has not identified the extent of noncompliance across all VAMCs with, for example, VA’s operational reprocessing requirement that staff use personal protective equipment when performing reprocessing activities, which is key to ensuring that clean RME are not contaminated by coming into contact with soiled hands or clothing. Three of the six VAMCs we visited had instances of noncompliance with this requirement. Similarly, because VA headquarters does not analyze information from VAMCs’ corrective action plans to address noncompliance with VA reprocessing requirements, it is unable to confirm, for example, whether VAMCs have addressed noncompliance with its operational reprocessing requirement to separate clean and dirty RME. Two of the six VAMCs we visited had not resolved noncompliance with this requirement. Compliance with this requirement is important to ensure that clean RME does not become contaminated by coming into contact with dirty RME. VA headquarters officials told us that VA plans to address the weaknesses we identified in its oversight of VAMCs’ compliance with reprocessing requirements. Specifically, VA headquarters officials told us that they intend to develop a systematic approach to analyze the information on VAMCs’ noncompliance and corrective action plans to identify areas of noncompliance across all VAMCs, including those that occur frequently, pose high risks to veterans’ safety, or have not been addressed in a timely manner. 
While VA has established a timeline for completing these changes, certain VA headquarters officials told us that they are unsure whether this timeline is realistic due to possible delays resulting from VA’s ongoing organizational realignment, which had not been completed as of April 6, 2011. Conclusions Weaknesses in VA’s processes for tracking expendable medical supplies and RME and for reprocessing RME create potential safety risks to veterans. Because VA does not require VAMCs to track information about certain expendable medical supplies and RME in their inventory management systems, VAMCs may be unaware of the complete inventory of such items at their facilities. This knowledge is critical to maintain available supplies on hand to serve veterans, to properly identify items for which manufacturers have issued recalls, and to develop training on reprocessing the RME in their inventory. Moreover, VA’s lack of specificity and conflicting guidance for developing device-specific training for reprocessing RME has led to confusion among VAMCs about which types of RME require device-specific training and how VAMCs should develop that training. This confusion has contributed to some VAMCs not developing training for their staff for some critical and semi-critical RME. Until these weaknesses are addressed, the safety of veterans receiving care at VAMCs could be at risk. A general lack of oversight of VAMCs’ compliance with selected purchasing requirements makes it difficult for VA to identify and resolve situations wherein items are purchased without proper review and approval. A failure to review and approve these purchases poses safety risks to veterans being treated in VAMCs. In fact, during our visits to VAMCs, we noted examples of expendable medical supplies that were purchased without appropriate review and approval. 
As a result, some supplies may have been purchased without evaluating the likely effect on veterans’ care, or worse yet, the wrong supplies were ordered—a mistake that potentially led to some veterans being exposed to infectious diseases. Furthermore, weaknesses in oversight of VAMCs’ compliance with the selected reprocessing requirements do not allow VA to identify and subsequently address areas of noncompliance across all VAMCs, including those that occur frequently, pose high risks to veterans’ safety, or have not been addressed by VAMCs. Providing effective oversight of purchasing and reprocessing requirements consistent with the federal standards for internal control would help VA prevent potentially harmful incidents from occurring. Recommendations for Executive Action To help ensure veterans’ safety through VA’s purchasing, tracking, and reprocessing requirements, we are making four recommendations. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following four actions: (1) Require VAMCs to enter information about all expendable medical supplies and RME into an appropriate inventory management system. (2) Develop and implement an approach for providing VAMCs with standardized training for reprocessing all critical and semi-critical RME. Additionally, hold VAMCs accountable for implementing device-specific training for all of these RME. (3) Develop and implement an approach to oversee compliance at all VAMCs with the selected purchasing requirements. (4) Use the information on noncompliance identified by the VISNs and information on VAMCs’ corrective action plans to identify areas of noncompliance across all 153 VAMCs, including those that occur frequently, pose high risks to veterans’ safety, or have not been addressed, and take action to improve compliance in those areas. Agency Comments and Our Evaluation VA provided written comments on a draft of this report, which we have reprinted in appendix II. 
In its comments, VA concurred with our recommendations and described the department’s planned actions to implement them. VA also provided technical comments, which we incorporated, as appropriate. To address our recommendation that VA require VAMCs to enter information about all expendable medical supplies and RME into an appropriate inventory management system, VA stated that it plans to take several actions that include the following. By September 30, 2011, the department plans to implement a process for tracking information on certain expendable medical supplies, which are currently not being tracked in GIP, to ensure that these items can be identified in the event of a recall. Furthermore, by September 30, 2011, VA plans to implement a pilot program for tracking certain RME, such as surgical and dental instruments, which are currently not being tracked in AEMS/MERS. To address our recommendation that VA develop an approach for providing standardized training to VAMCs on reprocessing all critical and semi-critical RME, VA stated that it is taking several actions, which include revising VA’s requirement for developing device-specific reprocessing training and providing staff training through a professional organization that specializes in RME reprocessing. In our report, we stated that VA headquarters is planning to make device-specific training available to all VAMCs for certain critical and semi-critical RME, such as dental instruments, for which manufacturers have not developed this training, but had not established a time frame for doing so. VA’s comments did not provide an update on when this training would be developed. To hold VAMCs accountable for implementing training for critical and semi-critical RME, VA reiterated that it is strengthening its oversight of VAMCs and is requiring VAMCs to develop corrective action plans to ensure that noncompliance with the training requirement is addressed. 
To address our recommendation that VA develop and implement an approach to oversee compliance with selected purchasing requirements at all VAMCs, VA stated that it plans to oversee VAMCs’ purchasing activities, including VAMCs’ compliance with our selected purchasing requirements. To do this, VA stated that by September 30, 2011, VA headquarters’ Purchasing and Logistics Office will begin requiring VISN officials to conduct routine site visits to VAMCs to help them develop action plans for addressing noncompliance with the purchasing requirements. The Purchasing and Logistics Office also plans to review and approve these action plans and follow up with VAMCs to ensure that any noncompliance is addressed. To address our recommendation that VA use information on noncompliance to identify areas of noncompliance across all VAMCs and take action to improve compliance in those areas, VA plans to analyze the results of its oversight activities to identify national concerns and target future Sterile Processing Department initiatives. In our report we stated that while VA has established a timeline for conducting this analysis, certain VA headquarters officials told us that they were unsure whether this timeline is realistic. In its comments, VA did not provide information on whether it anticipates meeting its expected timeline. VA also reiterated the changes it has made that are intended to improve its oversight of VAMCs’ compliance with its requirements. We are sending copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Scope and Methodology To examine Department of Veterans Affairs (VA) purchasing, tracking, and reprocessing requirements, we reviewed relevant VA policies and from these policies we judgmentally selected two purchasing requirements, two tracking requirements, and two reprocessing requirements that we determined were relevant to veterans’ safety issues that were identified at certain VA medical centers (VAMC) in 2008 and 2009. Specifically, the purchasing requirements we selected were relevant to a patient safety incident at the VAMC in Palo Alto, California, resulting from the improper purchase and use of dialysis supplies; the tracking requirements we selected were relevant to a patient safety incident resulting from the improper reprocessing of endoscopy equipment at the VAMC in Miami, Florida; and the reprocessing requirements we selected were relevant to patient safety incidents resulting from the improper reprocessing of endoscopy equipment at the VAMCs in Augusta, Georgia; Miami, Florida; and Murfreesboro, Tennessee. After selecting these requirements for our review, we judgmentally selected six VAMCs at the following locations to visit: Albany, New York; Cheyenne, Wyoming; Detroit, Michigan; Miami, Florida; Palo Alto, California; and St. Louis, Missouri. These VAMCs represent different surgical complexity groups, serve veteran populations of different sizes, and are located in different Veterans Integrated Service Networks (VISN). (See table 1.) At these six VAMCs, we examined the adequacy of the selected purchasing, tracking, and reprocessing requirements to help ensure the safety of veterans who received care. To do this, we examined how the requirements in these policies were implemented and whether the requirements indirectly created a potential risk to the safety of veterans who receive care at VAMCs. 
Specifically, at each VAMC we visited, we reviewed applicable VAMC committee meeting minutes and other documentation on the implementation of these requirements. We also interviewed VAMC officials who were responsible for implementing the selected requirements to determine whether the requirements were adequate to help ensure veterans’ safety. At each VAMC, these officials included members of the executive leadership team, the nurse executive, the chief of the Sterile Processing Department, the patient safety manager, infection preventionists, and members of the quality management staff. To examine VA’s oversight of VAMCs’ compliance with the purchasing, tracking, and reprocessing requirements we selected, we reviewed VA’s oversight of these requirements and evaluated whether this oversight provides VA with adequate information to identify and address noncompliance. As part of this review, we reviewed VA’s oversight in the context of federal standards for internal control for monitoring. The internal control for monitoring refers to an agency’s ability to assure that ongoing review and supervision activities are conducted, with scope and frequency depending on the assessment of risks; that deficiencies are communicated to at least one higher level of management; and that actions are taken in response to findings or recommendations within established timelines. We then interviewed officials from VA headquarters, including the Sterile Processing Department, the Infectious Disease Program Office, and the System-wide Ongoing Assessment and Review Strategy; VA’s Office of Inspector General; and the six VISNs that oversee the VAMCs we visited, all of which are responsible for overseeing compliance with VA’s requirements, including those we selected for our review. 
Through our interviews, we obtained information on the oversight activities conducted by each of these entities and the extent to which these entities followed up with VAMCs to ensure that they corrected problems identified through these oversight activities. In addition, we obtained and reviewed relevant documents regarding VA oversight, including internal reports, VAMCs’ plans to correct problems identified through oversight activities, and policy memorandums. We conducted this performance audit from March 2010 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Mary Ann Curran, Assistant Director; David Barish; Kye Briesath; Alana Burke; Melanie Krause; and Michael Zose made key contributions to this report. Lisa Motley provided legal support and Krister Friday assisted in the message and report development.
Department of Veterans Affairs (VA) clinicians use expendable medical supplies--disposable items that are generally used one time--and reusable medical equipment (RME), which is designed to be reused for multiple patients. VA has policies that VA medical centers (VAMC) must follow when purchasing such supplies and equipment, tracking these items at VAMCs, and reprocessing--that is, cleaning, disinfecting, and sterilizing--RME. GAO was asked to evaluate (1) purchasing, tracking, and reprocessing requirements in VA policies and (2) VA's oversight of VAMCs' compliance with these requirements. GAO reviewed VA policies and selected two purchasing requirements, two tracking requirements, and two reprocessing requirements. At the six VAMCs GAO visited, GAO interviewed officials and reviewed documents to examine the adequacy of the selected requirements to help ensure veterans' safety. GAO also interviewed officials from VA headquarters and from six Veterans Integrated Service Networks (VISN), which oversee VAMCs, and obtained and reviewed documents regarding VA's oversight. GAO found that the VA tracking and reprocessing requirements selected for review are inadequate to help ensure the safety of veterans who receive care at VAMCs. GAO did not identify inadequacies in selected VA purchasing requirements that may create potential risks to veterans' safety. GAO found the following: (1) Tracking requirements. Because VA does not require VAMCs to enter information about certain expendable medical supplies and RME in their facilities into VA's inventory management systems, VAMCs may have incomplete inventories of these items. This, in turn, creates potential risks to veterans' safety. For example, in the event of a manufacturer recall involving these items, VAMCs may be unable to readily determine whether the items are in their facilities and should be removed and not used when providing care to veterans. (2) Reprocessing requirements. 
Although VA requires VAMCs to develop device-specific training for staff on how to correctly reprocess RME, VA has not specified the types of RME for which this training is required. VA has also provided conflicting guidance to VAMCs on how to develop this training. This lack of clarity may have contributed to delays in developing the required training. Without appropriate training on reprocessing, VAMC staff may not be reprocessing RME correctly, which poses potential risks to the safety of veterans. VA headquarters officials told GAO that VA has plans to develop training for certain RME, but VA lacks a timeline for developing this training. GAO also found weaknesses in VA's oversight of VAMCs' compliance with the selected purchasing and reprocessing requirements. These weaknesses render VA unable to systematically identify and address noncompliance with the requirements, which poses potential risks to the safety of veterans. GAO did not identify weaknesses in VA's oversight of VAMCs' compliance with the selected tracking requirements. GAO found the following: (1) Oversight over purchasing requirements. In general, VA does not oversee VAMCs' compliance with the selected purchasing requirements. While VA intends to improve oversight over these requirements, it has not yet developed a plan for doing so. (2) Oversight over reprocessing requirements. Although VA headquarters receives information from the VISNs on any noncompliance they identify as well as VAMCs' corrective action plans to address this noncompliance, VA headquarters does not analyze this information to inform its oversight. According to VA headquarters officials, VA intends to develop a plan for analyzing this information to systematically identify areas of noncompliance that occur frequently, pose high risks to veterans' safety, or have not been addressed across all VAMCs.
Background The legislative history of AMT refers to three distinct measures of income: economic income, financial statement or “book” income, and income as defined for tax purposes. A calculation of economic income would include all types of income, recognize all income when it is earned rather than when it is received, subtract all the costs of earning the income, and make adjustments for inflation. Because such a comprehensive measurement would not be based solely on market transactions, it is not done in practice. Financial statements include a comprehensive measure of income based on historical records that can be verified. In contrast to economic income, financial statement or book income does not adjust values for inflation and does not recognize certain items of income until they are received. The definition of income implicit in the tax code combines a measure of taxpayers’ ability to pay taxes with the desire to encourage certain activities through the tax code and to minimize the difficulty of administering and complying with the tax law. Despite many similarities, the three measures are substantially different from each other. The purpose of AMT is to better coordinate the definition of income for tax purposes with that of economic income and financial statement income. Corporations are required to calculate their tax liability under two sets of rules—computing their regular tax liability and their tentative AMT liability, and paying whichever is greater. If the tentative AMT is more than the regular tax, the difference between them is AMT. AMT is described in sections 55 through 59 of the Internal Revenue Code. Corporations have to keep records to calculate AMT as well as the regular tax. 
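The dual computation described above reduces to simple arithmetic: a corporation pays the greater of its regular tax and its tentative AMT, and the excess of the tentative AMT over the regular tax is the AMT amount. The following is a minimal sketch of that rule only; the function names and the dollar figures in the example are illustrative assumptions, not a full implementation of sections 55 through 59.

```python
# Sketch of the two-calculation rule: compute regular tax and tentative
# AMT separately, pay whichever is greater. The AMT amount is the excess,
# if any, of tentative AMT over regular tax. (Illustrative only; the real
# computation of each base involves the adjustments and preferences
# discussed in this report.)

def amt_owed(regular_tax: float, tentative_amt: float) -> float:
    """AMT is the excess of tentative AMT over the regular tax, if any."""
    return max(0.0, tentative_amt - regular_tax)

def total_tax(regular_tax: float, tentative_amt: float) -> float:
    """Total liability is the greater of the two computations."""
    return regular_tax + amt_owed(regular_tax, tentative_amt)

# Hypothetical example: a 34-percent regular tax of $340,000 on $1 million
# of taxable income, and a 20-percent tentative AMT of $400,000 on a
# broader $2 million AMT base.
assert amt_owed(340_000, 400_000) == 60_000   # AMT due
assert total_tax(340_000, 400_000) == 400_000 # the greater of the two
```

When the regular tax exceeds the tentative AMT, `amt_owed` returns zero and the corporation simply pays its regular tax, which is consistent with the report's observation that only a small fraction of corporations subject to AMT actually paid it in a given year.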
For tax year 1994, a corporation had to file Form 4626—used to figure AMT—if its taxable income or loss before the net operating loss deduction, plus its adjustments and preferences, totaled more than the lesser of $40,000 or the corporation’s allowable exemption amount. The corporate AMT was cited by all 17 corporations we interviewed in preparing for testimony last year as among the provisions in the Internal Revenue Code with the largest recordkeeping and compliance cost burden. The AMT rate is 20 percent, lower than the regular corporate tax rate of 35 percent now or 34 percent through 1992. However, AMT is levied on a broader tax base than the regular tax because the AMT tax base includes certain regular tax preferences and adjustments that either delay the time when income is recognized or exclude income items altogether. Two important AMT adjustments are related to depreciation and financial statement income. Depreciation is the cost incurred by a business reflecting the reduction in value of certain of its assets over time. For both the regular tax and AMT, the amount of depreciation deductions taken in a year is a certain fraction of the original purchase price of the assets. Compared with the regular tax, deductions for depreciation under AMT are smaller in the early years after an asset is placed in service and are spread out over a longer time. The book income and the adjusted current earnings (ACE) adjustments were established to ensure that firms reporting large earnings on their financial statements in a given year paid some tax in that year. Book income reported on financial statements may not equal taxable income on tax returns because some items of revenue and expenses are never included in one or the other or are reported in different years. As a result, book income may not be equal to the taxable income figure on tax returns, as explained in appendix III. The book income adjustment was part of AMT from 1987 through 1989. 
It was replaced by the ACE adjustment in 1990. The ACE adjustment relies on income tax principles to define income in a way that Congress intended to be as broad as the definition of book income. AMT limits the amount of a corporation’s net operating losses from prior years that can be deducted in calculating the current year’s income to 90 percent of tentative taxable income computed under AMT rules. In addition, it disallows the use of many credits available in the regular tax and specifically restricts the amount of foreign tax credit that can be taken for tax payments abroad. AMT is also linked to the regular tax through the AMT credit. Corporations that have paid AMT can credit these payments against their regular tax liability in future years when they pay the regular tax. However, the credit cannot be used to reduce regular tax liability below tentative AMT liability in future years. With this crediting mechanism, AMT operates partially as a prepayment of tax rather than as a permanent increase in tax liability. (App. I provides a more complete discussion of the history and mechanics of AMT.) How Much AMT Was Paid? The amount of corporate AMT paid rose from $2.2 billion in 1987 to $8.1 billion in 1990, before declining to $4.9 billion in 1992. These numbers should be considered together with the fact that recovery of AMT liability via the AMT credit has been growing, albeit slowly, as shown in figure 1. Most corporations that paid AMT in 1987 had not fully recovered their payment by 1991, the last year we were able to examine, but the total dollar volume of credits used rose from year to year. Which Corporations Paid AMT? The total number of corporations paying AMT was small. About 28,000 corporations, or about 1.3 percent of the 2.1 million subject to AMT, paid AMT in 1992. The corresponding percentage ranged from 0.7 to 1.5 percent in the 1987 through 1992 period. Although only about 28,000 firms paid AMT in 1992, many more corporations were affected by it. 
For example, almost 400,000 corporations filed the AMT form with IRS in 1992 even though they owed no AMT. Of the approximately 2.1 million corporations that were subject to AMT in 1992, about 2,000 corporations with assets of $100 million or more paid 85 percent of the total corporate AMT liability. This was a pattern that generally held true for 1987 through 1991 also. As shown in figure 2, corporations with assets of $500 million or more paid 75 percent of all AMT in 1992, irrespective of the credit they may have received. However, most corporations that paid AMT from 1987 through 1992 were relatively small. In most years, more than 70 percent of corporations paying AMT had less than $10 million in assets. In 1992, 75 percent of AMT payers had less than $10 million in assets, as is also shown in figure 2. Nevertheless, relatively large corporations were more likely than smaller corporations to pay AMT. For instance, in all years except one, about 20 percent of corporations with assets of $500 million or more paid AMT; in contrast, no more than half of 1 percent of corporations with less than $1 million in assets paid AMT. The industries in which corporations paid the most AMT were manufacturing, transportation, and finance. At the industry level, AMT generally increased the amount of tax paid by about 1 or 2 percent of taxable income. Eight specific industry subclasses that we examined—auto, steel, chemicals, utilities, transportation, paper, oil and gas extraction, and mining other than oil—had generally higher percentages of AMT payers than existed in the nation as a whole during the 6 years we examined. Firms differed from each other in how often they paid AMT and the extent to which AMT increased their taxes. Of approximately 10,000 corporations with over $50 million in assets that we tracked over a 5-year period, about half paid AMT in at least one year. Of those that paid AMT at least once, most paid it for only one year. 
Only about 160 of the 10,000 corporations we studied paid AMT in all five years. In the larger universe of all AMT payers, about a third of the AMT payers that also paid regular tax had their taxes at least doubled by AMT. Why Did Corporations Pay AMT? By far the most important elements that caused corporations to pay AMT were the depreciation adjustment for property placed in service after 1986 and the book income and adjusted current earnings adjustments. For instance, in 1992 the depreciation adjustment was included on about 87 percent of AMT returns and raised taxable income by about $23 billion. The ACE adjustment was included on about 67 percent of AMT returns and raised taxable income by about $19 billion. No other preference item or adjustment was present in more than 10 percent of AMT returns. AMT also caused corporations to pay tax by limiting their ability to take net operating loss deductions and the foreign tax credit (FTC). About 32 percent of AMT payers in 1992 included net operating losses in their AMT calculations, and about 19 percent reached the limitation on the use of the deduction. About 3 percent of AMT payers had FTC as part of their AMT computation, and about one-fourth of FTC claimants were constrained by the 90 percent FTC limit. FTC claims reduced overall AMT before credits by 32 percent. Has AMT Achieved Its Purposes? AMT has partially achieved the congressional objectives of ensuring that taxpayers with substantial economic income in a given year, and taxpayers with positive book income in a given year, pay some tax in that year. By including tax preferences in its tax base and by more closely approximating economic depreciation when inflation is low, AMT leads to a tax more closely based on economic income. 
In addition, in every year from 1987 through 1992, at least 6,000 corporations with positive book income that paid no regular tax paid some AMT, and at least 9,000 corporations with positive book income subject to regular tax paid an additional AMT amount, as shown in appendix III. AMT Leads to a Closer Measurement of Economic Income When Inflation Is Low A corporate tax based on economic income would deny many of the preferences and exclusions now in the regular tax code, index the value of assets and costs for inflation, and base depreciation deductions on economic depreciation. AMT moves the tax code closer to taxing economic income by including several preferences and exclusions in its tax base. With respect to inflation, neither the regular tax nor AMT rules adjust the measurement of income for inflation. Concerning depreciation, AMT depreciation rules lead to deductions that more closely approximate economic depreciation when inflation is low. However, AMT depreciation deviates further from economic income than does regular tax depreciation in the presence of moderate or high rates of inflation. AMT may also reduce the generous deductions of nominal interest expense (rather than inflation-adjusted interest expense) that corporations can claim at high rates of inflation. (App. III provides additional information on corporations’ book income, economic income, and AMT depreciation rules.) AMT Has Made More Corporations With Positive Book Income Pay Taxes In 1992, AMT provisions were successful in making about 9,900 corporations with positive book income and no regular tax liability pay some AMT, as shown in table 1. Also, about 13,800 corporations with positive book income subject to regular tax paid an additional AMT amount. About 4,300 corporations with negative book income also paid AMT—1,800 of these corporations paid both regular tax and AMT, and almost 2,500 of these corporations paid AMT but no regular tax. 
This payment of taxes by corporations with losses may have occurred because some revenues were recognized for financial accounting purposes after they were included on tax returns and/or expenses were recorded in accounting records before they were deducted for tax purposes, as explained in appendix III. On the other hand, AMT did not reach all corporations with positive book income. Of 2.1 million corporate returns subject to AMT in 1992, about 306,000 corporate returns reported positive book income but did not pay regular or alternative minimum tax. The vast majority of these corporations were small and had less than $40,000 in net income, so they probably qualified for the AMT exemption. Of the larger corporations with positive book income, most were investment companies, which generally flow out all their income to shareholders. Because of this feature of their business, these companies are exempt from the book income and ACE adjustments. How Might AMT Affect Corporate Investment? The effects of AMT on corporate investment are not clear. Studies and comments by economists have examined two ways in which AMT might affect investment: by (1) reducing cash flow and thus discouraging investment, or (2) changing marginal incentives to invest, leading to changes in investment. Cash Flow Corporations finance investment through internal funds—retained earnings or profits—or external funds such as debt or new stock issues. For corporations that must use external sources and pay significantly higher costs compared to their opportunity costs (earnings from investing their own funds), investment could be sensitive to the current profitability or cash-flow position of the firm. A number of recent studies have found significant effects of cash flow on investment, and some authors have concluded that some corporations find external funds significantly more expensive than internal funds. 
These studies have concluded that this is more likely to be the case for smaller firms, firms that pay relatively small amounts of dividends, firms that do not participate in the corporate bond market, and firms that cannot use working capital to smooth investment spending over time. Thus, for such firms, AMT might reduce investment by reducing cash flow and forcing them to finance investment with costly external funds. It is not clear how many AMT payers meet these conditions. No study has directly tested the extent to which such cash-flow constraints affect corporations that paid AMT. The tax return data we used were limited in their ability to directly test many of these factors. However, the data did show that most AMT is paid by relatively large corporations. To the extent that investment by large corporations is less dependent on current cash flow than is the case for small corporations, the effect of the AMT on investment would be limited. In addition, as AMT credits are reclaimed in the future, cash flow would increase at that time, possibly increasing investment. Marginal Incentives to Invest Many studies have been done on the effects of corporate income taxes on marginal incentives to invest, and several have directly investigated the effects of the AMT on marginal incentives to invest. These studies have investigated how the regular corporate income tax and AMT might affect the incentive to invest through their tax rates, depreciation provisions, the deductibility of interest payments and the nondeductibility of dividends, loss provisions, and credits for certain types of investment. Relative to the regular tax, AMT has a lower rate, a generally slower depreciation schedule, and additional limitations on credits and losses. Because the lower tax rate by itself would lower the cost of investment but the other two features would raise the cost of investment, investment incentives may be increased or decreased relative to the regular tax. 
Several studies have investigated how AMT affects incentives to invest for corporations that are consistently paying AMT or recovering AMT credits over long periods. Studies we reviewed contained the following conclusions: (1) Incentives to invest were greater under AMT than under the regular tax for firms permanently paying AMT that financed investments with equity; in this case, the value of the lower tax rate more than offset slower depreciation deductions, so the effective tax rate was lower. (2) Investment incentives were reduced under AMT relative to the regular tax for debt-financed investments; because interest is deductible under both AMT and the regular tax, a dollar of interest payments will reduce taxes by a greater amount under the higher regular tax rate. (3) For investments financed with a mixture of debt and equity, investment incentives under AMT can be higher or lower than under the regular tax, depending on the amount of debt used; for the mix of debt and equity described as typical by two authors, investment incentives are greater under AMT than under the regular tax. Another study addressed the more general situation where firms could switch from the regular tax to AMT or pay AMT and then return to the regular tax and recover all their AMT credits. In this circumstance, the effect of AMT on investment incentives is more complicated. In this case, the effect of taxes on the cost of capital investment will depend on the timing of investment relative to when and how long the corporation pays AMT, as well as on the source of financing for the investment. If depreciation deductions are taken when the firm is paying the regular tax, and income from the investment is received when the firm is paying AMT, the cost of investment is relatively low. If depreciation deductions are taken when the firm is paying AMT and income is taxed at the higher regular tax rate, the cost of investment is higher. 
Our analysis showed that the circumstance envisioned in this latter study was the more common one for AMT corporations: such firms were more likely to switch between the regular tax and AMT. We tracked the 1987 through 1991 tax situations of 10,000 corporations with assets of $50 million or more. Fifty-one percent did not pay AMT in any year. About 13 percent either paid AMT or had unrecovered AMT credits in all 5 years. The remaining 36 percent switched back and forth from the regular tax to AMT. Our review of the available studies indicated that determining the effect of AMT on investment is further complicated by the lack of consensus on how significantly actual investments are affected by changes in investment incentives. Analysts have widely differing views on how responsive investment is to changes in tax rules. Some studies have concluded that investment is very responsive to changes in tax incentives, while others have found small effects. The difficulty stems from a lack of consensus on the nontax determinants of investment; without a clear model of how other factors affect investment, it is difficult to isolate the effects of taxes, holding other factors fixed.
Objectives, Scope, and Methodology
Our objectives for this report were to (1) determine which corporations paid AMT and why they were liable for it, (2) examine whether AMT has achieved its purpose, and (3) discuss how AMT might affect corporate investment. To meet our first objective, we analyzed the IRS Statistics of Income corporate databases for 1987 through 1992, the most recent data available at the time of our review. These data files of over 70,000 tax returns per year include all corporations with assets of over $100 million and a stratified probability sample of all other corporations organized for profit. Results from firms with assets of less than $100 million are thus subject to sampling errors.
With the large sample sizes, the calculations of sampling errors for 1989 and 1990 showed that the 95-percent confidence intervals for statistics based on all AMT-paying firms were within 5 percentage points of percentage estimates and within 5 percent of the value of other estimates. Where larger confidence intervals were found, they are noted in the report. We also constructed a database consisting of tax returns for corporations that filed returns in each year from 1987 through 1991. This database included about 10,000 corporations that had assets of over $50 million in each of these years. Corporations in this database paid 73 percent of the total regular tax liability and 77 percent of all AMT paid in 1991. We tracked these corporations over time to assess their experience with AMT. The major limitations of this database are that it does not include (1) all corporations and (2) larger corporations that either went out of business between 1987 and 1991 or merged with another corporation and therefore did not file their own tax returns. To address the second objective, we reviewed AMT’s legislative history. Using the previously described tax return data, we also analyzed the relationship between the income or losses corporations showed on their books and the regular tax and/or AMT they paid. To assess whether AMT effectively taxes corporate economic income, we compared the AMT tax base with the tax base proposed by the Treasury Department in 1984. Treasury’s proposal was to change the tax system so that real economic income would be taxed. We also compared the AMT tax base with the list of tax expenditures published by the Joint Committee on Taxation to determine the extent to which AMT includes items that are preferences or exclusions in the regular tax. 
To determine whether the AMT depreciation provisions are consistent with economic depreciation, we obtained estimates of the present value of economic depreciation deductions under the regular tax and AMT from the Congressional Research Service. These estimates, while comprehensive and widely used by researchers, were based on work on economic depreciation published in 1981. Therefore, the estimates are subject to error and would not reflect any changes in economic depreciation rates that might have occurred since 1981. To meet the third objective, we reviewed various academic studies and articles. In addition, we reviewed the literature on the determinants of business investment. We did not obtain IRS comments on this report because we did not address tax administration issues. We did our work in Washington, D.C., between May 1993 and February 1995 in accordance with generally accepted government auditing standards. Appendixes II through IV provide more detail on our findings as they relate to our objectives. We are sending copies of this report to various congressional committees and Members of Congress, the Secretary of the Treasury, and other interested parties. Copies will be made available to others upon request. The major contributors to this report are listed in appendix V. If you have any questions, please contact me on (202) 512-5407.
AMT’s Purpose: Taxpayers With Substantial Income Should Pay Some Tax
In addition to the regular income tax, both corporations and individuals are subject to an alternative minimum tax (AMT). The tax system has historically tried to achieve two potentially conflicting goals. One goal has been to raise revenue in relation to taxpayers’ ability to pay, which is generally measured by annual income. Another has been to encourage certain types of economic activity thought to be beneficial to society.
This goal has been pursued through provisions (tax preferences) that exclude various types of income from tax, delay the payment of tax on certain types of income, or grant tax credits for certain activities. These two goals can conflict with each other. At various times, reports that individuals and corporations were able to pay no tax through the direct use of tax preferences and through interactions of preferences and other features of the tax code led to concerns that the ability-to-pay goal was not being met. These concerns led Congress to set limits on preferences in the regular tax code and to create AMT.
Legislative History of AMT
The idea of AMT was originally developed by the Treasury Department in 1969. Treasury studies found that some high income individuals paid little or no tax, and that many high income individuals paid tax at a lower rate than individuals with lower income. In response to these findings, Treasury proposed establishing a minimum tax for individuals. The Tax Reform Act of 1969 included an add-on minimum tax for both noncorporate and corporate taxpayers on certain tax preferences. A 10-percent tax was levied on the corporate minimum tax base, which was the sum of corporate tax preferences minus a $30,000 exemption amount and a corporation’s regular tax liability. Levied in addition to the taxpayer’s regular tax liability, this was an add-on rather than an alternative tax. The Tax Reform Act of 1976 added preferences and changed the exemption amount. In 1978, concerns about the effectiveness of the individual minimum tax led to changes. In contrast to an add-on minimum tax, the tax introduced in 1978 developed the AMT concept of levying a tax on an alternative income base when the liability under the alternative base is greater than the regular tax liability. From 1978 to 1982, individuals were subject to both AMT and the add-on minimum tax. In 1982, the add-on tax for individuals was repealed and the AMT base broadened.
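The 1969 add-on calculation described above is simple enough to state directly. The following sketch is illustrative (the function name is invented and the dollar inputs are hypothetical, but the 10-percent rate and $30,000 exemption are from the text):

```python
def add_on_minimum_tax(preferences, regular_tax, exemption=30_000):
    """1969-style corporate add-on minimum tax: 10 percent of tax
    preferences less a $30,000 exemption and the regular tax liability,
    levied in addition to the regular tax."""
    base = max(0.0, preferences - exemption - regular_tax)
    return 0.10 * base

# $500,000 of preferences and $100,000 of regular tax yield a $37,000
# add-on tax, owed on top of the regular tax.
tax = add_on_minimum_tax(500_000, 100_000)
```

Because the base is reduced by regular tax paid, a corporation with modest preferences relative to its regular tax owed no add-on tax at all.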
Throughout this period, the corporate add-on tax was essentially unchanged. In its 1984 tax reform proposal, the Treasury Department proposed changing business taxes, including the corporate income tax, so that the tax base more closely approximated the real economic income of businesses. In addition to several important structural changes, Treasury recommended eliminating over 45 existing tax preferences and limiting many others. In particular, Treasury proposed eliminating most of the preferences in the regular tax that were included in the add-on tax. Treasury concluded that a minimum tax or an add-on tax would not be necessary if the preferences in the tax code were eliminated directly. In contrast to the Treasury Department proposal, the administration’s 1985 tax reform proposal recommended that the corporate add-on minimum tax be replaced with an AMT. Also in contrast to the Treasury proposal, the President’s proposal included additional preferences in the regular tax, did not repeal others, and did not index the corporate tax base for inflation. The proposal called for an AMT under which taxpayers would calculate their income under two systems and pay AMT when it reflected greater tax liability. The President’s proposal also called for an expanded list of preferences to be covered by AMT. “Congress concluded that the minimum tax should serve one overriding objective: to ensure that no taxpayer with substantial economic income can avoid significant tax liability by using exclusions, deductions, and credits. Although these provisions may provide incentives for worthy goals, they become counterproductive when taxpayers are allowed to use them to avoid virtually all tax liability.... “In particular, Congress concluded that both the perception and the reality of fairness have been harmed by instances in which corporations paid little or no tax in years when they reported substantial earnings, and may even have paid substantial dividends, to shareholders. 
Even to the extent that these instances may reflect deferral, rather than permanent avoidance, of corporate tax liability, Congress concluded that they demonstrated a need for change.” Since the passage of TRA, several other important changes have been made to the corporate AMT. However, the overall structure of AMT has remained essentially the same. AMT is governed by sections 55 to 59 of the Internal Revenue Code.
Overview: How the Corporate AMT Works
Under current law, corporations are to calculate tax liability under two separate systems—the regular tax and AMT. To comply with the AMT provisions, taxpayers go through the following process: First, they calculate Alternative Minimum Taxable Income (AMTI). To do this, taxpayers start with their taxable income, add the value of a number of preference items and adjustments, and then deduct any available AMT net operating losses. Table I.1 shows this calculation:
Taxable income before net operating loss (NOL) deduction
Plus: AMT preference items and adjustments
Less: AMT NOL deduction (limited to 90 percent of tentative AMTI)
Equals: AMTI
Next, taxpayers calculate Tentative Alternative Minimum Tax (TAMT). To do this, they reduce AMTI by an exemption amount and multiply the remainder by the AMT tax rate, which is 20 percent. They then subtract any allowable credits, primarily the AMT foreign tax credit. Finally, taxpayers compare TAMT liability with regular tax liability. If TAMT is more than the regular tax, the taxpayer is subject to AMT. The taxpayer will pay the government the amount of TAMT liability. The difference between TAMT and the regular tax is the amount of AMT actually owed. If regular tax is more than TAMT, the taxpayer is subject to the regular tax. The calculation of regular tax owed can include a credit for AMT paid in earlier years. Because of the AMT credit, any AMT paid may be recouped in future years when the taxpayer returns to the regular tax. In this regard, AMT more closely resembles a prepayment of tax than a permanent increase in tax liability.
However, taxpayers cannot reduce their regular tax liability below TAMT through the use of the AMT credit. Table I.2 shows this calculation:
Exemption amount: generally $40,000; phased out for corporate taxpayers with AMTI above $150,000.
AMT rate: 20 percent.
AMT foreign tax credit: limit rules require worldwide AMTI to be calculated; the AMT foreign tax credit (in conjunction with allowable investment credits) cannot reduce AMT liability by more than 90 percent; the credit can be carried back 2 years or forward 5 years.
Investment credits: cannot reduce AMT by more than 25 percent.
Tentative AMT (TAMT): if tentative AMT > regular tax, tentative AMT is owed; net AMT is the amount by which TAMT exceeds regular tax liability; net AMT can be carried forward and credited against regular tax in future years.
An example will illustrate how AMT works. If a corporation computed its regular tax as $1 million and its tentative AMT as $1.5 million, it would pay $1.5 million. One million dollars of this payment would be classified as regular tax, and $0.5 million would be classified as AMT. If the same corporation found that in the next year it owed $2 million as its regular tax liability and $1 million of TAMT, the corporation would then be subject to just the regular tax. The corporation could claim a credit against its regular tax for the $0.5 million in AMT it paid the year before and then send $1.5 million to the government. In this case, it would recoup its AMT payment quickly. However, the AMT credit cannot reduce current year regular tax liability below current year TAMT. If the corporation’s TAMT in the second year had been $1.75 million instead of $1 million, it could only have claimed a credit of $0.25 million and would have had to carry the remaining $0.25 million in uncredited AMT payments ahead to future years.
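The comparison and credit mechanics in this example can be captured in a short sketch (a simplified model of the example above, not the full statutory computation; the helper name is invented):

```python
def split_liability(regular_tax, tentative_amt, credit_carryforward=0.0):
    """Return (total paid, AMT component, credit used, remaining carryforward).

    Simplified model of the comparison described in the text: the taxpayer
    pays the greater of regular tax and tentative AMT (TAMT); any excess of
    TAMT over regular tax is AMT and becomes a credit, and the credit
    cannot reduce regular tax below current-year TAMT.
    """
    if tentative_amt > regular_tax:
        amt = tentative_amt - regular_tax
        return tentative_amt, amt, 0.0, credit_carryforward + amt
    # Regular-tax year: the AMT credit is usable only down to the TAMT floor.
    used = min(credit_carryforward, regular_tax - tentative_amt)
    return regular_tax - used, 0.0, used, credit_carryforward - used

# Year 1: regular tax $1.0M, TAMT $1.5M -> pay $1.5M, of which $0.5M is AMT.
paid1, amt1, _, carryforward = split_liability(1.0e6, 1.5e6)
# Year 2: regular tax $2.0M, TAMT $1.0M -> the full $0.5M credit is usable.
paid2, _, used2, _ = split_liability(2.0e6, 1.0e6, carryforward)
# Alternative year 2: TAMT $1.75M -> only $0.25M of the credit is usable.
paid2b, _, used2b, left2b = split_liability(2.0e6, 1.75e6, carryforward)
```

In the alternative second year, the taxpayer pays $1.75 million and carries the remaining $0.25 million of credit forward, matching the example.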
AMT Preferences and Adjustments
In general, AMT preferences and adjustments reflect aspects of the regular tax that either (1) defer tax by rapidly recognizing expenses or by delaying revenue recognition, or (2) always exclude certain income from the definition of taxable income.
AMT Preferences
AMT preferences under the post-TRA AMT generally maintain the preferences that were in place under the add-on minimum tax. Table I.3 lists the AMT preference items and describes how their regular tax treatment differs from their AMT treatment. The preference items include:
Real estate depreciation (pre-1987 property)
Certified pollution control facilities amortization (pre-1987 property)
Appreciated capital gain property contributed to charity (repealed in the Omnibus Budget Reconciliation Act (OBRA) of 1993)
AMT Adjustments
AMT adjustments differ from AMT preferences in that adjustments can be positive or negative. Thus, adjustments related to the deferral of tax will generally be positive in the early years of an asset’s useful life, increasing AMTI, and negative in the later years, decreasing AMTI. Table I.4 describes their regular and AMT tax treatments. The adjustment items include:
Real estate depreciation (post-1986 structures)
Personal property depreciation (post-1986 equipment)
Long-term contract accounting (limited use of the completed-contract method, under which income is not recognized until the contract is completed, is allowed under the regular tax; the percentage-of-completion method must be used under AMT, except for home construction contracts)
Amortization of pollution control facilities (post-1986)
Certain costs that can be expensed (deducted immediately) under the regular tax
Book Income and ACE Adjustments
“With respect to corporations, Congress concluded that the goal of applying the minimum tax to all companies with substantial economic incomes cannot be accomplished solely by compiling a list of specific items to be treated as preferences.
In order to achieve both real and apparent fairness, Congress concluded that there must be a reasonable certainty that, whenever a company publicly reports significant earnings, that company will pay some tax for the year.
“For the years from 1987 through 1989, Congress concluded that this goal should be accomplished by means of a preference based upon financial statement or book income reported by the taxpayer pursuant to public reporting requirements or in disclosures made for nontax reasons to regulators, shareholders, or creditors. Congress concluded that it was particularly appropriate to base minimum tax liability in part upon book income during the first three years after enactment of the Act, in order to ensure that the Act will succeed in restoring public confidence in the fairness of the tax system.
“For taxable years beginning after 1989, Congress concluded that the book income preference should be replaced by the use of a broad-based system that is specifically defined by the Internal Revenue Code. Congress intended that this system should generally be at least as broad as book income, as measured for financial reporting purposes, and should rely on income tax principles in order to facilitate its integration into the general minimum tax system.”
Book Income Adjustment
The book income adjustment was in effect from 1987 through 1989. Under its rules, if a corporation’s adjusted net book income exceeded AMTI, 50 percent of the difference was added to AMTI. Although the book income adjustment was described as an adjustment, it was similar to a preference because it could not be negative. If net book income was less than AMTI, no adjustment was made. Because the AMT tax rate was 20 percent, effectively book income (if greater than AMTI) was taxed at a rate of 10 percent (20 percent of 50 percent). TRA specified the financial statements to be used to calculate the book income adjustment.
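The arithmetic of the adjustment can be sketched as follows (the dollar figures are hypothetical; the 50-percent and 20-percent parameters are from the text):

```python
def book_income_adjustment(adjusted_net_book_income, amti):
    """50 percent of the excess of adjusted net book income over AMTI;
    never negative, which is why it behaved like a preference."""
    return max(0.0, 0.5 * (adjusted_net_book_income - amti))

# Book income of $10M against AMTI of $4M: $3M is added to AMTI, and at
# the 20 percent AMT rate the extra tax is $0.6M, an effective 10 percent
# rate on the $6M excess.
adjustment = book_income_adjustment(10e6, 4e6)
extra_tax = 0.20 * adjustment
```

When book income fell short of AMTI, the function returns zero rather than a negative amount, consistent with the rule that no adjustment was made.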
For example, if a corporation had filed a financial statement with the Securities and Exchange Commission, this statement was to be used in the calculation. If the corporation was not required to file this statement, other audited financial statements prepared for nontax purposes could be used.
ACE Adjustment
The ACE adjustment replaced the book income adjustment in 1990. The ACE adjustment is a modified version of the calculation of earnings and profits. Conceptually, earnings and profits are a measure of the economic resources available to corporations to pay dividends without drawing down their capital. While not specifically defined in the tax code, the earnings and profits concept is developed in several code sections and regulations. Many of the adjustments required to calculate ACE involve items that are also AMT adjustments and preferences. Once ACE is calculated, it is compared to AMTI, and 75 percent of the difference between the two is added to AMTI. Unlike the book income adjustment, ACE can be a negative amount (to the extent that positive ACE adjustments were made in prior years) and can therefore reduce AMT liability. Table I.5 summarizes the ACE calculation.
AMT Limits on Net Operating Loss Deductions and Tax Credits
“In addition, Congress concluded that a change was necessary with regard to the use of net operating losses, foreign tax credits and investment tax credits to avoid all U.S. tax liability. Absent a special rule, a U.S. taxpayer with substantial economic income for a taxable year potentially could avoid all U.S. tax liability for such year so long as it had sufficient such credits and losses available. While Congress viewed allowance of the foreign tax credit and net operating loss deduction, along with the transitional relief relating to the investment tax credit, as generally appropriate for minimum tax purposes, it was considered fair to mandate at least a nominal tax contribution from all U.S.
taxpayers with substantial economic income.”
Net Operating Loss Deduction
Losses are generally recognized when they occur for financial accounting purposes, and corporations can report negative amounts of income on their tax returns. However, under current law, a corporation that loses money in a given year does not get a tax refund for that tax year. This means that expenses that would reduce taxable income and reduce taxes had the firm made money do not reduce taxes if the firm loses money. Without a carryforward or carryback provision, corporations that made profits in each year would pay less tax over time than corporations that earned the same profits over time but had some years with profits and some years with losses. Under current law, corporations can carry losses forward to 15 future years and deduct them when they have positive income. Corporations can also carry losses back 3 years. Although deductions for net operating losses are allowed in the calculation of AMTI, the deduction cannot exceed 90 percent of AMTI. With an AMT tax rate of 20 percent, this guarantees that (aside from the exemption amount and other credits) corporations subject to AMT will pay tax equal to at least 2 percent of AMTI (20 percent times at least 10 percent of AMTI).
Foreign Tax Credit (FTC)
In general, U.S. corporations are subject to U.S. income tax on their worldwide income. However, corporations that operate abroad may also be subject to foreign income taxes. Like most countries, the United States allows taxpayers a tax credit for foreign income taxes paid so that corporations that operate internationally are not taxed twice on the same income. At the same time, the amount of foreign tax that can be credited is limited so that FTCs do not offset U.S.-source income. Under AMT, taxpayers must recalculate their foreign-source income and the FTC limitations according to AMT rules. AMT also places an additional limit on FTC.
The AMT FTC cannot reduce AMT (determined without regard to the AMT net operating loss deduction) by more than 90 percent. If the AMT FTC exceeds this limit, the excess amount can be carried back 2 years or forward 5 years, as can the regular tax FTC.
Other Tax Credits
Historically, taxpayers who have undertaken a variety of activities have qualified for credits. Before TRA, corporations could earn investment tax credits for investing in qualified capital assets. Currently, corporations can earn tax credits for qualified research and development spending, income earned in U.S. possessions, spending on rehabilitation of qualified structures, wages on qualified jobs, and other tax-related activity. The purpose of these credits is to encourage certain types of activity that are thought to lead to social benefits. The regular tax places limits on the extent to which corporations can use credits to reduce their tax liability. Under AMT, these credits generally cannot be used to reduce AMT liability. Additionally, many tax credits cannot be used for the regular tax if they reduce regular tax liability below AMT liability. An exception to this rule exists for the possessions credit; it cannot reduce AMT, but it is included in the calculation of regular tax. Table I.6 compares the tax rules for the deduction of net operating losses with those for tax credits under the regular tax and AMT:
Net operating loss deduction. Regular tax: can be carried back 3 years or forward 15 years; can be used to eliminate all current year tax liability. AMT: the alternative tax net operating loss can reduce AMTI by at most 90 percent; unused alternative losses can similarly be carried forward or back.
Foreign tax credit. Regular tax: can be carried back 2 years and forward 5 years; can eliminate all U.S. tax on foreign-source income. AMT: calculated on the AMTI base; limited to 90 percent of AMT.
Investment and other general business credits. Regular tax: can be carried back 3 years and forward 15 years; cannot exceed the difference between regular tax and tentative AMT, or 25 percent of regular tax liability in excess of $25,000. AMT: generally cannot be used to reduce AMT; before 1991, corporations could use the investment credit to reduce AMT by up to 25 percent.
Possessions credit, a credit for U.S. tax on income earned in an active business in a U.S. possession. Regular tax: cannot exceed the difference between regular tax and tentative AMT; no carryforwards or carrybacks. AMT: possessions income not included in AMTI; credit cannot reduce AMT.
Other credits. Regular tax: cannot exceed the difference between regular tax and tentative AMT; no carryforwards or carrybacks. AMT: cannot be used to reduce AMT.
AMT Credit
“Finally, Congress concluded that it was desirable to change the underlying structure of the minimum tax in certain respects. In particular, to the extent that tax preferences reflect deferral, rather than permanent avoidance, of tax liability, some adjustment was considered necessary with respect to years after the taxpayer has been required to treat an item as a minimum tax preference, and potentially to incur minimum tax liability with respect to the item. Absent such an adjustment, taxpayers could lose altogether the benefit of certain deductions that reflect costs of earning income.” The rationale behind the AMT credit can be illustrated for the case of depreciation. As shown in table I.3, the depreciation rates for AMT purposes are slower and useful lives are longer than under the regular tax. This means that depreciation deductions early in an asset’s useful life are smaller than under the regular tax. Later in the asset’s useful life, depreciation deductions will be greater under the AMT schedule than under the regular tax. If a taxpayer is under AMT when an asset is purchased and later returns to the regular tax, the total amount of depreciation deductions the taxpayer claimed for the asset could be significantly less than the original cost.
In this case, the taxpayer’s larger regular tax depreciation deductions are effectively disallowed by AMT in favor of smaller deductions. The AMT credit allows the taxpayer to eventually deduct the cost of the asset, either through depreciation deductions directly or through AMT credits that restore the previously disallowed depreciation deductions. Before 1989, the AMT credit carryforward was limited to those items involving deferral of tax only. In OBRA 1989, this rule was changed so that all items that generate AMT, whether timing or permanent differences, lead to a creditable carryforward for tax years after 1989. Unlike the net operating loss deduction carryforward or foreign tax credit carryforward for the regular tax, the AMT credit has no time limit. Like these other deductions and credits, the carryforward does not earn interest. Therefore, taxpayers who use AMT credit carryforwards lose the time value of money (potential interest) on the amount of AMT paid from the time AMT liability is incurred until they can use the credit.
Which Corporations Paid AMT and Why?
This appendix contains information on the amount of corporate AMT payments, the size of the firms paying AMT, the industry breakdown of these firms, the frequency of AMT payments and AMT credits claimed, the significant elements of AMT, and the relationship of AMT to net operating losses (NOL) and to the foreign tax credit (FTC).
AMT Accounts for Significant Revenues From Corporations
Table II.1 shows regular tax and AMT revenues for the years since the major revision of AMT in 1986. AMT revenues were between $2.7 and $8.6 billion, or between 3 and 9 percent of the regular tax revenues collected during the period. The table also shows that the use of the AMT credit has grown as more firms that paid AMT use the credit against regular tax liability. AMT revenue is likely to decline in the future for several reasons.
First, the Omnibus Budget Reconciliation Act (OBRA) of 1993 made two changes that should reduce the number of taxpayers using AMT. OBRA eliminated the ACE depreciation adjustment for property placed in service after 1993. The Joint Committee on Taxation estimated that this change would reduce revenue by about $4.3 billion from 1994 through 1998. OBRA also increased the useful life for nonresidential real estate under the regular tax from 31.5 to 39 years. Because 39 years differs only slightly from the 40 years used for AMT purposes, less AMT revenue related to this real estate can be expected than otherwise. A second reason for the likely decline in AMT revenue is related to the relatively short-lived equipment placed in service since the 1986 TRA that has added to the depreciation adjustment. Much of this equipment should reach the point in its useful life where depreciation under the AMT system will be less than that under the regular tax, generating a negative adjustment. A third reason is that more taxpayers may be subject to the regular tax as the economy moves out of the recession. With fewer taxpayers paying AMT, AMT revenue should fall and recovery of past AMT credits should speed up.
While Few Corporations Have Paid AMT, Large Firms Are More Likely to Pay AMT
As table II.2 shows, only 0.7 to 1.5 percent of corporations paid AMT in any given year. For example, about 32,000 of 2.1 million 1990 corporate returns included AMT. Table II.3 shows that a high percentage of corporate AMT payers were relatively small corporations. The relationship between firm size and AMT payment stems from the fact that there are many more small corporations than large corporations. For most of the years between 1987 and 1992, more than 70 percent of AMT payers had less than $10 million in assets. Large corporations represented a small percentage of AMT payers. Table II.4 shows the percentage of corporations in each size class that paid AMT.
While table II.3 showed that most AMT payers were relatively small, small corporations were much less likely to be paying AMT than large corporations. While less than half of 1 percent of corporations with less than $1 million in assets were paying AMT, more than 20 percent of the corporations with more than $1 billion in assets were. However, since there were so many more small corporations than large ones, most AMT payers were relatively small. To further understand the relative importance of AMT, we calculated the percentage of corporate assets that were in firms that paid AMT and in those that did not. Because large corporations paid AMT more frequently, the percentage of assets that were in firms paying AMT was much larger than the percentage of taxpayers paying AMT. Thus, even though less than 2 percent of taxpayers paid AMT, table II.5 shows that about a quarter of corporate assets were in firms that paid AMT. Table II.6 shows the percentage of AMT liability paid by corporations by asset size class. Despite the fact that most AMT payers were relatively small, most AMT liability came from the largest firms. Referring to table II.3, large corporations, generally comprising about 2 to 3 percent of AMT payers, usually paid about 75 percent of AMT liability. In contrast, the smallest two size classes contained 75 percent of AMT payers, but they paid less than 10 percent of the AMT liability.
AMT Liability by Industry
Table II.7 shows the percentage of firms paying AMT by industry. The industry classifications are the major industry groups as defined by IRS. The table shows that corporations in the mining, manufacturing, and transportation industries were more likely to have paid AMT. Corporations in wholesale and retail trade and services were less likely to have paid AMT. Table II.8 shows AMT liability by industry. The data show that the manufacturing, transportation, and finance industries paid the most AMT.
In order to see the importance of AMT relative to the regular tax for different industries, we calculated industry average tax rates for the regular tax and for AMT. The regular tax rate is regular tax (not including the AMT credit) divided by taxable income (as defined under the regular tax). The AMT average tax rate is the regular tax and AMT less the AMT credit, also divided by taxable income. The difference between the two figures shows the extent to which AMT (both tax and credit) changes the aggregate tax payment of the industry. Table II.9 shows that AMT generally resulted in relatively small changes at the industry level. As requested, we computed the same information for eight industry subclasses. Table II.10 shows the percentage of firms paying AMT in these industry subclasses. The percentage of corporations that paid AMT in these subclasses was above the average for all corporations, with the possible exception of utilities due to the statistical imprecision in the percentage of that subclass.
Table II.10: Percentage of Corporations Paying AMT in Eight Industry Subclasses
Table II.11 shows AMT liability for these industries. Table II.12 shows the average tax rate without and with AMT for these eight industry subclasses.
AMT Significantly Increased Tax Liability for Some Taxpayers
For many AMT payers, AMT led to a large percentage increase in taxes owed. To determine whether AMT led to only very small tax changes or to large tax changes for AMT payers, we calculated the percentage increase in tax from AMT. For AMT taxpayers who had no regular tax liability, AMT was 100 percent of the taxes paid. As shown in appendix III (table III.8), about 40 percent of AMT payers owed no regular tax in the year they paid AMT. Table II.13 shows the percentage increase in tax resulting from AMT for AMT taxpayers who also had positive regular tax liability.
In 1990, for example, 8.5 percent of AMT payers had their total tax increased by less than 5 percent by AMT. On the other hand, a third of AMT payers had their taxes at least doubled by AMT. About Half of Large Corporations Paid AMT at Some Time In order to see whether corporations paid AMT consistently between 1987 and 1991 or fluctuated between the regular tax and AMT, it is necessary to track individual corporations over time. To do this, we developed a database containing 5 years of tax returns for corporations that had total assets of more than $50 million in each year from 1987 through 1991. This database also allows us to determine how quickly AMT payers were able to use the AMT credit. Of the approximately 10,000 corporations in the database, about 50 percent did not pay AMT at any time over the 5-year period, as shown in table II.14. Very few (about 3.2 percent of AMT payers, or about 1.6 percent of the 10,000 corporations in the database) paid AMT in all 5 years. The largest share of AMT payers paid AMT only once in the 5 years. Table II.14 also shows the percentage of assets in different categories as a percentage of the sum of all corporate assets over the 5 years. To understand whether corporations tended to pay AMT in consecutive years or moved back and forth between AMT and the regular tax, we tracked the years that taxpayers paid and did not pay AMT. Table II.15 shows the percentage of taxpayers that paid AMT in consecutive years by the number of years that they paid AMT. About two-thirds of the corporations that paid AMT twice in the 5 years did so in consecutive years. About half of 3-year payers paid in 3 consecutive years. To determine how long it took AMT payers to recover their payments via the AMT credit, we calculated the percentage of firms that had fully recovered their payment by year of AMT liability.
In making this calculation, we assumed that AMT credits claimed were applied to the earliest year of AMT payments not yet recovered. Table II.16 shows that the majority of AMT payers for tax year 1987 had not fully recovered their 1987 AMT payment via the AMT credit by the 1991 tax year. Table II.17 shows the percentage of AMT payments recovered via the AMT credit. In contrast to the preceding table, table II.17 shows the amount of credit recovered by firms that fully recovered their AMT payment and by those that only partially recovered their credits. These calculations also assume that credits claimed are allocated to the first year of AMT liability for which AMT has not been fully recovered. The table shows that less than half of 1987 AMT liability had been recovered via the AMT credit by 1991. Table II.18 shows the percentage of corporations and the percentage of assets of firms in the database that either paid AMT or paid regular tax and had not been able to reclaim all outstanding AMT credits in a particular year. The data indicate that about 40 percent of the large corporations in the database were in this position after tax year 1991. This percentage may have fallen in tax year 1992 as the amount of AMT credits claimed rose significantly, as shown in table II.1. We also calculated the length of time that corporations spent either paying AMT or recovering credits. Table II.19 shows the percentage of corporations that either paid AMT or had unusable AMT credits by the number of years that they were in this position. For example, the table shows that 9.2 percent of companies paid AMT or had unusable credits in only 1 year, which means that they paid AMT in 1 year and fully recovered the payment with the AMT credit in the following year. About 10 percent of firms either paid AMT in 2 years and recovered their credits in the next year or they paid AMT in 1 year and were unable to recover credits for an additional year.
Thirteen percent of the companies either paid AMT or had outstanding credits in all 5 years. These firms could have been AMT payers in all 5 years, paid AMT once but never recovered their credits, or paid AMT in several years and never recovered credits. Thus, while table II.14 showed that only 1.6 percent of the companies in the database paid AMT in all 5 years, 13 percent of the companies were either paying AMT or had excess AMT credits in all 5 years. The Book/ACE Adjustment and Depreciation Were the Most Significant AMT Components Table II.20 shows the relative size of the AMT preferences and adjustments. As can be seen, the book income and ACE adjustments were relatively large. The replacement of the book income adjustment with the ACE adjustment coincided with a large increase in the amount of the adjustment. Before 1990, the book income adjustment had been declining in importance. The depreciation adjustment for post-1986 property grew as more depreciable assets were placed into service after the introduction of the adjustment. As time passes from the imposition of the tax, more new assets are put into service, increasing the adjustment. At the same time, more assets reach the point where depreciation is greater under the AMT rules than under the regular tax, leading to a negative adjustment. A similar pattern is apparent for the depreciation preferences related to pre-1986 assets; as time passes, fewer assets generate positive adjustment amounts. Compared to the book/ACE adjustment and post-1986 property depreciation, the other components of AMTI were small overall, although they could be important for particular firms or industries. The importance of the depreciation and the book income and adjusted current earnings adjustments is also apparent from data on the frequency of occurrence of different AMT components, as table II.21 shows. These items increased AMTI for most AMT payers.
In contrast, the other preferences and adjustments increased AMTI for only a small percentage of AMT payers. AMT Limits Tax Credits and Deductions for Prior Losses In order to ensure at least a small tax liability from corporations with prior year losses and foreign tax credits, the AMT rules include limits on the amounts by which these deductions and credits can reduce AMTI and AMT. The rules also include an overall limit on the amount by which both the AMT net operating loss deduction and the AMT FTC together can reduce AMT liability. To determine how these rules affected AMT payers, we calculated the percentage of AMT payers that included NOLs and FTCs in their AMT computations. We also calculated the percentage by which AMT payers were able to reduce AMTI and AMT before credits, respectively, to determine whether the limitations had prevented firms from fully claiming deductions and credits. Table II.22 shows the percentage of AMT payers that claimed a deduction for prior year net operating losses. The table shows that about a third of AMT payers claimed the deduction, and in recent years the deduction reduced tentative AMTI by about 15 percent. To determine whether corporations were constrained by the 90-percent net operating loss limit, we calculated the percentage reduction in AMTI for AMT payers who had a deduction for AMT net operating losses. Table II.23 shows that a significant percentage of AMT payers with NOL deductions may have been constrained by the limitation. AMT payers can also claim AMT foreign tax credits for foreign taxes paid. Table II.24 shows that despite the fact that very few AMT payers claimed an AMT FTC, the credit reduced AMT before credits to a large extent on an aggregate level. Table II.25 shows the distribution of the percentage reduction of AMT before credits for corporations that claimed AMT FTC. The credit cannot be used to reduce AMT before credits by more than 90 percent.
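The 90-percent caps described here can be sketched as simple limit functions; all dollar figures below are hypothetical.

```python
def allowed_amt_nol(nol_carryforward, amti_before_nol):
    """The AMT NOL deduction may offset at most 90 percent of AMTI."""
    return min(nol_carryforward, 0.90 * amti_before_nol)

def allowed_amt_ftc(ftc_available, amt_before_credits):
    """The AMT foreign tax credit may offset at most 90 percent of
    AMT before credits."""
    return min(ftc_available, 0.90 * amt_before_credits)

# A firm with $12 million of AMT NOL carryforwards but only $10 million
# of AMTI is constrained: just $9 million of the deduction is usable.
print(allowed_amt_nol(12.0, 10.0))  # 9.0
# A firm whose credits fall below the cap can use them in full.
print(allowed_amt_ftc(0.5, 2.0))    # 0.5
```

A payer whose allowed deduction or credit comes out at exactly 90 percent of the base is the pattern the distribution tables use to flag firms that may have been constrained by the limits.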
The table indicates that between 25 and 37 percent of AMT FTC claimants may have been constrained by the limitation. Taxpayers who claim the AMT NOL deduction and/or AMT FTC are also subject to an overall limit. The AMT NOL deduction and AMT FTC combined cannot reduce AMT liability by more than 90 percent. Few taxpayers claimed both NOL and FTC. Table II.26 shows the extent to which AMT payers reduced AMTI through the use of the credit and the deduction. It shows that the percentage of firms that may have been constrained by the overall limitation varied, ranging from 29 percent in 1989 to 49 percent in 1992. Has AMT Achieved Its Goals? According to the legislative history, the goals of AMT are to ensure that taxpayers with substantial economic income pay some tax, and to ensure that taxpayers with positive book income pay tax in the year of positive income. Is AMT Designed to Tax Economic Income? Because we are not aware of an agreed-upon, detailed definition of economic income for corporations, we compared AMT to the proposals made by the Department of the Treasury in November 1984. The Treasury proposals were designed to tax the real economic income of individuals and businesses, both corporate and noncorporate. We also compared the AMT provisions to the Joint Committee on Taxation’s list of corporate tax expenditures, which are generally preferences and exclusions in the regular tax that deviate from a tax on economic income. The Treasury proposals provide a broad outline of a corporate tax based on economic income; the tax expenditure list goes into greater detail on particular tax code provisions. Our comparisons showed that AMT moves the tax system closer to taxing economic income by including several tax preferences in its base.
In addition, firms paying AMT will have depreciation deductions that more closely match economic depreciation than do depreciation deductions under the regular tax if inflation rates are low. However, if inflation is moderate or high, depreciation deductions under AMT can be less generous than estimates of economic depreciation would dictate, leading to an overstatement of economic income. In times of moderate or high inflation, the overstatement of income due to the depreciation provisions may indirectly reduce the understatement of income that occurs when corporations deduct nominal, rather than inflation-adjusted, interest costs on debt incurred to finance investments. However, such indirect effects would not apply to investments financed by equity. Treasury Proposal Treasury proposed three major structural changes to the corporate tax in order to tax economic income. First, it proposed that the double taxation of dividends be reduced. Under the regular corporate tax, dividends are taxed when received by shareholders but are not deducted by the corporation when paid. In contrast, interest paid is taxed when received by bondholders and is deducted by the corporation. Second, Treasury proposed that capital assets, inventories, and interest paid be indexed to inflation. Third, Treasury recommended that depreciation schedules be adjusted to more closely match estimates of economic depreciation. Economic depreciation is the reduction in the market value of a particular asset over a year. If the tax provisions for depreciation deductions matched economic depreciation, businesses would deduct the actual reduction in the value of their assets as a business cost each year. Treasury maintained that these provisions and a reduction in the preferences and exclusions in the tax code would result in a tax more closely based on economic income. 
Using this proposal as a basis for comparison, we analyzed the tax base of AMT to judge whether AMT has moved the tax base closer to economic income. First, AMT does not relieve the double taxation of dividends. The ACE adjustment further restricts the deductibility of dividends received by corporations and therefore moves the tax base further from a definition of economic income and closer to book income, reflecting another goal of AMT. Second, AMT does nothing explicitly to adjust for inflation. Many items in the Treasury proposal related to the mismeasurement of income due to inflation. Inflation reduces the value of depreciation deductions because the amount of depreciation deducted reflects the historical cost of the asset when purchased, not its current replacement value. On the other hand, inflation increases the real value of the deduction for interest paid because interest costs unadjusted for inflation are deducted rather than the inflation-adjusted interest costs. However, the Tax Reform Act of 1986 did not include comprehensive indexing provisions. Third, AMT depreciation schedules are closer to economic depreciation at 0-percent inflation, but not when the inflation rate is 3 percent or higher. The data in appendix II showed that the depreciation adjustment is a key component of AMT, responsible for $23 billion of AMTI and included on 87 percent of AMT returns in 1992. The question then is whether the AMT depreciation provisions are closer to economic depreciation than the provisions under the regular tax. Under the current tax system, depreciation deductions are calculated using the historical cost of acquiring the asset. Because neither the regular tax nor the AMT depreciation schedules include adjustments for inflation, the value of these deductions erodes as the inflation rate increases. One justification for accelerating depreciation relative to economic depreciation is to offset the effects of inflation.
Table III.1 shows one set of estimates of the present value of depreciation deductions under the regular tax and AMT per dollar invested in 22 types of equipment and 6 types of structures, for different inflation rates. The table also shows estimates for the present value of economic depreciation for these asset classes. If the value for the regular tax or AMT for a particular asset is greater than that for economic depreciation, the tax schedules allow a more generous deduction than economic depreciation. If the values are smaller, the tax schedules allow for slower, less generous depreciation deductions. For example, if a corporation purchases an automobile, it is entitled to depreciation deductions over the useful life that will eventually total the purchase price of the auto. However, since the deductions occur over time, they are worth less than the purchase price today. Table III.1 indicates that with no inflation, depreciation deductions under the regular tax today are worth 91 percent of the original investment, 89 percent under AMT depreciation, and 87 percent under economic depreciation. The table also shows the effects of inflation on depreciation deductions for regular tax and AMT; as inflation increases from 0 to 3 to 6 percent, the present value of depreciation deductions falls. Table III.2 shows the percentage difference between economic depreciation and regular and AMT depreciation. The table shows that the current regular tax depreciation schedule is generous relative to economic depreciation when there is no inflation and in most cases when inflation is 3 percent. AMT depreciation is closer to economic depreciation than regular depreciation at 0-percent inflation, but for 3- or 6-percent inflation it is less generous than economic depreciation for many assets. As the inflation rate rises to 6 percent, both regular tax and AMT would be less generous than economic depreciation would dictate for many assets. 
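The present-value comparison behind tables III.1 and III.2 can be reproduced in miniature. The depreciation schedule, real discount rate, and inflation rates below are illustrative assumptions, not the parameters used for the tables.

```python
def pv_of_deductions(schedule, real_rate, inflation):
    """Present value, per dollar invested, of depreciation deductions.
    Deductions are based on historical cost, so they are fixed in nominal
    terms and discounted at the nominal rate (real rate plus inflation)."""
    nominal = real_rate + inflation
    return sum(d / (1.0 + nominal) ** t for t, d in enumerate(schedule, start=1))

# Hypothetical 5-year straight-line schedule at a 4 percent real rate.
straight_line = [0.20] * 5
print(round(pv_of_deductions(straight_line, 0.04, 0.00), 3))  # 0.89
print(round(pv_of_deductions(straight_line, 0.04, 0.06), 3))  # 0.758
```

Because the deductions are not indexed, raising inflation from 0 to 6 percent erodes their present value, which is the pattern table III.1 reports for both the regular tax and AMT schedules.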
To the extent that the AMT depreciation provisions are less generous than economic depreciation at moderate or high inflation rates, they tend to overstate economic income. However, as mentioned above, interest expenses are overstated in real terms when inflation exists. In this context, AMT may indirectly offset this inflation advantage for corporations with sizeable debt-financed capital investment and function as an implicit limit on interest deductions. Whether such a limit is consistent with a tax on economic income depends largely on whether the personal tax is considered as well as the corporate tax. While corporations deduct interest unadjusted for inflation, this interest is in turn taxed when received at the individual level. Thus, income earned by the corporation is in fact taxed, but the revenue is received through the individual income tax rather than the corporate tax. However, many individuals are taxed at a rate lower than the corporate rate, so the deduction at the corporate level reduces taxes by an amount more than taxes are raised at the individual level. In addition, if the recipient of the interest is a pension fund, no tax is levied until the income is ultimately received by the pension recipient. For shareholders, corporate income can be received as dividends or as capital gains when stock shares are sold. Dividends are not deductible under the corporate tax, so there is no inflation-driven advantage at the corporation level for dividends. Capital gains are taxed on their amount unadjusted for inflation, overstating their real value, but have commonly been taxed under preferential rates and are taxed only when shares are sold (realized), allowing potentially substantial tax deferral. While AMT depreciation provisions may indirectly counteract inflation biases for debt at the corporate level, they do not do so for income received by shareholders. 
AMT Includes Several Tax Expenditures in Its Base AMT adjustments and preferences include some, but not all, tax expenditures, broadening the tax base and moving it closer to economic income. Table III.3 shows corporate tax expenditures, as defined by the Joint Committee on Taxation, that have an estimated revenue loss of over $100 million in 1995. The table shows which tax expenditures are included directly in AMT as preferences or adjustments and which are included indirectly through the ACE adjustment. [Table III.3 is not reproduced here.] Has AMT Ensured That Corporations With Positive Book Income in a Given Year Paid Some Tax in That Year? AMT has generated tax from some firms with positive book income that otherwise would not have paid regular tax, but the percentage of firms with book income that paid tax in a given year was not changed very much by AMT. The data indicate that AMT has been successful in ensuring that large firms with book income paid some tax in that year. The corporations with book income that did not pay AMT or regular tax were generally small, and most had net income under $40,000, the AMT exemption amount. The large corporations that had book income but paid no tax were predominantly mutual funds and investment companies, which generally pass all income to shareholders.
Because of this feature of their business, these companies are exempt from the book income and ACE adjustments. Differences Between Taxable Income and Financial Statement Income The measurement of income for financial statement purposes and measurement for tax purposes differ in important ways. These differences make it possible for the same corporation to report positive income for financial statement purposes (book income) and a loss for tax purposes, or the opposite. Some items of revenue and expense enter into the calculation of either taxable income or book income without ever affecting the other under current provisions of the tax laws. One example of a permanent difference between the two income measures is the treatment of income from tax-exempt securities. Corporations will include income from tax-exempt securities on their financial statements, but this income will never be included in taxable income. Another permanent difference is the treatment of dividends received by a corporation. For financial statements, dividends received are included in income. For tax purposes, only a fraction of dividends received are taxed. The purpose of the deduction for dividends received is to compensate in part for the lack of a deduction for dividends paid. Without a deduction for dividends received, income flowing through several corporations and ultimately to shareholders would be taxed at all levels. Some items of revenue and expense are eventually recognized by both tax and financial accounting but are recognized at different times. Book income before tax can exceed taxable income if (1) revenue is recognized for accounting purposes prior to its recognition on the tax return, or (2) expenses are recognized for accounting purposes after their deduction on the tax return. 
On the other hand, book income before tax can be less than taxable income if (1) revenue is recognized for accounting purposes after its inclusion on the tax return, or (2) expenses are recognized for accounting purposes prior to their deduction for tax purposes. In contrast to permanent differences, timing differences affect the timing of the recognition of income or expense; over time, the same amount of income and expense will be recognized for both book and tax purposes. How Different Are Tax and Book Income? To show how book income and taxable income are related, we calculated the percentage of corporations in each of the classes in table III.4. The first row shows the percentage of corporations that reported a positive amount of book income and a positive amount of net income on their tax returns in a particular year. The middle two rows of the table show the percentage of corporations that differ in the sign of the two income measures in the year. The last row shows the percentage of corporations that reported losses on both their financial statements and for tax purposes in the year. Table III.5 repeats this calculation after allowing for the deduction of dividends received and net operating losses from net income. The table shows that these two provisions have significant effects. In 1992, 13 percent of taxpayers with positive book income and positive current year net income reduced their current year taxable income to zero by using deductions for dividends received and prior year losses. Regular tax owed on taxable income is further reduced by any allowable credits. Table III.6 shows the percentage of corporations that have positive and zero regular tax liability while reporting positive or negative book income. As one goal of AMT is to get taxpayers with positive book income in a given year to pay tax in that year, its design must “undo” many of the differences between regular tax income and book income. 
Many of the preference items and the adjustments serve this purpose, as do the book income and ACE adjustments. Many AMT Payers Did Not Owe Any Regular Tax Table III.8 shows the percentage of AMT payers that also paid regular tax and the percentage that reported no regular tax liability. The percentages, which were consistent across time, show that about half of AMT payers owed regular tax as well as AMT. However, a significant percentage of AMT payers had no regular tax liability at the time they paid AMT. Table III.9 examines the relationship between regular tax status and AMT payment in more detail. The table groups AMT taxpayers into four categories. The first category includes those taxpayers that had positive taxable income and paid some regular tax. The second category covers those taxpayers with positive net income but no regular tax; these taxpayers had credits that could have eliminated all regular tax or sufficient NOL deductions to eliminate all taxable income. The third category is for those taxpayers with a current year regular tax loss. A small number of AMT payers paid regular tax but did not fall into one of the other categories. The table shows that the majority of AMT payers had positive taxable income and also owed regular tax. Fewer AMT payers had positive taxable income and owed no regular tax. A large percentage of AMT payers owed no regular tax due to net operating loss deduction carryforwards. A smaller but significant percentage of AMT payers had a current year regular tax loss but had positive AMTI leading to an AMT liability. Table III.10 shows the share of AMT liability that is raised from each of the groups shown in table III.9. Most AMT Payers Had Positive Book Income The legislative history of AMT indicates that Congress was concerned that confidence in the tax system could be undermined if corporations that reported significant income on their books paid no tax. 
Table III.11 shows that most AMT payers had positive book income, as might be expected because of the large percentage of AMT returns that included the book income and ACE adjustments. However, a significant percentage of AMT payers had negative book income. To determine whether AMT significantly reduced the number of taxpayers that reported positive income and paid no tax, we calculated the percentage of taxpayers with positive book income that paid AMT and had no regular tax liability. Table III.12 shows the tax status of those corporations that reported positive book income. Most taxpayers with positive amounts of book income paid regular tax. AMT had a very small effect on the overall percentage. However, AMT raised a significant amount of revenue from firms that reported book income and did not pay regular tax. Table III.13 shows the percentage of total AMT liability paid by corporations according to their regular tax and book income situation. Corporations with positive book income and no regular tax liability paid a significant portion of AMT. Why Did Companies With Positive Book Income Not Pay AMT? To determine why AMT had not forced all corporations with positive book income to pay some tax, we analyzed the information that was available for these corporations from their regular tax returns. The IRS database that we used had little AMT information for non-AMT payers. In particular, small taxpayers who qualify for the exemption are not required to file a Form 4626, so IRS does not have AMT information for these taxpayers. Without a Form 4626, we could not completely identify the reasons why firms would not be paying AMT. However, we were able to characterize these firms by their regular tax returns. About 98 percent of the corporations with positive book income and no tax payment were relatively small, having less than $10 million in assets. About 85 percent had less than $40,000 in net income. Thus, it is likely that they would qualify for the AMT exemption.
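The report cites only the $40,000 exemption amount. The phase-out in the sketch below (the exemption falls by 25 cents per dollar of AMTI above $150,000) is taken from the corporate AMT statute of the period and is not stated in the report.

```python
def corporate_amt_exemption(amti):
    """$40,000 exemption, reduced by 25 percent of AMTI above $150,000.
    Phase-out parameters are from the statute, not from the report."""
    return max(0.0, 40_000.0 - 0.25 * max(0.0, amti - 150_000.0))

print(corporate_amt_exemption(100_000))  # 40000.0 -- full exemption
print(corporate_amt_exemption(200_000))  # 27500.0 -- partially phased out
print(corporate_amt_exemption(310_000))  # 0.0 -- fully phased out
```

Under these parameters, a small corporation whose AMTI falls below the exemption owes no AMT at all, consistent with the finding that most of the no-tax firms with positive book income had net income under the $40,000 threshold.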
Most firms with $1 billion or more in assets were regulated investment companies (RIC) and real estate investment trusts (REIT), which are technically subject to AMT but are exempt from the book income and ACE adjustments. (See table III.14.) Has AMT Affected Corporate Investment? Studies and comments by economists on the potential effect of AMT on investment have considered two ways in which AMT might affect investment. First, by increasing the average tax rate, AMT could reduce cash flow, discouraging investment. Second, AMT could change the marginal tax rate, which is the additional tax owed from an additional dollar of income. If AMT changed the incentives to invest, this in turn could lead to changes in investment. The material that follows summarizes the results and ideas of the various studies and comments. Effects of AMT on Cash Flow and Investment Corporations can finance investment through internal funds (retained earnings or profits) or external funds, such as debt or new stock issues. If a corporation must pay significantly higher costs for borrowed funds or newly issued stock than the opportunity cost of retained earnings, investment could be sensitive to the current profitability or cash-flow position of the firm. In circumstances where securities markets do not have the same information as managers in evaluating the potential investments of the firm, firms that must borrow from the markets may have to pay a premium for funds. If such premiums had to be paid, potential investments that could be profitable if the firm had sufficient cash flow might not be profitable, and investment could be curtailed or delayed until sufficient cash flow was available. A number of recent studies have found significant effects of cash flow on investment, and some authors have concluded that some corporations find external funds significantly more expensive than internal funds. 
These studies have concluded that this is more likely to be the case for smaller firms, firms that pay relatively small amounts of dividends, firms without access to the corporate bond market, and firms that cannot use working capital to smooth investment spending over time. It is not clear how many AMT payers meet these conditions. No study has directly tested the extent to which such cash-flow constraints affect corporations that paid AMT. The tax return data we used were limited in their ability to directly test many of these factors. However, the data did show that most AMT is paid by relatively large corporations. To the extent that investment by large corporations is less dependent on current cash flow than is the case for small corporations, the effect of the AMT on investment would be limited. In addition, as AMT credits are reclaimed in the future, cash flow would increase at that time, possibly increasing investment. Taxes Affect Investment Incentives Several studies have analyzed the effects of AMT on incentives to invest. These studies have attempted to measure the extent to which AMT changes incentives to invest. While AMT increases the average tax rate paid by corporations, it may increase or decrease the marginal tax rate on new investment. A common approach to analyzing the effects of taxes on investment has been to calculate the extent that taxes increase the before-tax profit rate or pretax rate of return needed to generate a given after-tax profit or return on investment. Under these analyses, business income taxes have been found to effectively raise the price of investments. If investments cost more than they otherwise would, only those that earn relatively high profits over time will be worthwhile. One advantage to this type of analysis is that it can include all the features of the tax code that may affect the after-tax return to an investment. 
Researchers have studied how several business income tax provisions may affect incentives to invest. In particular, the incentives to invest can be affected through the tax rate, depreciation provisions, the deductibility or nondeductibility of interest payments and dividends, whether inflation is accounted for, loss provisions, and credits for certain types of investment. First, the lower the statutory business tax rate is, the lower is the cost of capital investments, and the greater is the incentive to invest. Second, the more accelerated the depreciation method and shorter the useful lives of business assets are, the lower is the cost of investment. For example, an immediate deduction of all investment spending (expensing) reduces the tax cost on investment to zero. Third, inflation can reduce the value of deductions that are based on historical cost. Indexing provisions would lower the cost of capital in times of inflation. Fourth, the deductibility or nondeductibility of sources of finance and the tax rates that apply to those sources in the individual income tax can affect the cost of investment. Fifth, the deductibility of prior-year losses from taxable income and whether such loss carryforwards earn interest to preserve their present value can affect the cost of capital. Finally, if tax credits are allowed for certain types of investment, the cost of those investments falls. As shown in table IV.1, relative to the regular tax, AMT has a lower rate, a generally slower depreciation schedule, and additional limitations on credits and losses. Since the lower tax rate by itself would lower the cost of investment but the other two features would raise the cost of investment, it is not immediately clear whether the cost of investment would rise or fall. An evaluation of the effects of AMT must include all these features. 
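The interaction of these features can be made concrete with a small sketch. The statutory rates of the period (34 percent regular, 20 percent AMT) are not stated in the report, and the present-value figures below are purely illustrative.

```python
REGULAR_RATE = 0.34  # top regular corporate rate of the period (assumption)
AMT_RATE = 0.20      # corporate AMT rate of the period (assumption)

def deduction_value(tax_rate, pv_of_deduction):
    """Present value of the tax saved by a deduction: every dollar
    deducted saves tax at the marginal rate."""
    return tax_rate * pv_of_deduction

# Depreciation shield per dollar invested: the regular schedule is
# assumed to be worth 0.85 in present value, the slower AMT schedule 0.80.
print(round(deduction_value(REGULAR_RATE, 0.85), 3))  # 0.289
print(round(deduction_value(AMT_RATE, 0.80), 3))      # 0.16
# Interest shield per dollar of interest paid (present value taken as 1.0):
print(deduction_value(REGULAR_RATE, 1.0))  # 0.34
print(deduction_value(AMT_RATE, 1.0))      # 0.2
```

The lower AMT rate lightens the tax on an investment's returns but also shrinks the value of every deduction, which is the mechanism behind the findings below that equity-financed investment can fare better, and heavily debt-financed investment worse, under AMT.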
Studies of AMT and Incentives to Invest The studies we reviewed found that relative to the regular tax, investment incentives can be increased or reduced by AMT, depending on several factors. In general, these studies focused on investment incentives for small projects that would not by themselves affect whether the corporation would be subject to the regular tax or AMT. For firms permanently paying AMT, the incentives to invest were found to be greater under AMT than the regular tax for investments financed by equity. In this case, the value of the lower tax rate more than offset slower depreciation deductions, so the effective tax rate was lower. On the other hand, investment incentives can be lower under AMT relative to the regular tax for debt-financed investments. Since interest is deductible under both AMT and the regular tax, the higher rate under the regular tax is a relative advantage because a dollar of interest payments will reduce taxes by a greater amount if the tax rate is higher. Since the regular tax code favors debt-financed over equity-financed investment at the corporate level because interest payments are deductible and dividends are not, AMT may reduce this distortion. For investments financed with a mixture of debt and equity, the effective rate under AMT can be higher or lower depending on the amount of debt used. For an investment with the average mix of approximately one-third debt, effective rates are higher under the regular tax than under AMT. The results cited above hold for firms that are either permanently paying only the regular tax or paying AMT. However, the effect of AMT on investment incentives is further complicated if firms switch back and forth from AMT status to regular tax status. 
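The debt-finance asymmetry can be illustrated with a small sketch. The 34% regular rate, 20% AMT rate, interest rate, and one-third debt share below are illustrative assumptions chosen to echo the discussion, not values taken from the studies:

```python
def interest_tax_shield(debt_share, interest_rate, tax_rate):
    # Annual tax saving, per dollar of investment, from deducting
    # interest on the debt-financed share of the project.
    return debt_share * interest_rate * tax_rate

# Assumed parameters: one-third debt, 8% interest.
shield_regular = interest_tax_shield(1 / 3, 0.08, 0.34)  # regular tax
shield_amt = interest_tax_shield(1 / 3, 0.08, 0.20)      # AMT

# The shield is larger under the higher regular-tax rate, which is why
# deductible interest is relatively more valuable outside AMT.
print(round(shield_regular, 5), round(shield_amt, 5))
```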
In this case, the cost of capital will depend on the timing of investment relative to the time during which AMT is paid and the length of time the firm pays AMT and recovers its credits, as well as the source of financing for the investment. Investment incentives will depend on the timing of investment because of the differences in the depreciation rules and the tax rates between the two systems. If depreciation deductions are taken when the firm is paying the regular tax, and income from the investment is received when the firm is paying AMT, the cost of investment is relatively low. If depreciation deductions are taken when the firm is paying AMT and income is taxed at the higher regular tax rate, the cost of investment is higher. A recent study also showed that AMT may change the incentives to invest in the United States or abroad. Since the AMT tax rate is lower than the regular tax rate, firms operating abroad may find that AMT status presents an opportunity to bring profits back to the United States and pay tax at a temporarily lower tax rate. If these additional profits are reinvested here, domestic investment may rise. On the other hand, the depreciation schedule under AMT is closer to that for foreign investment under the regular tax, narrowing the differential that exists under the regular tax. AMT may thus reduce the relative disincentive to invest abroad, encouraging more investment abroad than otherwise. The literature does not cover the effect of AMT on investment when an investment is large enough to potentially change the tax status of the firm from the regular tax to AMT or from a current net operating loss position to AMT. Some studies have examined investment incentives when corporations can be either in a net operating loss carryforward position or paying the regular tax. 
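The timing effect described above can be reduced to a simple two-period wedge. The 34% and 20% rates are illustrative stand-ins for the regular and AMT rates, and the time value of money is ignored for simplicity:

```python
def timing_wedge(deduction_rate, income_rate):
    # Net tax per dollar when a dollar of depreciation is deducted at
    # one regime's rate and a dollar of income is later taxed at the
    # other regime's rate.
    return income_rate - deduction_rate

# Deduct while paying the regular tax (34%), earn while on AMT (20%):
favorable = timing_wedge(0.34, 0.20)    # negative: net tax falls
# Deduct while on AMT (20%), earn back under the regular tax (34%):
unfavorable = timing_wedge(0.20, 0.34)  # positive: net tax rises
print(round(favorable, 2), round(unfavorable, 2))
```

This is the mechanism behind the statement that the cost of investment is low when deductions fall in regular-tax years and income in AMT years, and high in the reverse case.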
In this case, the size of the outstanding net operating loss carryforward affects incentives; a firm with a relatively small NOL carryforward is penalized for investment because of the loss of the time value of money on the loss. However, a large NOL carryforward could indicate that the firm will effectively be tax-exempt for the foreseeable future and investment may be encouraged. It is not clear at this time how AMT might change these incentives. How Sensitive Is Investment to the Price or Cost of Capital? The effect of AMT on investment is further complicated by the lack of consensus on how strongly investment responds to changes in the incentive to invest. Analysts have widely differing views on how responsive investment is to changes in tax rules. Some studies have concluded that investment is very responsive to changes in tax incentives, while others have found small effects. The difficulty stems from a lack of consensus on the nontax determinants of investment; without a clear model of the other determinants of investment, it is difficult to isolate the effects of taxes, holding other factors fixed. In particular, it has been difficult for investment models to isolate the effects of output and price. If output is the major determinant of investment as firms add capacity when output is growing, then investment may be relatively insensitive to the price of capital goods. If investment is sensitive to the price of capital goods, then taxes, including AMT, may have an important effect on investment by changing the effective price. Major Contributors to This Report General Government Division, Washington, D.C. Jose R. Oyola, Assistant Director, Tax Policy and Administration Issues Lawrence M. Korb, Assignment Manager Edward J. Nannenhorn, Economist-in-Charge Patricia H. McGuire, Senior Computer Specialist 
Pursuant to a congressional request, GAO provided information on the corporate alternative minimum tax (AMT), focusing on: (1) the corporations that paid AMT between 1987 and 1992; (2) whether AMT achieved its purpose; and (3) how AMT might affect corporate investment. GAO found that: (1) AMT accelerated tax payments of $27.4 billion and corporations used credits totaling $5.8 billion from 1987 to 1992; (2) at the end of 1992, corporations had accumulated $21.6 billion in credits that would result in lower tax revenues in the future; (3) of the 2.1 million corporations subject to AMT, 2,000 large corporations paid 85 percent of all AMT in 1992; (4) the two AMT provisions that produced the largest increases in taxable income were the depreciation adjustment, used by 87 percent of all AMT payers, and the adjusted current earnings adjustment, used by 67 percent of all AMT payers; (5) manufacturing, transportation, and finance industries paid the most AMT; (6) AMT has achieved its objectives of making profitable corporations pay tax and causing corporations that report positive amounts of income in a particular year to pay some tax in that year; (7) the effects of AMT on corporate investment are unclear due to insufficient data; and (8) while AMT might reduce present cash flows, future cash flows may be enhanced as taxpayers recover AMT credits.
Background Oil and gas reservoirs vary in their geological makeup, location, and size. Regardless of the reservoir, unconventional oil and gas development involves a number of activities, many of which are also conducted in conventional oil and gas drilling. This section describes the types and locations of oil and gas reservoirs and the key stages of oil and gas development. Types and Locations of Oil and Gas Reservoirs Oil and natural gas are found in a variety of geologic formations. In conventional reservoirs, oil and gas can flow relatively easily through a series of pores in the rock to the well. Shale and tight sandstone formations generally have low permeability and therefore do not allow oil and gas to easily flow to the well. Shale and tight sandstone formations can occur at varying depths, including thousands of feet beneath the surface. For example, the Bakken shale formation in North Dakota and Montana ranges from 4,500 to 11,000 feet beneath the surface. Coalbed methane formations, often located at shallow depths of several hundred to 3,000 feet, are generally formations through which gas can flow more freely; however, capturing the gas requires operators to pump water out of the coal formation to reduce the pressure and allow the gas to flow. Shale, tight sandstone, and coalbed methane formations are located within basins, which are large-scale geological depressions, often hundreds of miles across, which also may contain other oil and gas resources. There is no clear and consistently agreed upon distinction between conventional and unconventional oil and gas, but unconventional sources generally require more complex and expensive technologies for production, such as the combination of horizontal drilling and multiple hydraulic fractures. See figure 1 for a depiction of conventional and unconventional reservoirs. 
Unconventional reservoirs are located throughout the continental United States on both private lands and federal lands that are administered by BLM, Forest Service, Park Service, and FWS (see fig. 2). Developing unconventional reservoirs involves a variety of activities, many of which are also conducted in conventional oil and gas drilling. Siting and site preparation. The operator identifies a location for the well and prepares the area of land where drilling will take place—referred to as a well pad. In some cases, the operator will build new access roads to transport equipment to the well pad or install new pipelines to transport the oil or gas that is produced. In addition, the operator will clear vegetation from the area and may place storage tanks (also called vessels) or construct pits on the well pad for temporarily storing fluids (see fig. 3). In some cases, multiple wells will be located on a single well pad. Drilling, casing, and cementing. The operator conducts several phases of drilling to install multiple layers of steel pipe—called casing—and cement the casing in place. The layers of steel casing are intended to isolate the internal portion of the well from the outlying geological formations, which may include underground drinking water supplies. As the well is drilled deeper, progressively narrower casing is inserted further down the well and cemented in place. Throughout the drilling process, a special lubricant called drilling fluid, or drilling mud, is circulated down the well to lubricate the drilling assembly and carry drill cuttings (essentially rock fragments created during drilling) back to the surface. After vertical drilling is complete, horizontal drilling is conducted by slowly angling the drill bit until it is drilling horizontally. Horizontal stretches of the well typically range from 2,000 feet to 6,000 feet long but can be as long as 12,000 feet in some cases. Hydraulic fracturing. 
Operators sequentially perforate steel casing and pump a fluid mixture down the well and into the target formation at high enough pressure to cause the rock within the target formation to fracture. The sequential fracturing of a well can use between 2 million and 5.6 million gallons of water. Operators add a proppant, such as sand, to the mixture to keep the fractures open despite the large pressure of the overlying rock. About 98 percent of the fluid mixture used in hydraulic fracturing is water and sand, according to a report about shale gas development by the Ground Water Protection Council. The fluid mixture—or hydraulic fracturing fluid—generally contains a number of chemical additives, each of which is designed to serve a particular purpose. For example, operators may use a friction reducer to minimize friction between the fluid and the pipe, acid to help dissolve minerals and initiate cracks in the rock, and a biocide to eliminate bacteria in the water that cause corrosion. The number of chemicals used and their concentrations depend on the particular conditions of the well. After hydraulic fracturing, a mixture of fluids, gases, and dissolved solids flows to the surface (flowback), after which production can begin, and the well is said to have been completed. Operators use hydraulic fracturing in many shale and tight sandstone formations (see fig. 4). Some coalbed methane wells are hydraulically fractured (see fig. 5), but operators may use different combinations of water, sand, and chemicals than with other unconventional wells. In addition, operators must “dewater” coalbed methane formations in order to get the natural gas to begin flowing—a process that can generate large amounts of water. Well plugging. Once a well is no longer producing economically, the operator typically plugs the well with cement to prevent fluid migration from outlying formations into the well and to prevent downward drainage from inside the well. 
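The figures above imply a rough range for the chemical additives themselves. A back-of-the-envelope sketch, assuming the roughly 98 percent water-and-sand share applies across the whole range of job sizes:

```python
def additive_gallons(total_gallons, water_sand_share=0.98):
    # Gallons of chemical additives implied by the share of the
    # fracturing fluid that is water and sand (~98% per the Ground
    # Water Protection Council report cited above).
    return total_gallons * (1 - water_sand_share)

# For the 2 million to 5.6 million gallon jobs cited above, roughly
# 40,000 to 112,000 gallons of additives per well.
low = round(additive_gallons(2_000_000))
high = round(additive_gallons(5_600_000))
print(low, high)
```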
In some cases, wells may be temporarily plugged so that the operator has the option of reopening the well in the future. In some states with a long history of oil and gas development, wells drilled decades ago may not have been properly plugged—or the plug may have deteriorated. Site reclamation. Once the well is plugged, the operator takes steps to restore the site to make it acceptable for specific uses, such as farming. For example, reclamation may involve removing equipment from the well pad, closing pits, backfilling soil, and restoring vegetation. Sometimes, when a well starts production, operators reclaim the portions of a site affected by the initial drilling activity. Waste management and disposal. Throughout the drilling, hydraulic fracturing, and subsequent production activities, operators must manage and dispose of several types of waste. For example, operators must manage produced water, which, for purposes of this report includes flowback water—the water, proppant, and chemicals used for hydraulic fracturing—as well as water that occurs naturally in the oil- or gas-bearing geological formation. Operators temporarily store produced water in tanks or pits, and some operators may recycle it for reuse in subsequent hydraulic fracturing. Options for permanently disposing of produced water vary and may include, for example, injecting it underground into wells designated for such purposes. Operators also generate solid wastes such as drill cuttings and could potentially generate small quantities of hazardous waste. See table 1 for additional methods for managing and disposing of waste. We recently issued a report on the quantity, quality, and management of water produced during oil and gas production. See GAO, Energy-Water Nexus: Information on the Quantity, Quality, and Management of Water Produced during Oil and Gas Production, GAO-12-156 (Washington, D.C.: Jan. 9, 2012). 
Federal Environmental and Public Health Laws Apply to Unconventional Oil and Gas Development but with Key Exemptions Requirements from eight federal laws apply to the development of oil and gas from unconventional sources. In large part, the same requirements apply to conventional and unconventional oil and gas development. There are exemptions or limitations in regulatory coverage for preventive programs authorized by six of these laws, though EPA generally retains its authorities under federal environmental and public health laws to respond to environmental contamination. States may have regulatory programs related to some of these exemptions or limitations in federal regulatory coverage; state requirements are discussed later in this report. Eight Federal Environmental and Public Health Laws Apply to Unconventional Oil and Gas Development Parts of the following eight federal environmental and public health laws apply to unconventional oil and gas development:
Safe Drinking Water Act (SDWA)
Clean Water Act (CWA)
Clean Air Act (CAA)
Resource Conservation and Recovery Act (RCRA)
Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA)
Emergency Planning and Community Right-to-Know Act (EPCRA)
Toxic Substances Control Act (TSCA)
Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA)
There are exemptions or limitations in regulatory coverage related to the first six laws listed above. This section discusses each of these laws in brief; for more details about seven of these laws, please see appendixes II through VIII. SDWA is the main federal law that ensures the quality of drinking water. Two key aspects of SDWA that are part of the regulatory framework governing unconventional oil and gas development are the Underground Injection Control (UIC) program and the imminent and substantial endangerment provision. 
Under SDWA, EPA regulates the injection of fluids underground through its UIC program, including the injection of produced water from oil and gas development. The UIC program protects underground sources of drinking water by setting and enforcing standards for siting, constructing, and operating injection wells. Injection wells in the UIC program fall into six different categories based on the types of fluids being injected. The wells used to manage fluids associated with oil and gas production, including produced water, are Class II wells. EPA officials estimate there are approximately 151,000 permitted Class II UIC wells in operation in the United States. Two types of wells account for nearly all the Class II UIC wells in the United States (see fig. 7), as follows: Enhanced recovery wells inject produced water or other fluids or gases into oil- or gas-producing formations to increase the pressure in the formation and force additional oil or gas out of nearby producing wells. EPA documents estimate that about 80 percent of Class II wells are enhanced recovery wells. Disposal wells inject produced water or other fluids associated with oil and gas production into formations that are intended to hold the fluids permanently. EPA documents estimate that about 20 percent of Class II wells are disposal wells. UIC regulations include minimum federal requirements for most Class II UIC wells; these requirements are generally applicable only where EPA implements the program. For example, for most new Class II UIC wells, an operator must, among other things (1) obtain a permit from EPA or a state, (2) demonstrate that casing and cementing are adequate, and (3) pass an integrity test prior to beginning operation and at least once every 5 years. In addition, when proposing a new Class II UIC well, an operator must identify any existing water or abandoned production or injection wells generally within one-quarter mile of the proposed well. 
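EPA's estimates above imply approximate well counts for each type. A quick sketch applying the cited shares to the cited total (both are EPA estimates, so the results are rough orders of magnitude):

```python
total_class_ii = 151_000  # EPA estimate of permitted Class II UIC wells
shares = {"enhanced recovery": 0.80, "disposal": 0.20}  # EPA estimates

# Roughly 120,800 enhanced recovery wells and 30,200 disposal wells.
counts = {kind: round(total_class_ii * share) for kind, share in shares.items()}
print(counts)
```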
During the life of the Class II UIC well, the operator has to comply with monitoring requirements, including tracking the injection pressure, rate of injection, and volume of fluid injected. SDWA authorizes EPA to approve, by rule, a state to assume primary enforcement responsibility—called primacy—for the UIC program, which means that a state assumes responsibility for implementing its program, including permitting and monitoring UIC wells. Generally, to be approved for primacy, state programs must be at least as stringent as the federal program for each of the well classes for which primacy is sought; however, SDWA also includes alternative provisions for primacy related to Class II wells whereby, in lieu of adopting requirements consistent with EPA’s Class II regulations, a state can demonstrate to EPA that its program is effective in preventing endangerment of underground sources of drinking water. Five of the six states in our review (Colorado, North Dakota, Ohio, Texas, and Wyoming) have been granted primacy for Class II wells under the alternative provisions. Pennsylvania has not applied for primacy, so EPA directly implements the program there. Please see appendix IX for more information about UIC requirements in the six states in our review. As discussed, the UIC program regulates the injection of fluids underground. Historically, the UIC program was not used to regulate hydraulic fracturing, even though fracturing entails the injection of fluid underground. In 1994, in light of concerns that hydraulic fracturing of coalbed methane wells threatened drinking water, citizens petitioned EPA to withdraw its approval of Alabama’s Class II UIC program because the state failed to regulate hydraulic fracturing. The case ended up before the United States Court of Appeals for the Eleventh Circuit, which held that the definition of underground injection included hydraulic fracturing. 
The court’s decision was made in the context of hydraulic fracturing of a coalbed methane formation in Alabama but raised questions about whether hydraulic fracturing would be included in UIC programs nationwide. UIC regulations at the time and now provide that “[A]ny underground injection, except into a well authorized by rule or except as authorized by permit issued under the UIC program, is prohibited.” 40 C.F.R. § 144.11 (2005) (2011). The Energy Policy Act of 2005 generally excluded hydraulic fracturing from SDWA’s definition of underground injection, but the provision did not exempt injections of diesel fuel during hydraulic fracturing. EPA’s position is that underground injection of diesel fuel as part of hydraulic fracturing requires a UIC permit or authorization by rule, and EPA has issued draft guidance for permitting such injections under the UIC program. The guidance is directed at EPA permit writers in states where EPA directly implements the program; the guidance does not address state-run UIC programs (including five of the six states in our review). EPA’s draft guidance is applicable to any oil and gas wells using diesel in hydraulic fracturing (not just coalbed methane wells). The draft guidance provides recommendations related to permit applications, area of review (for other nearby wells), well construction, permit duration, and well closure. SDWA also gives EPA authority to issue orders when the agency receives information about present or likely contamination of a public water system or an underground source of drinking water that may present an imminent and substantial endangerment to human health. In December 2010, EPA used this authority to issue an emergency administrative order to an operator in Texas alleging that the company’s oil and gas production facilities near Fort Worth, Texas, caused or contributed to methane contamination in two nearby private drinking water wells. EPA contended that this methane contamination posed an explosion hazard and therefore was an imminent and substantial threat to human health. 
EPA’s order required the operator to take six actions, specifically: (1) notify EPA whether it intended to comply with the order, (2) provide replacement water supplies to landowners, (3) install meters to monitor for the risk of explosion at the affected homes, (4) conduct a survey of any additional private water wells within 3,000 feet of the oil and gas production facilities, (5) develop a plan to conduct soil and indoor air monitoring at the affected dwellings, and (6) develop a plan to investigate how methane flowed into the aquifer and private drinking water wells. The operator disputed the validity of EPA’s order and noted that the order does not provide any way for the company to challenge EPA’s findings. Nevertheless, the operator implemented the first three actions EPA listed in the order. In January 2011, EPA sued the operator in federal district court, seeking to enforce the remaining three provisions of the order. In March 2011, the regulatory agency that oversees oil and gas development in Texas held a hearing examining the operator’s possible role in the contamination of the water wells and issued an opinion in which it concluded that the operator had not caused the contamination. In March 2012, EPA withdrew the original emergency administrative order, and the operator agreed to continue monitoring 20 private water wells near its production sites for 1 year. According to EPA officials, resolving the lawsuit allows the agency to shift its focus away from litigation and toward a joint EPA-operator effort in monitoring. For more details about SDWA, please see appendix II. To restore and maintain the nation’s waters, CWA authorizes EPA to, among other things, regulate pollutant discharges and respond to spills affecting rivers and streams. Several aspects of CWA are applicable to oil and gas well pad sites, but statutory exemptions limit EPA’s regulatory authority. 
Several elements of CWA and implementing regulations are relevant to oil and gas development from onshore unconventional sources. First, the National Pollutant Discharge Elimination System (NPDES) program regulates industrial sites’ wastewater and stormwater discharges to waters of the United States (surface waters). Second, spill reporting and spill prevention and response planning requirements pertain to certain threats to U.S. navigable waters and adjoining shorelines. In addition, under certain circumstances, EPA has response authorities; for example, it can generally bring suit or take other actions to protect the public health and welfare from actual or threatened discharges of oil or hazardous substances to U.S. navigable waters and adjoining shorelines. EPA’s NPDES program limits the types and amounts of pollutants that industrial sites, industrial wastewater treatment facilities, and municipal wastewater treatment facilities (often called publicly-owned treatment works or POTWs) can discharge into the nation’s surface waters by requiring these facilities to have and comply with permits listing pollutants and their discharge limits. As required by CWA, EPA develops effluent limitations for certain industrial categories based on available control technologies and other factors to prevent or treat the discharge. EPA established multiple subcategories for the oil and gas industry; relevant here are: (1) onshore, (2) agricultural and wildlife water use, and (3) stripper wells—that is, wells that produce relatively small amounts of oil. For the onshore and agricultural and wildlife water use subcategories, EPA established effluent limitations guidelines for direct dischargers that establish minimum requirements to be used by EPA and state NPDES permit writers. Specifically, the onshore subcategory has a zero discharge limit for discharges to surface waters, meaning that no direct discharges to surface waters are allowed. 
EPA documents explain that this is because there are technologies available—such as underground injection—to dispose of produced water generated at oil and gas well sites without directly discharging it to surface waters. Given that the NPDES permit limit would be “no discharge,” EPA officials said that they were unaware of any instances in which operators had applied for these permits. EPA officials did mention, however, an instance in which an operator discharged produced water to a stream and was fined by EPA under provisions in CWA. For example, in 2011, EPA Region 6 assessed an administrative civil penalty against a company managing an oil production facility in Oklahoma for discharging brine and produced water to a nearby stream. The company ultimately agreed to pay a $1,500 fine and conduct an environmental project, which included extensive soil remediation near the facilities. Effluent limitations guidelines for the agricultural and wildlife water use subcategory cover a geographical subset of wells in the west in which the quality of produced water from the wells is good enough for watering crops and livestock or to support wildlife in streams. The effluent limitations guideline for this subcategory allows such discharges of produced water for these purposes as long as the water meets a minimum quality standard for oil and grease. EPA officials identified 349 facilities with discharge permits in this subcategory. Officials also stated that individual permits may contain limits for pollutants other than oil and grease. EPA has not established effluent limitations guidelines for stripper wells, and EPA and state NPDES permit writers currently use their best professional judgment to determine the effluent limits for permits on a case-by-case basis. 
EPA explained in a 1976 Federal Register notice that unacceptable economic impacts would occur if the agency developed effluent limitations guidelines for stripper wells and that the agency could revisit this decision at a later date. In July 2012, EPA officials confirmed that the agency currently has no plans to develop an effluent limitations guideline for stripper wells. EPA also has not established effluent limitations guidelines for coalbed methane wells and EPA and state NPDES permit writers currently use their best professional judgment to determine the effluent limits for permits on a case-by-case basis. EPA officials explained that the process of extracting natural gas from coalbed methane formations is fundamentally different from traditional oil and gas development, partly because of the large volume of water that must be removed from the coalbed methane formation prior to production. Given these differences, coalbed methane wells are not included in any of EPA’s current subcategories. EPA announced in 2011 that, based on a multiyear study of the coalbed methane industry, the agency will develop effluent limitations guidelines for produced water discharges from coalbed methane formations. In the course of developing these guidelines, EPA officials told us that they will analyze the economic feasibility of each of the available technologies for disposing of the large volumes of produced water from coalbed methane wells and that EPA plans to issue proposed guidelines in the summer of 2013. In addition to setting effluent limitations guidelines for direct discharges of pollutants to surface waters, CWA requires EPA to develop regulations that establish pretreatment standards. These standards apply when wastewater is sent to a POTW before being discharged to surface waters, and the standards must prevent the discharge of any pollutant that would interfere with, or pass through, the POTW. 
To date, EPA has not set pretreatment standards specifically for produced water, though there are some general requirements; for example, discharges to POTWs cannot cause the POTW to violate its NPDES permit or interfere with the treatment process. In October 2011, EPA announced its intention to develop pretreatment standards specific to the produced water from shale gas development. EPA officials told us that the agency intends to conduct a survey and use other methods to collect additional data and information to support this rulemaking. Officials expect to publish the first Federal Register notice about the survey by the end of 2012 and to publish a proposed rule in 2014. In addition to CWA’s requirement for NPDES permits for discharges from industrial sites, the 1987 Water Quality Act amended CWA to establish a specific program for regulating stormwater discharges, such as those related to rainstorms, though oil and gas well sites are largely exempt from these requirements. EPA generally requires that facilities get NPDES permits for discharges of stormwater associated with industrial and construction activities, but the Water Quality Act of 1987 specifically exempted oil and gas production sites from permit requirements for stormwater discharges, as long as the stormwater was not contaminated by, for example, raw materials or waste products. Under this exemption and EPA’s implementing regulations, oil and gas well sites are only required to get NPDES permits for stormwater discharges if the facility has had a discharge of contaminated stormwater that includes a reportable quantity of a pollutant or contributes to the violation of a water quality standard. The 2005 Energy Policy Act expanded the language of the exemption to include construction activities at oil and gas well sites, meaning that uncontaminated stormwater discharges from oil and gas construction sites also do not require NPDES permits. 
So while other industries must generally obtain NPDES permits for construction activities that disturb an acre or more of land, operators of oil and gas well sites are generally not required to do so. The 1987 Water Quality Act also exempted oil and gas processing, treatment, and transmission facilities from permit requirements for stormwater discharges. CWA also requires operators to report certain oil spills to the National Response Center, which is managed by the U.S. Coast Guard and serves as the sole federal point of contact for reporting oil and chemical spills in the United States. Oil discharges must be reported if they cause a film or sheen on the surface of the water or shorelines or if they violate water quality standards. The National Response Center shares information about spills with other agencies, including EPA Regional offices, which allows EPA to follow up on reported spills, as appropriate. CWA also authorized spill prevention and response planning requirements as promulgated in the Spill Prevention, Control, and Countermeasure (SPCC) rule. Facilities that are subject to SPCC rules are required to prepare and implement a plan describing, among other things, how they will control, contain, clean up, and mitigate the effects of any oil discharges that occur. Onshore oil and gas well sites, among others, are subject to this rule if they have total aboveground oil storage capacity greater than 1,320 gallons and could reasonably be expected, based on location, to discharge oil into U.S. navigable waters or on adjoining shorelines. The amount of oil storage capacity at oil and gas well sites tends to vary based on whether the well is being drilled, hydraulically fractured, or has entered production. For example, during drilling at well sites located near these waters, operators generally have to comply with SPCC requirements if fuel tanks for the drilling rig exceed the 1,320 gallon threshold. 
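The SPCC applicability test described above combines a capacity threshold with a location criterion. The following is a minimal sketch of that two-part test; the function name and inputs are illustrative, not drawn from EPA’s rule text:

```python
def spcc_plan_required(aboveground_storage_gal: float,
                       could_reach_navigable_waters: bool) -> bool:
    """Sketch of the SPCC applicability test for an onshore well site.

    A site is subject to the rule only if BOTH conditions hold:
    - total aboveground oil storage capacity exceeds 1,320 gallons, and
    - the site could reasonably be expected, based on location, to
      discharge oil into U.S. navigable waters or adjoining shorelines.
    """
    return aboveground_storage_gal > 1320 and could_reach_navigable_waters

# A drilling rig with a 2,000-gallon fuel tank near a stream: covered.
assert spcc_plan_required(2000, True) is True
# The same tank far from any navigable water: not covered.
assert spcc_plan_required(2000, False) is False
# A 1,000-gallon tank near a stream: under the capacity threshold.
assert spcc_plan_required(1000, True) is False
```

This illustrates why coverage shifts over a well’s life cycle: storage capacity changes as the site moves from drilling to production, so the same location can move in and out of the rule’s scope.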
According to EPA officials, nearly all drill rigs have fuel tanks larger than 1,320 gallons, and so most well sites are subject to the SPCC rule during drilling if they are near these waters. Oil and gas well sites that are subject to the SPCC rule were required to comply by November 2011 or before starting operations. In accordance with CWA, EPA directly administers the SPCC program rather than delegating authority to states. EPA regulations generally do not require facilities to report SPCC information to EPA, including whether or not they are regulated. As a result, EPA does not know the universe of SPCC-regulated facilities. To ensure that regulated facilities are meeting SPCC requirements, EPA Regional personnel may inspect these facilities to evaluate their compliance. EPA officials said that some of these inspections were conducted as follow-up after spills were reported and that most inspections are conducted during the production phase, since drilling and hydraulic fracturing are of much shorter durations, making it difficult for inspectors to visit these sites during those times. According to EPA officials, Regional personnel inspected 120 oil and gas well sites nationwide in fiscal year 2011 and found noncompliance at 105 of these sites. These violations ranged from paperwork inconsistencies to more serious violations, such as a lack of secondary containment around stored oil or failure to implement an SPCC plan (though EPA officials were unable to specifically quantify the number of more serious violations). EPA officials said that EPA has addressed some of the 105 violations through enforcement actions. CWA also provides EPA with authorities to address the discharge of pollutants and to address actual or threatened discharges of oil or hazardous substances in certain circumstances. For example, under one provision, EPA has the authority to address actual or threatened discharges of oil or hazardous substances into U.S. 
navigable waters or on adjoining shorelines upon a determination that there may be an imminent and substantial threat to the public health or welfare of the United States, by bringing suit or taking other action, including issuing administrative orders that may be necessary to protect public health and welfare. Under another provision, EPA has authority to obtain records and access to facilities, among other things, in order to determine if a person is violating certain CWA requirements. For example, EPA conducted initial investigations in Bradford County, Pennsylvania, following a 2011 spill of hydraulic fracturing and other fluids that entered a stream. Citing its authority under CWA and other laws, EPA requested information from the operator about the incident, including information about the chemicals involved and the environmental effects of the spill. Meanwhile, the Pennsylvania Department of Environmental Protection signed a consent order and agreement with the operator in 2012 that required the operator to pay fines and implement a monitoring plan for the affected stream. For more details about CWA, please see appendix III. CAA, a federal law that regulates air pollution from mobile and stationary sources, was enacted to improve and protect the quality of the nation’s air. Under CAA, EPA sets national ambient air quality standards for the six criteria pollutants––ground level ozone, carbon monoxide, particulate matter, sulfur dioxide, nitrogen oxides, and lead––at levels it determines are necessary to protect public health and welfare. States then develop state implementation plans (SIP) to establish how the state will attain air quality standards, through regulation, permits, policies, and other means. States must obtain EPA approval for SIPs; if a SIP is not acceptable, EPA may assume responsibility for implementing and enforcing CAA in that state. CAA also authorizes EPA to regulate emissions of hazardous air pollutants, such as benzene. 
In addition, under CAA, EPA requires reporting of greenhouse gas emissions from a variety of sources, including oil and gas wells. In accordance with CAA, EPA has progressively implemented more stringent diesel emissions standards since 1984 to lower the amount of key pollutants from mobile diesel-powered engines. These standards apply to a variety of on- and off-road diesel-powered engines, including trucks used in the oil and gas industry to move materials to and from well sites and compressors used to drill and hydraulically fracture wells. Diesel exhaust contains nitrogen oxides and particulate matter. Emissions standards may set limits on the amount of pollution a vehicle or engine can emit or establish requirements about how the vehicle or engine must be maintained or operated, and they generally apply to new vehicles. For example, the most recent emissions standards for construction equipment began to take effect in 2008 and required a 95 percent reduction in nitrogen oxides and a 90 percent reduction in particulate matter from previous standards. EPA estimates that millions of older mobile sources—including on-road and off-road engines and vehicles—remain in use. Over time, these older sources are projected to be taken out of use and replaced by lower-emission vehicles, ultimately reducing emissions from mobile sources. New Source Performance Standards (NSPS) apply to new stationary facilities or modifications to stationary facilities that result in increases in air emissions and focus on criteria air pollutants or their precursors. For the oil and gas industry, the key pollutant is volatile organic compounds, a precursor to ground level ozone formation. 
Prior to 2012, EPA’s NSPS were unlikely to affect oil and gas well sites because (1) EPA had not promulgated standards directly targeting well sites and (2) to the extent that EPA promulgated standards for equipment that may be located at well sites, the capacity of equipment located at well sites was generally too low to trigger the requirement. For example, in 1987, EPA issued NSPS for storage vessels containing petroleum liquids; however, the standards apply only to tanks above a certain size, and EPA officials said that most storage tanks at oil and gas sites are below the threshold. In April 2012, EPA promulgated NSPS for the oil and natural gas production industry that, when fully phased in by 2015, will require reductions of volatile organic compound emissions at oil and gas well sites, including wells using hydraulic fracturing. Specifically, these new standards address pneumatic controllers, well completions, and certain storage vessels, as follows:

Pneumatic controllers. According to EPA, when pneumatic controllers are powered by natural gas, they may release natural gas and volatile organic compounds during normal operations. The new standards set limits for the amount of gas (as a surrogate for volatile organic compound emissions) that new and modified pneumatic controllers can release per hour. EPA’s regulatory impact analysis for the NSPS estimates that about 13,600 new or modified pneumatic controllers will be required to meet the standard annually; EPA also estimates that the oil and gas production sector currently uses about 400,000 pneumatic controllers.

Well completions for hydraulically fractured natural gas wells. EPA’s NSPS for well completions focus on reducing the venting of volatile organic compounds during flowback after hydraulic fracturing. According to EPA’s regulatory impact analysis, natural gas well completions involving hydraulic fracturing vent approximately 230 times more natural gas and volatile organic compounds than natural gas well completions that do not involve hydraulic fracturing. The regulatory impact analysis attributes these emissions to the practice of routing flowback of fracture fluids and reservoir gas to a surface impoundment (pit), where natural gas and volatile organic compounds escape to the atmosphere. To reduce the release of volatile organic compounds from hydraulically fractured natural gas wells, EPA’s new rule will require operators to use “green completion” techniques to capture and treat flowback emissions so that the captured natural gas can be sold or otherwise used. EPA’s regulatory impact analysis for the rule estimates that more than 9,400 wells will be required to meet the new standard annually.

Storage vessels. Storage vessels are used at well sites (and in other parts of the oil and gas industry) to store crude oil, condensate, and produced water. These vessels emit gas and volatile organic compounds when they are being filled or emptied and with changes in temperature. EPA’s NSPS rule will require storage vessels that emit more than 6 tons per year of volatile organic compounds to reduce these emissions by at least 95 percent. EPA’s regulatory impact analysis for the rule estimates that approximately 300 new storage vessels used by the oil and gas industry will be required to meet the new standards annually. EPA officials said they anticipate that most of these storage vessels will be located at well sites.

EPA also regulates hazardous air pollutants emitted by stationary sources. In accordance with the 1990 amendments to CAA, EPA does this by identifying categories of industrial sources of hazardous air pollutants and requiring those sources to comply with emissions standards, such as by installing controls or changing production practices. 
These National Emission Standards for Hazardous Air Pollutants (NESHAP) for each industrial source category include standards for major sources, which are defined as sources with the potential to emit 10 tons or more per year of a hazardous air pollutant or 25 tons or more per year of a combination of pollutants, as well as for area sources, which are sources of hazardous air pollutants that are not defined as major sources. Generally, EPA or state regulators can aggregate emissions from related or nearby equipment to determine whether the unit or facility should be regulated as a major source. However, in determining whether the oil or gas well is a major source of hazardous air pollutants, CAA expressly prohibits aggregating emissions from oil and gas wells (with their associated equipment) and emissions from pipeline compressors or pumping stations. EPA initially promulgated a NESHAP for oil and natural gas production facilities for major sources in 1999 and promulgated amendments in April 2012. NESHAPs generally identify emissions points that may be present at facilities within each industrial source category. The source category for oil and natural gas production facilities includes oil and gas well sites and other oil and gas facilities, such as pipeline gathering stations and natural gas processing plants. The NESHAP for the oil and natural gas production facilities major source category includes emissions points (or sources) that may or may not normally be found at well sites at sizes that would tend to meet the major source threshold. EPA officials in each of the four Regions we contacted were unaware of any specific examples of oil and natural gas wells being regulated as major sources of hazardous air pollutants before the April 2012 amendments. These amendments, however, changed a key definition used to determine whether a facility (such as a well site) is a major source. 
Specifically, EPA modified the definition of the term “associated equipment” such that emissions from all storage vessels and glycol dehydrators (used to remove water from gas) at a facility will be counted toward determining whether a facility is a major source. EPA’s regulatory impact analysis and other technical support documents for the April 2012 amendments did not estimate how many oil and natural gas well sites would be considered major sources under the new definition. EPA also promulgated a NESHAP for oil and natural gas production facilities for area sources in 2007. The 2007 area source rule addresses emissions from one emissions point, triethylene glycol dehydrators, which are used to remove water from gas. Triethylene glycol dehydrators can be located at oil and gas well sites or other oil and gas facilities, such as natural gas processing plants. Area sources are required to notify EPA that they are subject to the rule, but EPA does not track whether the facilities providing notification are well sites or other oil and natural gas facilities, so it is difficult to determine to what extent oil and gas well sites are subject to the area source NESHAP. In addition to specific programs for regulating hazardous air pollutants, CAA establishes that operators of stationary sources that produce, process, store, or handle listed or extremely hazardous substances have a general duty to identify hazards that may result from accidental releases, take steps needed to prevent such releases, and minimize the consequences of such releases when they occur. Methane is one of many hazardous substances of concern due to their flammable properties. Some EPA Regional officials said that they use infrared video cameras to conduct inspections to identify leaks of methane from storage tanks or other equipment at well sites. 
For example, EPA Region 6 officials said they have conducted 45 inspections at well sites from July 2010 to July 2012 and issued 10 administrative orders related to violations of the CAA general duty clause. (EPA Region 6 includes the states of Arkansas, Louisiana, New Mexico, Oklahoma, and Texas.) These officials said that all well sites are required to comply with the general duty clause but that EPA prioritizes and selects sites for inspections based on risk. CAA also requires EPA to publish regulations and guidance for chemical accident prevention at facilities using substances that pose the greatest risk of harm from accidental releases; the regulatory program is known as the Risk Management Program. The extent to which a facility is subject to the Risk Management Program depends on the regulated substances present at the facility and their quantities, among other things. EPA’s list of regulated substances and their thresholds for the Risk Management Program was initially established in 1994 and has been revised several times. The regulated chemicals that may be present at oil and gas well sites include components of natural gas (e.g., butane, propane, methane, and ethane). However, a 1998 regulatory determination from EPA provided an exemption for naturally occurring hydrocarbon mixtures (i.e., crude oil, natural gas, condensate, and produced water) prior to entry into a natural gas processing plant or petroleum refinery; EPA explained at the time that these chemicals do not warrant regulation and that the general duty clause would apply in certain risky situations. Since naturally occurring hydrocarbons at well sites generally have not entered a processing facility, they are not included in the threshold determination of whether the well site should be subject to the Risk Management Program. 
EPA officials said that generally, unless other flammable or toxic regulated substances were brought to the site, well sites would not trip the threshold quantities for the risk management regulations. In September 2011, the U.S. Chemical Safety and Hazard Investigation Board (Chemical Safety Board) released a report describing 26 incidents involving fatalities or injuries related to oil and gas storage tanks located at well sites from 1983 through 2010. The report found that these accidents occurred when the victims—all young adults—gathered at rural unmanned oil and gas storage sites lacking fencing and warning signs and concluded that such sites pose a public safety risk. The report also noted that exploration and production storage tanks are exempt from the Risk Management Program requirements of CAA and recommended that EPA publish a safety alert to owners and operators of exploration and production facilities with flammable storage tanks advising them of their CAA general duty clause responsibilities, and encouraging specific measures to reduce these risks. The Chemical Safety Board requested that EPA provide a response stating how EPA will address the recommendation. EPA responded in June 2012, stating its intent to comply with the recommendation. As of 2012, oil and natural gas production facilities are required to report their greenhouse gas emissions to EPA on an annual basis as described in EPA’s greenhouse gas reporting rule. 
According to EPA documents, oil and gas well sites may emit greenhouse gases, including methane and carbon dioxide, from sources including (1) combustion sources, such as engines used on site, which typically burn natural gas or diesel fuel, and (2) indirect sources, such as equipment leaks and venting. EPA’s greenhouse gas reporting rule requires oil and gas production facilities (defined in regulation as all wells in a single basin that are under common ownership or control) that emit more than 25,000 metric tons of carbon dioxide equivalent at the basin level to report their annual emissions of carbon dioxide, methane, and nitrous oxide from equipment leaks and venting, gas flaring, and stationary and portable combustion. EPA documents estimate that emissions from approximately 467,000 onshore wells are covered under the rule. For more details about CAA, please see appendix IV. RCRA, passed in 1976, established EPA’s authority to regulate the generation, transportation, treatment, storage, and disposal of hazardous wastes. Subsequently, the Solid Waste Disposal Act Amendments of 1980 created a separate process by which oil and gas exploration and production wastes, including those originating within a well, would not be regulated as hazardous unless EPA conducted a study of wastes associated with oil and gas development and then determined that such wastes warranted regulation as hazardous waste, followed by congressional approval of the regulations. EPA conducted the study and, in 1988, issued a determination that regulation of oil and gas exploration and production wastes as hazardous was not warranted. Based on this EPA determination, drilling fluids, produced water, and other wastes associated with the exploration, development, or production of oil or gas are not regulated as hazardous. According to EPA guidance issued in 2002, these exempt wastes include wastes that come from within the well, as well as wastes generated from field operations. 
Conversely, wastes generated from other activities at well sites may be regulated as hazardous. For example, discarded unused hydraulic fracturing fluids and painting wastes, among others, may be present at well sites and are “non-exempt,” and could be regulated as hazardous, depending on the specific characteristics of the wastes. Facilities that generate more than 100 kilograms (220 pounds) of hazardous waste per month are regulated as generators of hazardous waste and, among other things, are required to have an EPA identification number and to use the RCRA manifest system for tracking hazardous waste. Facilities generating smaller quantities of hazardous waste are not subject to these requirements. EPA headquarters officials said they do not have data on how many well sites may be hazardous waste generators, but that states may have more information about quantities of hazardous wastes at well sites. As such, we asked state officials responsible for waste programs whether they were aware of well sites being classified as small-quantity hazardous waste generators, and officials in all six states we reviewed indicated that they were unaware of well sites having sufficient quantities of hazardous wastes to be subject to those regulations. In September 2010, the Natural Resources Defense Council submitted a petition to EPA requesting that the agency regulate waste associated with oil and gas exploration and production as hazardous. The petition asserts that EPA should revisit the 1988 determination not to regulate these wastes as hazardous because, among other things, EPA’s underlying assumptions concerning the availability of alternative disposal practices, the adequacy of state regulations, and the potential for economic harm to the oil industry are no longer valid. According to EPA officials, the agency is currently reviewing the information provided in the petition but does not have a time frame for responding. 
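The generator threshold described above is a simple monthly-quantity test. The following is a minimal sketch; the names are illustrative, the actual RCRA generator categories are more granular than shown, and exempt exploration and production wastes do not count toward the total because they are not regulated as hazardous:

```python
def rcra_generator_regulated(hazardous_waste_kg_per_month: float) -> bool:
    """Sketch of the threshold described in the text: facilities generating
    more than 100 kilograms (220 pounds) of hazardous waste per month are
    regulated as generators and must, among other things, obtain an EPA
    identification number and use the RCRA manifest system."""
    return hazardous_waste_kg_per_month > 100

# e.g., a site discarding 150 kg/month of unused fracturing fluids that
# are characterized as hazardous (a hypothetical quantity): regulated.
assert rcra_generator_regulated(150) is True
# A site generating smaller quantities is not subject to these requirements.
assert rcra_generator_regulated(40) is False
```

As the state officials quoted above indicated, well sites in the six states reviewed were not known to cross this threshold, so the test rarely comes into play in practice.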
RCRA also authorizes EPA to issue administrative orders, among other things, in cases where handling, treatment, or storage of hazardous or solid waste may present an imminent and substantial endangerment to health or the environment. EPA has used RCRA’s imminent and substantial endangerment authorities related to oil and gas well sites. For example, EPA Region 8 issued RCRA imminent and substantial endangerment orders to operators in Wyoming after discovering that pits near oil production sites were covered with oil and posed a hazard to birds. For more details about RCRA, please see appendix V. Congress passed CERCLA in 1980 to protect public health and the environment by addressing the cleanup of hazardous substance releases. CERCLA establishes a system governing the reporting and cleanup of releases of hazardous substances and provides the federal government the authority to respond to actual and threatened releases of hazardous substances, pollutants, and contaminants that may endanger public health and the environment. CERCLA requires operators of oil and gas sites to report certain releases of hazardous substances and gives EPA authority to respond to certain releases but excludes releases of petroleum (e.g., crude oil and other petroleum products) from these provisions. As previously discussed, releases of petroleum products are covered by CWA if the release threatens U.S. navigable waters or adjoining shorelines. EPA officials identified some instances of petroleum spills in dry areas that did not reach surface waters and explained that EPA had no role related to the investigation or cleanup of these incidents. We identified regulatory provisions in five of six states requiring cleanup of oil spills even if they do not reach surface waters. For hazardous substances, CERCLA has two key elements relevant for the unconventional oil and gas industry: release reporting and EPA’s investigative and response authority. 
Similar to the requirements to report oil spills under CWA, CERCLA requires operators to report releases of hazardous substances above reportable quantities to the National Response Center. The National Response Center shares information about spills with other agencies, including EPA Regional offices, which allows EPA the opportunity to follow up on reported spills. EPA also has investigative and response authority under CERCLA, including provisions allowing EPA broad access to information and the authority to enter property to conduct an investigation or a removal of contaminated material. EPA has the following authorities, among others: Investigative. EPA may conduct investigations—including activities such as monitoring, surveying, and testing—in response to actual or threatened releases of hazardous substances or pollutants or contaminants. EPA can also require persons to provide information about alleged releases or threat of release. EPA officials described several instances in which the agency used CERCLA’s investigative and information gathering authorities relating to alleged hazardous substance releases from oil and gas well sites. For example, EPA used CERCLA authority to investigate private water well contamination potentially related to nearby shale gas well sites in Dimock, Pennsylvania. In addition, EPA is currently using the same authority to investigate private water well contamination potentially related to tight sandstone well sites in Pavillion, Wyoming. Response. EPA has the authority to respond to releases of hazardous substances itself and to issue administrative orders requiring a company potentially responsible for a release of hazardous substances, which may pose an imminent and substantial endangerment, to take response actions, as well as to seek relief in a federal court. 
EPA officials could not provide a recent example where the agency used this authority to issue an administrative order at a well site, but EPA used the response authority to conduct sampling and to provide temporary drinking water to residents with contaminated wells in Dimock, Pennsylvania. For more details about CERCLA, please see appendix VI. Among other things, EPCRA provides individuals and their communities with access to information regarding storage or release of certain chemicals in their communities. Two provisions of EPCRA—release notification and chemical storage reporting—apply to oil and gas well sites. The release notification provisions require companies that produce, use, or store certain chemicals to notify state and local emergency planning authorities of certain releases that would affect the community. Spills that are strictly on-site would not have to be reported under EPCRA but may still have to be reported to the National Response Center under provisions of CWA or CERCLA. In addition, companies would have to comply with EPCRA’s chemical storage reporting provisions, which require facilities storing or using hazardous or extremely hazardous chemicals over certain thresholds to submit an annual inventory report including detailed chemical information to state and local emergency planning authorities and the local fire department. When asked whether oil and gas well sites would commonly trigger EPCRA’s release notification and chemical storage reporting requirements, EPA officials said these requirements could be triggered at every well site. EPCRA also established the Toxics Release Inventory (TRI)––a publicly available database containing information about chemical releases from more than 20,000 industrial facilities––but EPA regulations for the TRI do not require oil and gas well sites to report to TRI. 
Specifically, these provisions of EPCRA generally require certain facilities that manufacture, process, or otherwise use any of more than 600 listed chemicals to report annually to EPA and their respective states on chemicals used above threshold quantities; the amounts released to the environment; and whether they were released into the air, water, or soil. EPCRA specified certain industries subject to the reporting requirement—which did not include oil and gas exploration and development—and also provided authority for EPA to add or delete industries going forward. EPA issued regulations to implement the TRI in 1988 and chose not to change the list of industries subject to the provision at that time. In 1997, EPA promulgated a rule adding seven industry groups to the list of industries required to report releases to the TRI, including coal mining and electrical utilities that combust coal and/or oil. In developing the 1997 rule, EPA considered including oil and gas exploration and production but did not do so because, according to EPA’s notice in the Federal Register for the final rule, there were concerns about how “facility” would be defined for this industry. At that time, EPA’s stated rationale was that the oil and gas exploration and production industry is unique in that it may have related activities over a large geographic area and that, while together these activities may involve the management of chemicals regulated by the TRI program, taken at the smallest unit—an individual well—the chemical and other thresholds are unlikely to be met. According to EPA officials, EPA is in the preproposal stage of developing a new rule to add additional industrial sectors into the TRI program but is not planning to include the oil and gas exploration and production industry. 
EPA officials said that adding oil and gas well sites would likely provide an incomplete picture of the chemical uses and releases at these sites and would, therefore, be of limited utility in providing information to communities. For more details about EPCRA, please see appendix VII. TSCA authorizes EPA to regulate the manufacture, processing, use, distribution in commerce, and disposal of chemical substances and mixtures. TSCA provides EPA with several authorities by which EPA may assess and manage chemical risks, including the authority to (1) collect information about chemical substances, (2) require companies to conduct testing on chemical substances, and (3) take action to protect adequately against unreasonable risks. TSCA allows chemical companies to assert confidentiality claims on information provided to EPA; if the information provided meets certain criteria, EPA must protect it from disclosure to the public. EPA maintains a list of chemicals that are or have been manufactured or processed in the United States, called the TSCA inventory. Of the over 84,000 chemicals currently in the TSCA inventory, about 62,000 were already in commerce when EPA began reviewing chemicals in 1979. Since then, EPA has reviewed more than 45,000 new chemicals, of which approximately 20,000 were added to the inventory after chemical companies began manufacturing them. As part of EPA’s Study on the Potential Impacts of Hydraulic Fracturing on Drinking Water Resources, EPA is currently analyzing information provided by nine hydraulic fracturing service companies, including a list of chemicals used in hydraulic fracturing operations. EPA officials said that they expect most of the chemicals disclosed by the service companies to appear on the TSCA inventory list, provided that chemicals are not classified solely as pesticides. 
EPA officials do not expect to be able to compare the list of chemicals provided by the nine hydraulic fracturing service companies to the TSCA inventory until the release of a draft report of the Study on the Potential Impacts of Hydraulic Fracturing on Drinking Water Resources for peer review, expected in late 2014. In August 2011, EPA received a petition from the environmental group Earthjustice and 114 others asking the agency to exercise TSCA authorities and issue rules to require manufacturers and processors of chemicals used in oil and gas exploration and production to develop and provide certain information to EPA. According to the petition, EPA and the public currently lack adequate information about the health and environmental effects of chemicals used in oil and gas exploration and production, and EPA should exercise its TSCA authorities to ensure that chemicals used in oil and gas exploration and production do not present an unreasonable risk of harm to health and the environment. In a letter to the petitioners, EPA granted the petition in part, stating there is value in beginning a rulemaking process under TSCA to obtain data on chemical substances used in hydraulic fracturing. EPA’s letter also stated that the TSCA proposal would focus on providing an aggregate picture of the chemical substances used in hydraulic fracturing, which would complement and not duplicate well-by-well disclosure programs that exist in some states. The letter also indicates that the agency is drafting an Advance Notice of Proposed Rulemaking on this issue. As of August 31, 2012, EPA has not released a publication date for this proposed rulemaking. EPA also intends to convene a stakeholder process to gather additional information for use in developing a proposed rule. For more details about TSCA, please see appendix VIII. 
FIFRA, as amended, mandates that EPA administer pesticide registration requirements and authorizes EPA to regulate the use, sale, and distribution of pesticides to protect human health and preserve the environment. FIFRA requires that EPA register new pesticides; pesticide registration is a very specific process that is not valid for all uses of a particular chemical. Instead, each registration describes the chemical and its intended use (i.e., the crops/sites on which it may be applied), and each use must be supported by research data. According to EPA officials, some pesticides registered under FIFRA are used in hydraulic fracturing, and EPA has approved registrations of some pesticides for this purpose. According to a report about shale gas development by the Ground Water Protection Council, operators may use pesticides to kill bacteria or other organisms that may interfere with the hydraulic fracturing process. For example, glutaraldehyde may be used by operators to eliminate bacteria that produce byproducts that cause corrosion inside the well and was reregistered for this purpose by EPA in 2007. 

Exemptions Are Related to Preventive Programs 

As discussed above, in six of the eight federal environmental and public health laws identified, there are exemptions or limitations in regulatory coverage related to the oil and gas exploration and production industry (there are two exemptions related to CAA). All of these exemptions are related to programs designed to prevent pollution (see table 2). For example, under CWA, EPA generally requires permits for stormwater discharges at construction sites, which prevents sediment from entering nearby streams. However, the Water Quality Act of 1987 and Energy Policy Act of 2005 largely exempted the oil and gas exploration and production sector from these stormwater permitting requirements. 
Four of the exemptions are statutory (related to SDWA, CWA, CAA, and CERCLA), and three are related to regulatory decisions made by EPA (related to CAA, RCRA, and EPCRA). States may have regulatory programs related to some of these exemptions or limitations in federal regulatory coverage. For example, although oil and gas exploration and production wastes are not regulated under RCRA as hazardous, which reduces the federal role in management of such wastes, they are nonetheless solid wastes. State regulations may govern management of solid waste, and certain EPA regulations address minimum requirements for how solid waste disposal facilities should be designed and operated. The exemptions do not limit the authorities EPA has under federal environmental and public health laws to respond to environmental contamination. Table 3 lists EPA authorities that may be applicable when conditions or events at a well site present particular risk to the environment or human health. As noted throughout this report, EPA has used several of these authorities at oil and gas wells. For example, as discussed above, EPA Region 8 has used RCRA’s imminent and substantial endangerment authorities to issue RCRA imminent and substantial endangerment orders to operators in Wyoming after discovering that pits near oil production sites were covered with oil and posed a hazard to birds. Similarly, as discussed above, EPA is using CERCLA’s response authority to investigate private water well contamination in Pavillion, Wyoming. Whether an authority is available depends on requisite conditions being met in a given instance. EPA officials said that, in some instances, response authorities of multiple federal environmental laws could be used to address a threat to public health or the environment. In 2001, EPA and the Department of Justice developed a memo advocating that officials consider the specifics of a situation and use the most appropriate authority. 
See appendixes II through VI for a more detailed discussion of these authorities. 

States in Our Review Implement Additional Requirements and Recently Updated Some Requirements 

The six states in our review implement additional requirements governing a number of activities associated with oil and gas development. One of the states—Pennsylvania—is also part of the Delaware River Basin Commission, a regional commission that implements additional requirements. All six states have updated some aspects of their requirements in recent years. 

States in Our Review Implement Additional Requirements and Certain Federal Requirements 

In addition to implementing and enforcing certain aspects of federal requirements with EPA approval and oversight, the six states in our review implement additional requirements governing a number of activities associated with oil and gas development. State requirements often do not explicitly differentiate between conventional and unconventional development but, in recent years, states have begun to promulgate some requirements that apply specifically to unconventional development. States have regulatory requirements related to a variety of activities involved in developing unconventional reservoirs, including siting and site preparation; drilling, casing, and cementing; hydraulic fracturing; well plugging; site reclamation; waste management and disposal; and managing air emissions. Table 4 compares selected state requirements and related federal environmental and public health requirements; a more comprehensive table is available in appendix X. Several studies noted that development practices and state requirements may vary based on a number of factors, including geology, climate, and the type of resource being developed. We did not assess whether all requirements are appropriate for all states as part of this review. 
All six states we reviewed have state requirements regarding site selection and preparation, though the specifics of their requirements vary. Specifically, states have requirements for baseline testing of water wells, required setbacks from water sources, and stormwater management, among others. For example, three of the six states—Colorado, Ohio, and Pennsylvania—have requirements that encourage or require operators to conduct baseline water testing in certain cases. Colorado requires testing of certain nearby wells when a proposed coalbed methane well is located within a quarter-mile of a conventional gas well or a plugged and abandoned well. In Ohio, baseline water well sampling is required within 1,500 feet of any proposed horizontal well or within 300 feet of any kind of well proposed in an urban area. Pennsylvania does not require baseline testing, but state law presumes operators to be liable for any pollution of water wells within 2,500 feet of an unconventional well that occurs within 12 months of drilling activities, including hydraulic fracturing. Operators in Pennsylvania can defend against this presumption if they have predrilling tests conducted by an independent certified laboratory showing that the pollution predated drilling. State regulators in Pennsylvania said that nearly all companies in Pennsylvania conduct baseline testing of nearby water wells, in many cases up to 4,000 feet from the drilling site. Five of the six states—Colorado, North Dakota, Ohio, Pennsylvania, and Wyoming—we reviewed have requirements related to setbacks for well sites or equipment from certain water sources. For example, in Ohio, oil and gas wells and associated storage tanks generally may not be within 50 feet of a stream, river, or other body of water. 
In Pennsylvania, unconventional wells may not be drilled within 500 feet of water wells without written owner consent unless the operator cannot otherwise access its mineral rights and demonstrates that additional protective measures will be utilized. In Pennsylvania, there are also setbacks from public water supplies and certain other bodies of water such as springs and wetlands. Oil and gas operations are generally not subject to certain stormwater permitting requirements under the Clean Water Act, but four of the six states we contacted—Colorado, North Dakota, Pennsylvania, and Wyoming—have their own stormwater permitting requirements. For example, the Wyoming Department of Environmental Quality requires permit coverage for stormwater discharges from all construction activities disturbing 1 or more acres. These permits require the operator to develop a stormwater management program, including best management practices, that can be reviewed by the Wyoming Department of Environmental Quality. In North Dakota, operators must obtain a permit for construction activities that disturb 5 or more acres, and state officials said that nearly all oil and gas drilling projects meet this threshold. This permit also requires the operator to develop a stormwater management program and implement best management practices for managing stormwater, such as using straw bales or dikes to manage water runoff. We did not identify any stormwater permitting requirements for Ohio and Texas, but their state regulations address stormwater in other ways. For example, operators in Ohio are required to comply with the state’s best management practices during construction, such as design guidelines for constructing access roads. Texas regulations prohibit operators from causing or allowing pollution of surface water and encourage operators to implement best management practices to minimize discharges, including discharges of sediment during storm events. 
States have additional requirements relating to erosion control, site preparation, and surface disturbance minimization. For more details about state siting and site preparation requirements, see appendix IX. All of the six states in our review have requirements related to how wells are to be drilled and how casing should be installed and cemented in place, though the specifics of their requirements vary. For example, states have different requirements regarding how deep operators must run surface casing to protect groundwater. In Pennsylvania, operators are required to run surface casing approximately 50 feet below the deepest fresh groundwater or at least 50 feet into consolidated rock, whichever is deeper. Generally, the surface casing may not be set more than 200 feet below the deepest fresh groundwater unless necessary to set the casing in consolidated rock. Different casing and cementing requirements apply in Pennsylvania when drilling through coal formations, which state regulators said is common in the southwest part of the state. In Texas, operators are required to run surface casing to protect all usable quality water, as defined by the Texas Commission on Environmental Quality. The depth of the surface casing may be specified in a letter by the commission or in rules specific to a particular oil or gas field, which account for local considerations. In no case may surface casing be set deeper than 200 feet below the specified depth without prior approval from the Texas Railroad Commission, the oil and gas regulator in Texas. Operators in Wyoming are generally required to run surface casing to reach a depth below all known or reasonably estimated usable groundwater as defined in regulations and generally 100 to 120 feet below certain permitted water supply wells within a quarter-mile, but certain coalbed methane wells are exempt from these requirements. 
Until 2012, Ohio did not specify a depth to which surface casing was required to be set, but according to state regulators, the depth of the casing used to protect groundwater was dictated through the permitting process, and regulators and operators were generally following the same casing and cementing requirements for unconventional wells as they would for Class II UIC wells. Ohio adopted new regulations effective August 2012 that generally require operators to run surface casing at least 50 feet below the base of the deepest underground source of drinking water or at least 50 feet into bedrock, whichever is deeper. Among the six states we contacted, North Dakota and Ohio are the only states with specific casing and cementing provisions for horizontal wells. However, all six states have some requirements—whether through law, regulation, or the permitting process—that generally require operators to provide regulatory officials with information about the vertical and horizontal drilling paths. For example, an application for a permit to drill a horizontal well in Wyoming must include information about the vertical and horizontal paths of the well, and operators must provide notice to owners within a half-mile of any point on the entire length of the well. In addition, operators must (1) provide notification and obtain approval from the Wyoming Oil and Gas Conservation Commission before beginning horizontal drilling and (2) file a description of the exact path of the well, known as a directional survey, within 30 days of well completion. North Dakota requires a different permit to drill a horizontal well than it does for a vertical well, and the horizontal permit contains information about the horizontal path of the well. For more details about state drilling, casing, and cementing requirements, see appendix IX. All six states we reviewed have requirements for disclosing the chemicals used in hydraulic fracturing, but the specific requirements vary (see table 5). 
Four states—Colorado, North Dakota, Pennsylvania, and Texas— require disclosure through the website FracFocus, which is a joint project of the Ground Water Protection Council and the Interstate Oil and Gas Compact Commission. For example, operators that perform hydraulic fracturing in Texas are required to upload certain information to the website FracFocus within 30 days after completion of the well or 90 days after the drilling operation is completed, whichever is earlier. Information required to be uploaded to FracFocus includes, among other things, the operator’s name; the date of completion of hydraulic fracturing; the well location; the total volume of water used to conduct fracturing; and chemicals used, including their trade names, suppliers, intended use, and concentration. In Ohio, companies have options as to how to disclose information, including through FracFocus. Wyoming’s chemical disclosure requirements were developed prior to the development of FracFocus, and the state does not require operators to disclose information through the website. Among the six states we contacted, Wyoming is the only state that requires operators to disclose certain chemical information prior to conducting hydraulic fracturing. Specifically, as part of their application for permit to drill, operators are required to submit information on the chemicals proposed to be used during hydraulic fracturing. Five of the six states—Colorado, Ohio, Pennsylvania, Texas, and Wyoming—have specific provisions for protecting information on hydraulic fracturing fluids that is claimed as confidential business information or trade secrets. Four of the six states—Colorado, Ohio, Pennsylvania, and Texas—specifically require that the information must be provided to health professionals for diagnosis or treatment and to certain officials responding to a spill or a release. 
For example, in Texas, if an operator claims that a chemical is subject to trade secret protection, the chemical family or other similar description must generally be provided. Operators in Texas may not withhold information, including trade secrets, about chemicals used during hydraulic fracturing from health professionals or emergency responders who need the information for diagnostic, treatment, or other emergency response purposes, but health professionals and emergency responders must hold the information confidential except as required for sharing with other health professionals, emergency responders, or accredited laboratories for diagnostic or treatment purposes. Texas’ regulations also allow for certain entities—including the owner of the land on which the well is located, an adjacent landowner, and relevant state agencies—to challenge a claim to trade secret protection. Five of the six states—Colorado, North Dakota, Ohio, Pennsylvania, and Wyoming—have additional requirements specifically related to hydraulic fracturing. For example, Colorado, North Dakota, Ohio, and Wyoming require operators to continuously monitor certain pressure readings during hydraulic fracturing and to notify the state if pressure exceeds a certain threshold. Ohio also requires the suspension of operations when anticipated pressures are exceeded. North Dakota has mechanical integrity requirements specific to hydraulic fracturing, including requirements for specific types of casing, valves, and other equipment, which vary based on different fracturing scenarios. In addition, Colorado, Ohio, Pennsylvania, and Wyoming require operators to notify state regulators prior to conducting hydraulic fracturing, which provides state regulators the opportunity to conduct inspections during the hydraulic fracturing. Colorado requires notice 48 hours prior to conducting hydraulic fracturing, and Ohio and Pennsylvania require notice 24 hours prior. 
Wyoming does not require a specific period of notice. In Wyoming, benzene, toluene, ethylbenzene, and xylene (BTEX compounds) and petroleum distillates may only be used for hydraulic fracturing with prior authorization from state oil and gas regulators. Pennsylvania law requires blowout preventers to be used when drilling into an unconventional formation. For more details about state hydraulic fracturing requirements, see appendix IX. All six states in our review have requirements regarding well plugging, such as notifying the state prior to plugging or using specific materials or methods to do so. For example, operators in Colorado must obtain prior approval from state regulators for the plugging method and provide notice of the estimated time and date of plugging. Colorado regulations specify that the material used for plugging must be placed in the well in a manner that permanently prevents migration of oil, gas, water, or other substances out of the formation in which it originated. Cement plugs must be a minimum of 50 feet in length and must extend a minimum of 50 feet above each zone to be protected. After plugging the well, operators must submit reports of plugging and abandonment to the Colorado Oil and Gas Conservation Commission and include information specifying the fluid used to fill the wellbore, information about the cement used, date of work, and depth of plugs. In Pennsylvania, operators must follow (1) specific provisions for well plugging based on whether the well is located in a coal area or noncoal area or (2) an alternate approved method. Prior to plugging a well in an area underlain by a workable coal seam, the oil and gas operator must notify the state and the coal company to permit representatives to be present at the plugging. In addition, all six states have programs to plug wells that were improperly plugged and have been abandoned, though their level of activity varies. 
For example, state regulators in Texas said that the primary objective of their program, which began in 1983, is to plug abandoned oil and gas wells that are causing pollution or threatening to cause pollution for which a responsible operator does not exist; the responsible operator failed to plug the well; or the responsible operator failed to otherwise bring the wells into compliance. As of 2009, Texas state regulators had plugged 30,000 wells, and approximately 8,000 potentially abandoned wells remained throughout the state. Officials stated, however, that many of these abandoned wells may be re-used for development of previously overlooked reservoirs. State regulators in North Dakota said that the number of abandoned wells in the state is very low compared with other states because the state was fairly late to oil and gas development—with major development starting in the 1950s—and that the state had a good tracking system in place during the early days of development. State regulators in North Dakota used funds from its well plugging program to plug two wells in the last year. For more details about state well plugging requirements, see appendix IX. All six states in our review have requirements for site reclamation, though the extent of the requirements varies. Five states—Colorado, Ohio, North Dakota, Pennsylvania, and Wyoming—have requirements both for backfilling soil and for revegetating areas. For example, in Colorado, final reclamation must generally be complete within 3 months of plugging a well on crop land and within 12 months on noncrop land. Reclamation in Colorado involves returning segregated soil horizons to their original relative positions; returning crop land to its original contour; as near as practicable, returning noncrop land to its original contour to achieve erosion control and long-term stability; and adequately tilling to establish a proper seedbed. 
In Wyoming, operators must begin reclamation within 1 year of permanent abandonment of a well or last use of a pit and in accordance with the landowner’s reasonable requests, or to resemble the original vegetation and contour of adjoining lands. In addition, where practical, topsoil must be stockpiled during construction for use in reclamation. Texas has requirements for contouring soil, but we did not identify requirements for revegetating the area. For more details about state site reclamation requirements, see appendix IX. All six states in our review have some requirements regarding waste management and disposal, though specific requirements and practices vary across and within states. For example, regulators in Colorado said that the method of waste disposal varies based on the geological formation being exploited and the location of the production well. In some parts of the state, they said that the produced water generated is very salty and is therefore generally disposed of in a Class II UIC well. In contrast, in the Raton Basin—a coalbed methane formation near the border with New Mexico—the produced water is of sufficiently good quality that much of it is discharged to surface waters, according to state regulators. All six states we reviewed have requirements regarding the use of pits for storage of produced water, drill cuttings, and other substances. For example, in North Dakota, a lined pit may be temporarily used to retain solids or fluids generated during activities including well completion, but the contents of the pits must be removed within 72 hours after operations have ceased and must be disposed of at an authorized facility. Pennsylvania requires that certain pits be lined and requires the liners to meet certain permeability, strength, thickness, and design standards; the pit itself must also be constructed so that it will not tear the liner and can bear the weight of the pit contents. 
In addition, Colorado and Wyoming require pitless drilling systems (tanks) to be used in certain circumstances. For example, Colorado requires pitless drilling systems for produced water from new oil and gas wells within a specified distance of certain drinking water supply areas, and Wyoming requires pitless drilling systems in areas where groundwater is less than 20 feet below the surface. Underground injection of produced water in Class II UIC wells is a common method of disposal of produced water in five of the six states we reviewed. For example, state regulators in Ohio said that there are 177 Class II UIC disposal wells currently in operation, and 98 percent of the fluid waste from oil and gas wells in Ohio is disposed of in these Class II UIC wells. As noted previously, five out of the six states we reviewed have primary responsibility for regulating injection wells, whereas EPA implements the program in Pennsylvania. The five states in our review that have been granted primacy for their Class II UIC programs obtained it under the alternative provisions in which they demonstrate to EPA that their program is effective in preventing endangerment of underground sources of drinking water, in lieu of adopting all Class II UIC requirements in EPA regulations. All states have requirements for Class II UIC wells relating to casing and cementing, operating pressure, mechanical integrity testing, well plugging, and the monitoring and reporting of certain information, among other requirements. For example, North Dakota requires the operators of all new Class II UIC wells to demonstrate the mechanical integrity of the well and requires existing Class II UIC wells to demonstrate continued mechanical integrity at least once every 5 years. 
In North Dakota, mechanical integrity is demonstrated by showing that there is no significant leak in, for example, the casing, and that there is no significant fluid movement into an underground source of drinking water through vertical channels adjacent to the injection well. Texas also requires operators to demonstrate the mechanical integrity of Class II UIC wells generally by conducting specified pressure tests before commencing injection, after conducting maintenance, and every 5 years. With regard to monitoring and reporting, Ohio requires operators to monitor injection pressures and volumes for each disposal well on a daily basis and to report annually on maximum and monthly average pressure and volumes. Aside from underground injection, there are several other options for disposal of produced water, though the specifics vary across and within states. For example, regulatory agencies issue NPDES permits in Colorado, Texas, and Wyoming for direct discharges to surface waters in certain cases; in doing so, the states must apply, where applicable, EPA's effluent limitations guidelines discussed above. According to state regulators in Wyoming, the state has about 1,000 currently active permits for discharges of produced water from coalbed methane formations and 500 permits for produced water from conventional formations. In contrast, state regulators in North Dakota said that there are no direct surface discharges of produced water in their state because the produced water is too salty. Some states, such as Colorado and Pennsylvania, also have commercial facilities, which treat produced water before discharging it to surface waters. In addition, disposal to a POTW is an option in Ohio and Pennsylvania, but there have been some recent efforts to restrict such disposal. One concern regarding disposal to POTWs is that these facilities may not have the technology necessary to remove key pollutants, including total dissolved solids, from the waste stream. 
In 2010, Ohio’s Environmental Protection Agency (OEPA) approved a permit modification that allowed a POTW in Warren, Ohio, to accept 100,000 gallons per day of produced water with concentrations of less than 50,000 milligrams per liter of total dissolved solids, which was then diluted and discharged to surface waters. However, the Director of OEPA subsequently issued a determination in 2011 that the permit had been unlawfully issued because Ohio law does not generally permit the disposal of produced water through a POTW. In response, OEPA did not reauthorize the POTW to accept produced water when its NPDES permit came up for renewal in 2012. In July 2012, however, OEPA’s decision was reversed by an administrative review commission, which held that the matter was outside of OEPA’s jurisdiction. Instead, the power to prohibit disposal to a POTW lies with the Ohio Department of Natural Resources. Accordingly, the commission removed the NPDES permit’s prohibition on accepting produced water. Prior to 2011, POTWs in Pennsylvania also accepted produced water from oil and gas well sites. The Pennsylvania Department of Environmental Protection issued administrative orders to POTWs in Pennsylvania requiring, among other things, that the POTWs restrict the volume of oil and gas wastewater they were accepting, evaluate the impacts of oil and gas wastewaters on their treatment process, and submit certain samples of oil and gas wastewater accepted for treatment. In addition, the state of Pennsylvania requested that operators of Marcellus shale gas wells stop delivering produced water to POTWs and began revising the POTWs’ NPDES permits. State officials later reported that POTWs in Pennsylvania were no longer accepting produced water from the Marcellus shale, and EPA Regional officials said that they believe that POTWs are accepting less produced water. 
In addition to permanent disposal of produced water, all six states in our review allow for recycling or other reuses of produced water. For example, according to a 2011 report, over 50 percent of the produced water in Colorado is recycled. In addition, state regulators in Pennsylvania said that the best option for dealing with produced water in the state is recycling, and the Department of Environmental Protection can track what percentage of recycled water was used in hydraulic fracturing based on information required on well completion reports. Approximately 90 percent of produced water in Pennsylvania is recycled, according to state regulators. The Texas Railroad Commission has approved several recycling projects in the Barnett Shale to reduce the amount of fresh water used in development activities there. Four of the six states—Colorado, North Dakota, Ohio, and Wyoming—also allow operators to reuse certain types of fluid waste for road applications. For example, in Ohio, produced water, excluding flowback from hydraulic fracturing, may be used for dust and ice suppression on roads with the approval of local governments; approximately 1 percent of produced water is used in this way. In Wyoming, road and land applications may be permitted as reuses of produced water. North Dakota allows road but not land application of produced water. Regulatory agencies in all six states implement requirements for the disposal of waste such as drill cuttings. For example, in Colorado, drill cuttings may be buried in pits at the well site, an activity which is regulated by the Colorado Oil and Gas Conservation Commission. Drill cuttings taken off site for disposal at a commercial waste facility must comply with the regulations of the state’s Department of Public Health and Environment that govern those facilities. 
Texas allows drill cuttings to be landfarmed on the well site where they were generated with the written permission of the surface owner of the site if they were obtained using drilling fluids with a chloride concentration of 3,000 milligrams per liter or less. Texas allows on-site burial of drill cuttings that were obtained using drilling fluids with a chloride concentration in excess of 3,000 milligrams per liter. In North Dakota, operators frequently bury drill cuttings on-site where the North Dakota Industrial Commission’s Oil and Gas Division has authority, but, in some cases, the drill cuttings may be disposed of at a landfill under the jurisdiction of the Department of Health due to shallow groundwater or permeable subsoil. As discussed earlier in this report, officials in the six states we reviewed were not aware of any oil or gas well sites that would be regulated as small-quantity generators of hazardous waste under RCRA. Pursuant to RCRA, regulation of waste that is not considered hazardous is largely a state responsibility. Some states have special categories of waste and associated additional requirements that apply to industrial wastes generally, or oil and gas wastes specifically. For example, waste from crude oil and natural gas exploration and production in North Dakota is called special waste. Special waste landfills must be permitted and comply with specific design standards. Currently, there are four special waste landfills in North Dakota with another five proposed special waste landfills at the beginning stages of the permitting process. State regulators said that special waste consists mostly of drill cuttings but can also include other things such as contaminated soil. In Pennsylvania, oil and gas waste falls into a category of waste called residual waste that applies to, among other things, certain wastes from industrial, mining, or agricultural operations. 
Residual waste disposal must be permitted and is subject to processing and storage rules. All six states in our review have requirements for managing and disposing of wastes, such as oilfield equipment, drilling solids, and produced water, that have been exposed to or contaminated with naturally-occurring radioactive material (NORM) or technologically-enhanced NORM. NORM occurs naturally in some geologic formations that also contain oil or gas; when NORM is brought to the surface during drilling and production, it remains in drill cuttings and produced water and, under certain conditions, creates scales or deposits on pipes or other oilfield equipment. Officials at the Colorado Department of Public Health and Environment said that they set tiers for how to manage materials that contain NORM based on their level of radioactivity. In addition, they said that the department is working with the Colorado Oil and Gas Conservation Commission to require operators to perform certain tests on produced water before allowing produced water to be used for road application. Texas officials said that the state requires operators to identify NORM-contaminated equipment with the letters “NORM” by securely attaching a clearly visible waterproof tag or marking with a legible waterproof paint or ink. In addition, Texas requires operators to dispose of oil and gas NORM waste by methods that are specifically authorized by rule or specifically permitted. State regulators in Wyoming said that many NPDES permits for direct discharges to surface waters have limits on radioactivity that would likely lead the operator to dispose of produced water contaminated with NORM in a Class II UIC well. For more details about states’ waste management and disposal requirements, see appendix IX. Five of the six states we reviewed have permitting or registration requirements for managing air emissions from oil and gas production sites. 
In addition, all six states have requirements related to venting and flaring of gas and limiting or managing emissions of hydrogen sulfide—a hazardous and deadly gas—at drilling sites. Five of the six states we reviewed—Colorado, North Dakota, Ohio, Texas, and Wyoming—have developed permitting or registration requirements that apply to oil and gas development. For example, according to state regulators, the vast majority of production wells in Colorado require air permits. Operators with certain condensate tanks and tank batteries are required to obtain a permit if the tanks have uncontrolled actual emissions of volatile organic compounds greater than or equal to 2 tons per year in areas that are not attaining certain air quality standards (nonattainment areas) or greater than or equal to 5 tons per year in an attainment area. As part of the permit requirements, operators in nonattainment areas must reduce emissions of volatile organic compounds from uncontrolled actual emissions by 90 percent during certain times of the year and by 70 percent during other times; they must also reduce emissions from dehydration systems by 90 percent. In Ohio, an operator meeting certain requirements must obtain an air permit that lists each source of emissions; all applicable federal and state rules that apply to the sources; operational restrictions; and monitoring, recordkeeping, reporting, and testing requirements. Wyoming officials noted that oil and gas facilities are subject to general state permitting requirements but did not identify any permitting requirements specific to air emissions from oil and gas development. In Wyoming, state regulators have worked with industry to achieve voluntary reductions from mobile sources in certain parts of the state that may soon not meet air quality standards for ozone. 
Specifically, officials at the Wyoming Department of Environmental Quality said that they have asked operators in certain areas to agree to implement voluntary reductions in volatile organic compounds and nitrogen oxides and to install controls on diesel engines on mobile drilling rigs; regulators then include these requirements in the air permit issued to the operator. North Dakota and Texas also have permitting or registration requirements, and Pennsylvania is in the process of developing an inventory for oil and gas emissions information. All six states have some requirements for flaring excess gas encountered during drilling and production, which may otherwise pose safety hazards and contribute to emissions. For example, operators in Pennsylvania who encounter excess gas during drilling or hydraulic fracturing must capture the excess gas, flare it, or divert it away from the drilling rig in a manner that does not create a hazard to public health and safety. According to state regulators in Wyoming, the Oil and Gas Conservation Commission has jurisdiction for flaring prior to production when the primary concern with flaring is safety. For flaring that occurs after production has begun, the Department of Environmental Quality requires 98 percent combustion efficiency. All six states have safety requirements to limit and manage emissions of hydrogen sulfide—a hazardous and deadly gas—at drilling sites. For example, in Texas, operators are subject to detailed requirements in areas where exposure to hydrogen sulfide could exceed a certain threshold if a release occurred, taking into consideration whether the area of potential exposure includes any public areas such as roads. Requirements relate to posting warning signs, using fencing, maintaining protective breathing equipment at the well site, installing a flare line and a suitable method for lighting the flare, and conducting training. In some cases, hydrogen sulfide requirements overlap with flaring requirements. 
For example, flares used for treating gas containing hydrogen sulfide in North Dakota must be equipped and operated with an automatic ignitor or a continuous burning pilot, which must be maintained in good working order, including flares that are used for emergency purposes only. For more details about state requirements for managing air emissions, see appendix IX. Regional Commission Implements Additional Requirements One of the states in our review—Pennsylvania—is also part of a regional commission that implements additional requirements governing several aspects of natural gas development. Specifically, the Delaware River Basin Commission is a regional body whose members include the governors of Delaware, New Jersey, New York, and Pennsylvania, as well as the U.S. Army Corps of Engineers’ Division Engineer for the North Atlantic Division. The commission regulates water quantity and quality within the basin, which spans approximately 13,500 square miles. In December 2010, the Delaware River Basin Commission published draft Natural Gas Development Regulations, which are currently under consideration for adoption, and the commission will not issue any permits for shale gas wells within the basin until the final regulations have been adopted. The draft regulations propose a number of requirements related to the protection of certain landscapes and waters and how to handle wastewater generated by natural gas development. For example, the proposed regulations require that produced water stored on the well pad be kept in enclosed tanks. In addition, operators of treatment and/or discharge facilities proposing to accept natural gas wastewater would be required to provide the commission with information on the contents of the proposed discharge and submit a study showing that the proposed discharge could be adequately treated. 
Natural gas well operators would also be required to have natural gas development plans for projects that exceed certain thresholds for acreage or number of wells. According to commission officials, the natural gas development plans would allow the commission to consider the cumulative impacts of development from numerous well pads, associated roads, and pipeline infrastructure, and to minimize and mitigate disturbance on lands most critical to water resources, such as core forests and steep slopes. The plans would also help protect water resources for approximately 15 million people, including residents of New York City and Philadelphia. States Have Recently Updated Some Requirements All six states in our review have updated some aspects of their requirements in recent years. Key examples include the following: Colorado made extensive amendments to its oil and gas regulations in 2008, which included, among other things, restrictions on locating wells near drinking water sources, measures to manage stormwater, and requirements to consult with the Colorado Division of Wildlife in certain cases to minimize adverse impacts on wildlife. According to state officials, these regulatory updates served three primary purposes: (1) address the growing impacts of increased oil and gas development; (2) implement state legislation passed in 2007 directing the Colorado Oil and Gas Conservation Commission to work with the Colorado Department of Public Health and Environment and the Colorado Division of Wildlife to update its regulations; and (3) update existing rules to enhance clarity, respond to new information, and reflect current practices and procedures. In 2012, North Dakota implemented 26 rule changes, including the requirement for operators to drain pits and properly dispose of their contents within 72 hours after well completion, servicing, or plugging operations have ceased. 
According to state officials, this change was implemented in response to a number of pit overflows that occurred during the spring melt in 2010 and 2011. In 2012, Ohio adopted new oil and gas well construction regulations to implement state legislation passed in 2010. The new regulations include casing and cementing requirements and requirements to disclose the chemicals used in hydraulic fracturing. Pennsylvania passed legislation in 2012 that, among other things, requires unconventional wells to be sited at greater setback distances from existing buildings and water wells than was previously required for all wells and requires chemical disclosure through FracFocus. In addition, the new legislation increases the distance within which an operator of an unconventional well may be presumed liable in the event of pollution of nearby water wells from 1,000 feet to 2,500 feet. The Texas Commission on Environmental Quality updated its air emissions regulations for oil and gas facilities in 2011, including emissions limitations for nitrogen oxide and volatile organic compounds. Texas officials told us that changes included requirements for operators to install controls on stationary compressor engines and storage tanks. In addition, operators in the Dallas-Fort Worth area have agreed to voluntarily reduce emissions of volatile organic compounds by replacing pneumatic valves with no-bleed or low-bleed valves, which helps to address nonattainment issues in the area while also reducing emissions of hazardous air pollutants. Texas also adopted a regulation in December 2011 regarding chemical disclosure requirements in order to implement state legislation passed several months earlier. In 2010, Wyoming updated its chemical disclosure requirements. 
According to state regulators, operators were always required to provide notification to the Wyoming Oil and Gas Conservation Commission before conducting hydraulic fracturing, but recent regulatory changes clarified these requirements and also added detailed requirements on what information was required to be disclosed. In the last 3 years, Colorado, Ohio, and Pennsylvania volunteered to have parts of their regulations reviewed by the State Review of Oil and Natural Gas Environmental Regulations (STRONGER) program, which is administered by the Ground Water Protection Council and brings together state, industry, and environmental stakeholders to review state oil and gas environmental regulations and make recommendations for improvement. Ohio and Pennsylvania have made regulatory changes that reflect STRONGER’s recommendations. For example, STRONGER completed a review of Pennsylvania’s regulations in September 2010. The review team commended the state for encouraging baseline groundwater testing in the vicinity of wells but also recommended that the state consider whether the testing radius should be expanded to take into account the horizontal portions of fractured wells. As discussed above, in 2012, Pennsylvania passed legislation that increases the distance within which an operator of an unconventional well may be presumed liable in the event of pollution of nearby water wells from 1,000 feet to 2,500 feet. State regulators said that the addition was in response to the state’s September 2010 STRONGER review and the Governor’s Marcellus Shale Advisory Commission. State regulators are also considering additional regulatory changes in response to the remaining recommendations of the Governor’s Marcellus Shale Advisory Commission. 
Additional Requirements Apply on Federal Lands Federal land management agencies, including the Bureau of Land Management (BLM), Forest Service, National Park Service, and Fish and Wildlife Service (FWS), manage federal lands for a variety of purposes. Specifically, both the Forest Service and BLM manage their lands for multiple uses, including oil and gas development; recreation; and provision of a sustained yield of renewable resources, such as timber, fish and wildlife, and forage for livestock. By contrast, the Park Service manages its lands to conserve the scenery, natural and historical objects, and wildlife so they remain unimpaired for the enjoyment of present and future generations. Similarly, FWS manages national wildlife refuges for the benefit of current and future generations, seeking to conserve and, where appropriate, restore fish, wildlife, plant resources, and their habitats. Each of these agencies imposes additional requirements for oil and gas development on its lands to meet its obligations with respect to its mission. These additional federal requirements are the same for conventional and unconventional oil and gas development. In some cases, the surface rights to a piece of land and the right to extract oil and gas—called mineral rights—are owned by different parties. For example, private mineral rights might underlie lands where the surface is managed by a federal agency. Requirements for developing mineral rights vary based on whether the mineral rights are owned by the federal government or by a private entity. Requirements for Federally Owned Mineral Rights Requirements for operators developing federally owned mineral rights are imposed by federal agencies during planning and leasing processes carried out by federal agencies. Operators must also meet specific requirements during several of the activities involved in oil and gas development. 
BLM has primary authority for issuing leases and permits for federal oil and gas resources even in cases when surface lands are managed by other federal agencies or owned by private landowners. The majority of federal oil and gas leases underlie lands managed by BLM or the Forest Service, but there are some federal oil and gas resources available for leasing under lands managed by other federal agencies or private landowners. Altogether, BLM oversees oil and gas development on approximately 700 million subsurface acres. Under the National Environmental Policy Act (NEPA) (Pub. L. No. 91-190 (1970), codified as amended at 42 U.S.C. §§ 4321-4347 (2012)), agencies must evaluate the environmental effects of proposed federal actions, generally by preparing either an environmental assessment or an environmental impact statement. After the planning process, BLM takes the lead in preparing the NEPA analysis for leases when the surface lands are managed by BLM or owned by a private landowner (see table 6). For Forest Service lands, the Forest Service takes the lead in preparing the NEPA analysis and coordinates with BLM so that BLM’s subsequent leasing decision can be supported by the same analysis. At both agencies the NEPA review focuses on how the sale of leases may affect the environment and public health and, according to BLM officials, often includes mitigation measures that ultimately become stipulations on leases and permits for that tract of federal land. After the environmental review is completed, BLM sells the lease to an operator through an auction or by other means. After acquiring a lease for the development of federal oil and gas, an operator is required to submit an application for permit to drill (APD) for individual wells to BLM. According to BLM officials, the APD is a comprehensive plan for drilling and related activities, which is approved by BLM. Prior to permit issuance for the proposed drilling activity, BLM is required to document that needed reviews under NEPA have been conducted. 
According to officials, at this step BLM conducts site-specific NEPA analysis, often drawing on the previous NEPA analysis conducted prior to the lease sale, but supplemented with more specifics about the proposed well site and related facilities, such as access roads or pipelines. The environmental review may also identify mitigation measures that could be used to reduce the environmental effects of drilling. The APD includes two key components: (1) the drilling plan, which describes the plan for drilling, casing, and cementing the well; and (2) the surface use plan of operations, which describes surface disturbances, such as road construction to the well pad and installation of any needed pipelines or other infrastructure. BLM is responsible for reviewing and approving the APD as a whole but gets input from the surface land management agency regarding the surface use plan of operations. For example, the Forest Service is responsible for review and approval of the surface use plan of operations component of the APD. After reviewing the operator’s APD, BLM approves the APD, often by attaching conditions of approval and requiring the operator to take mitigation measures as described in the environmental review or recommended by the surface land management agency. Once the APD is approved, and any state or local approvals are obtained, the operator can begin work. BLM has overall responsibility for ensuring compliance with approved APDs but coordinates with other surface land management agencies as appropriate. According to BLM officials, BLM is responsible for inspections and enforcement related to drilling operations, including running tests on casing and cementing. In addition, BLM officials said that they coordinate with surface land management agencies regarding surface conditions. Forest Service officials said that the Forest Service is responsible for conducting inspections relative to surface uses authorized by the surface use plan of operations. 
These officials said that if Forest Service personnel note possible noncompliance related to drilling or production operations, they notify and coordinate with BLM. Similarly, officials said that, if BLM conducts an inspection and notices potential violations of the surface use plan of operations, they contact the Forest Service. Operators of wells accessing federal oil and gas also face requirements related to activities involved in oil and gas development. Specifically, these requirements are related to siting and site preparation; drilling, casing, and cementing; well plugging; site reclamation; waste management and disposal; and managing air emissions. Requirements are as follows:

Siting and site preparation. BLM requires an operator to identify all known oil and gas wells within a 1-mile radius of the proposed location. BLM does not require baseline testing of groundwater near the proposed well site. BLM generally prohibits an operator from conducting operations in areas highly susceptible to erosion, such as floodplains or wetlands, and recommends that operators avoid steep slopes and consider temporarily suspending operations when weather-related conditions, such as freezing or thawing ground, would cause excessive impacts.

Drilling, casing, and cementing. As discussed above, operators must submit detailed drilling plans as part of their APD. The drilling plan must be sufficiently detailed for BLM to appraise the technical adequacy of the proposed project and must include, among other things: (1) geologic information about the formations that the operator expects to encounter while drilling; (2) whether these formations contain oil, gas, or useable water and, if so, how the operator plans to protect such resources; (3) a proposed casing plan, including details about the size of the casing and the depths at which each layer of casing will be set; (4) the estimated amount and type of cement to be used in the well; and (5) a description of any horizontal drilling that is planned.

Well plugging. Operators are required to provide notice to and get approval from BLM prior to plugging a well and to comply with specific technical standards in plugging the well.

Site reclamation. Operators describe their plans for reclamation in the surface use plan of operations submitted as part of the APD. BLM requires operators to return the disturbed land to productive use. All well pads, pits, and roads must be reclaimed and revegetated. Interim and final reclamation generally must be completed within 6 months of the well entering production and being plugged, respectively.

Waste management and disposal. In the surface use plan of operations, operators must describe the methods and locations proposed for safe disposal of wastes, such as drill cuttings, salts, or chemicals that result from drilling the proposed well. The description must also include plans for the final disposition of drilling fluids and any produced water recovered from the well.

Managing air emissions. For operations in formations that could contain hydrogen sulfide, BLM requires a hydrogen sulfide operations drilling plan, which describes safety systems that will be used, such as detection and monitoring equipment, flares, and protective equipment for essential personnel. 
In some cases, BLM and states may regulate similar activities; in such cases, operators must comply with the more stringent regulation. For example, North Dakota state requirements allow the use of pits only for short-term storage of produced water. BLM generally allows the use of pits for longer-term storage of produced water, but operators cannot do so on federal lands in North Dakota due to state requirements. See appendix X for a comparison of federal environmental requirements, state requirements, and additional requirements that apply on federal lands. BLM recently proposed new requirements for oil and gas development on federal lands. Specifically, in May 2012, BLM proposed regulations that update and add to its current requirements related to hydraulic fracturing. As proposed, these regulations would require operators of wells under federal leases to (1) publicly disclose the chemicals they use in hydraulic fracturing; (2) take certain steps to ensure the integrity of the well, including complying with certain cementing standards and confirming through mechanical integrity testing that wells to be hydraulically fractured meet appropriate construction standards; and (3) develop plans for managing produced water from hydraulic fracturing and store flowback water from hydraulic fracturing in a lined pit or a tank. According to BLM officials, the proposed rule is intended to improve stewardship and operational efficiency by establishing a uniform set of standards for hydraulic fracturing on public lands, and a final rule is expected in the fall of 2012. Requirements for Privately Owned Mineral Rights under Federal Surface Lands Subject to some restrictions, owners of mineral rights that underlie federal lands have the legal authority to explore for oil and gas and, if such resources are found, to develop them. 
Federal land management agencies’ authorities to control the surface impacts of drilling for privately owned minerals underlying federal lands vary based on a variety of factors, including which federal agency is responsible for managing the surface lands. According to BLM officials, private mineral owners seeking to develop oil and gas would need to obtain a right-of-way grant from BLM for any surface disturbance, including the well pad, but otherwise BLM has limited authority over the private owners’ use and occupancy of the BLM-managed surface lands. Officials said that BLM would have the same rights as a private surface owner under state law to hold a mineral rights owner to “reasonable surface use.” BLM officials explained that BLM would perform a NEPA analysis prior to issuing the right-of-way grant. According to officials, the agency applies its general regulations for granting rights of way, but BLM did not have specific guidance regarding oversight of private mineral operations on BLM lands. According to Forest Service officials, Forest Service authority related to the development of privately owned minerals is limited because private mineral owners have the legal right to develop such resources. The Forest Service manages a large number of wells accessing privately owned minerals. Specifically, Forest Service officials said that, of the 19,000 operating oil and gas wells on Forest Service lands, about three-fourths are producing privately owned minerals. Forest Service officials explained that the Forest Service evaluates the effects of the development and, through negotiations with the operator, tries to reach agreement on certain mitigation measures. Officials explained that these mitigation measures are generally not as stringent or specific as mitigation measures used on federal leases. In addition, Forest Service officials explained that enforcement options are limited for environmental damage from development of privately owned minerals. 
Generally, the Forest Service can work with state oil and gas agencies to have them enforce any relevant state requirements regarding surface impacts, or the Forest Service can seek an injunction from the court to stop damaging actions and then pursue possible damages or restitution via the court. According to Forest Service officials, development of privately owned minerals has been a particular challenge in the Allegheny National Forest in Pennsylvania, where privately owned minerals underlie more than 90 percent of the forest. Forest Service officials stated that there are approximately 1,000 new wells drilled in this forest each year, most of which are shallow conventional oil development. Officials said that the pace of this development has made it difficult for the Forest Service to manage other forest uses, such as recreation and timber extraction. FWS similarly faces challenges overseeing the development of privately owned minerals on its lands (see GAO, National Wildlife Refuges: Opportunities to Improve the Management and Oversight of Oil and Gas Activities on Federal Lands, GAO-03-517 (Washington, D.C.: Aug. 28, 2003)), partly because FWS does not currently have regulations that directly address oil and gas development. FWS officials said that the agency is developing a proposed rule that will set requirements for operators developing privately owned minerals. Officials expect an Advance Notice of Proposed Rulemaking to be issued in calendar year 2012. FWS officials said that, despite having minimal requirements for operators drilling for privately owned minerals, they can use other federal authorities and work with federal and state agencies to minimize or remediate injury to FWS lands. For example, FWS worked with EPA to respond to a spill of produced water into a stream on a National Wildlife Refuge in Louisiana in 2005, in violation of CWA. EPA, the Coast Guard, and the Department of Justice worked together on the case, and the operator ultimately paid $425,000 to FWS for the two affected wildlife refuges. 
According to agency officials, however, without specific regulations, FWS faces challenges conducting daily management and oversight of oil and gas activities on FWS lands. The Park Service’s 9B regulations govern potential impacts to all park system resources and values resulting from exercise of private oil and gas rights within Park Service administered lands. These regulations require an operator to submit a proposed plan of operations to the Park Service, which outlines the activities that are proposed for Park Service lands, including drilling, production, transportation, and reclamation. The regulations also outline certain requirements for operators, including that operations be located at least 500 feet from surface waters, that fences be used to protect people and wildlife, and that during reclamation the operator reestablish native vegetation. The Park Service analyzes the operator’s proposed plan of operations to ensure that the proposed plan complies with the 9B regulations. Also, in determining whether it can approve an operation, the Park Service undertakes an environmental analysis under NEPA. Once the Park Service approves the proposed plan of operations, the operator can begin drilling. The Park Service continues to have access to the site for monitoring and enforcement purposes. In November 2009, the Park Service issued an Advance Notice of Proposed Rulemaking to update its 9B regulations; a proposed rule is expected in September 2013, according to agency officials. Federal and State Agencies Reported Several Challenges Regulating Unconventional Oil and Gas Development Federal and state agencies reported facing several challenges in regulating oil and gas development from unconventional reservoirs. Specifically, EPA officials reported that their ability to conduct inspection and enforcement activities and limited legal authorities are challenges. 
In addition, BLM and state officials reported that hiring and retaining staff and educating the public are challenges. Conducting Inspection and Enforcement Activities Officials at EPA reported that conducting inspection and enforcement activities for oil and gas development from unconventional reservoirs is challenging due to limited information, as well as the dispersed nature of the industry and the rapid pace of development. More specifically, according to EPA headquarters officials, enforcement efforts can be hindered by a lack of information in a number of areas. For example, in cases of alleged groundwater contamination, EPA would need to link changes in groundwater quality to oil and gas activities before taking enforcement actions. However, EPA officials said that often no baseline data exist on the quality of the groundwater prior to oil and gas development. These officials also said that linking groundwater contamination to a specific activity may be difficult even in cases where baseline data are available because of the variability and complexity of geological formations. As discussed earlier in this report, in 2005, the Energy Policy Act amended SDWA to specifically exempt hydraulic fracturing from the UIC program, unless diesel fuel is used in the hydraulic fracturing process. However, EPA officials said that enforcing this provision is difficult because the agency does not know which operators are using diesel. Similarly, with respect to CWA, EPA officials said it is difficult to assess operators’ compliance with the SPCC program, which establishes spill prevention and response planning requirements in accordance with CWA, because EPA does not know the universe of operators with tanks subject to the SPCC rule. In addition, related to CAA, EPA headquarters officials said that it would be difficult for EPA to find oil and gas wells that are subject to but noncompliant with NESHAPs because EPA does not have information on the universe of oil and gas well sites with equipment that contributes significantly to air emissions. 
Also, according to EPA Region 8 officials, these requirements are “self-implementing,” and EPA would only receive notice from a facility that identifies itself as subject to the rules. Several EPA officials also mentioned that the dispersed nature of the industry and the rapid pace of development make conducting inspections and enforcement activities difficult. For example, officials in EPA Region 5 said that it is a challenge to locate the large number of new well sites across Ohio and to get inspectors out to these sites because EPA generally does not receive information about new wells or their location. EPA headquarters officials also mentioned that many oil and gas production sites are not continuously staffed, so EPA needs to contact operators and ensure that someone will be present before visiting a site to conduct an inspection. Officials in EPA Region 6 said that the dispersed nature of the industry, the high level of oil and gas development in the Region, and the cost of travel have made it difficult to conduct enforcement activities in their Region. EPA officials in headquarters said that SDWA is a difficult statute to enforce because of the variation across states. Specifically, SDWA authorizes EPA to approve, for states that elect to assume this responsibility, individual states’ programs as alternatives to the federal UIC Class II regulatory program. As a result, EPA’s enforcement actions have to be specific to each state’s program, which increases the complexity for EPA. In addition, SDWA requires that EPA approve each state’s UIC program by regulation rather than through an administrative process, and many of the federal regulations for state UIC programs are out of date. EPA officials said that this has hindered enforcement efforts, and some cases have been abandoned because EPA can only enforce those aspects of state UIC regulations that have been approved by federal regulation. 
Limited Legal Authorities EPA officials also reported that the scope of their legal authorities for regulating oil and gas development is a challenge. For example, EPA officials in headquarters and Regional offices told us that the exclusion of exploration and production waste from hazardous waste regulations under RCRA significantly limits EPA's role in regulating these wastes. If a hazardous waste permit were required, EPA would obtain information on the location of well sites, how much hazardous waste is generated at each site, and how the waste is disposed of; however, operators are not required to obtain hazardous waste permits for oil and gas exploration and production wastes, limiting EPA's role. As discussed earlier in this report, EPA is currently considering a petition to revisit the 1988 determination not to regulate these wastes as hazardous but, according to officials, has no specific time frame for responding. In addition, as we described earlier in this report, officials in Region 8 noted that EPA cannot use either its CERCLA or CWA emergency response authority to respond to spills of oil if there is no threat to U.S. navigable waters or adjoining shorelines because those statutory authorities do not extend to such situations. Hiring and Retaining Staff Officials at BLM, the Forest Service, and state agencies reported challenges hiring and retaining staff. For example, BLM officials in North Dakota said recruiting is a challenge because the BLM pay scale is relatively low compared with the current cost of living near the oil fields in the Bakken formation. Similarly, BLM officials in North Dakota and headquarters both said that retaining employees is difficult because qualified staff are frequently offered more money for private sector positions within the oil and gas industry. 
BLM officials in Wyoming told us that their challenges related to hiring and retaining staff have made it difficult for the agency to keep up with the large number of permit requests and meet certain inspection requirements. We previously reported that BLM has encountered persistent problems in hiring, training, and retaining sufficient staff to meet its oversight and management responsibilities for oil and gas operations on federal lands. For example, in March 2010, we reported that BLM experienced high turnover rates in key oil and gas inspection and engineering positions responsible for production verification activities. We made a number of recommendations to address this and other issues, and the agency agreed, but we reported in 2011 that the human capital issues we identified with BLM's management of onshore oil and gas persisted. State oil and gas regulators in two of the six states we reviewed, North Dakota and Texas, also reported challenges with employees leaving their agencies for higher-paying jobs in the private sector. Officials from the North Dakota Industrial Commission, which regulates oil and gas development, said they have partially mitigated this challenge by removing state geologists and engineers from the traditional state pay scale and offering signing and retention bonuses. In addition, state environmental regulators in three of the six states, North Dakota, Pennsylvania, and Wyoming, also mentioned challenges related to hiring or retaining staff. For example, air regulators in the Wyoming Department of Environmental Quality said that retaining qualified staff is challenging, as staff leave for higher-paying private sector positions. These officials said that 6 of their 22 air permit-writing positions were vacant as of June 2012. State regulators in Colorado and Ohio did not report facing this challenge. 
In addition, FWS officials reported that they have inadequate staffing for oil and gas development issues and noted that additional regional and field positions could help FWS implement a more comprehensive oil and gas program. Public Education BLM and state officials reported that providing information and education to the public is a challenge. Specifically, BLM headquarters officials mentioned that hydraulic fracturing has attracted the interest of the public and that BLM has been fielding many information requests about its use in oil and gas development. In addition, officials in five of the six states, Colorado, Ohio, Pennsylvania, Texas, and Wyoming, reported challenges related to public education. For example, regulators in Ohio said that their agency has conducted more public outreach in the last year than in the past 20 years and that, in response to this public interest in shale drilling and hydraulic fracturing, they will be adding more communications staff. Similarly, oil and gas development is moving into areas of Colorado that are not accustomed to such development, and state officials in both the Department of Public Health and Environment and the Oil and Gas Conservation Commission said that they have spent considerable time providing the public with information on topics including hydraulic fracturing. State regulators in Wyoming said that educating the public has been a challenge because coalbed methane and tight sandstone development in Wyoming is very different from, for example, shale gas development in Pennsylvania, but the media do not always make this distinction clear. State regulators in North Dakota did not report public education as a challenge. Agency Comments and Our Evaluation We provided a draft of this report to EPA and to the Departments of Agriculture and the Interior for review and comment. 
The Departments of Agriculture and Interior provided written comments on the draft, which are summarized below and appear in their entirety in appendixes XI and XII, respectively. In addition, both Departments and EPA provided technical comments, which we incorporated as appropriate. In its written comments, the Department of Agriculture agreed with our findings and noted that the Forest Service also faces challenges hiring and retaining qualified staff. In response, we added this information to the report. In its written comments, the Department of the Interior provided additional clarifying information on its efforts concerning BLM’s proposed rule on hydraulic fracturing and steps BLM is taking to hire and retain skilled technical staff. In response, we included additional information in the report about BLM’s proposed rule on hydraulic fracturing. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the EPA Administrator, the Secretaries of Agriculture and the Interior, the Director of the Bureau of Land Management, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XIII. 
Appendix I: Objectives, Scope, and Methodology To identify federal and state environmental and public health requirements governing onshore oil and gas development from unconventional reservoirs, we analyzed federal and state laws, regulations, and guidance, as well as reports on federal and state requirements. We defined unconventional reservoirs as including shale gas deposits, shale oil, coalbed methane, and tight sandstone formations. We focused our analysis on requirements that apply to activities on the well pad and wastes or emissions generated at the well pad rather than on downstream infrastructure such as pipelines or refineries. In particular, we identified and reviewed eight key federal environmental and public health laws, specifically the Safe Drinking Water Act; Clean Water Act; Clean Air Act; Resource Conservation and Recovery Act; Comprehensive Environmental Response, Compensation, and Liability Act; Emergency Planning and Community Right-to-Know Act; Toxic Substances Control Act; and Federal Insecticide, Fungicide, and Rodenticide Act. We also reviewed corresponding regulations, such as the Environmental Protection Agency's (EPA) New Source Performance Standards and National Emission Standards for Hazardous Air Pollutants for the Oil and Gas Industry, and guidance, such as EPA's Guidance for Implementation of the General Duty Clause of the Clean Air Act. To identify state requirements, we identified and reviewed laws and regulations in a nonprobability sample of six selected states: Colorado, North Dakota, Ohio, Pennsylvania, Texas, and Wyoming. We selected states with current unconventional oil or gas development and large reservoirs of unconventional oil or gas. In addition, we ensured that the selected states included a variety of types of unconventional reservoirs and differing historical experiences with the oil and gas industry, and that some of the selected states have significant oil and gas development on federal lands. 
Because we used a nonprobability sample, the information that we collected from those states cannot be generalized to all states but can provide illustrative examples. To complement our analysis of federal and state laws and regulations, we interviewed officials in federal and state agencies to discuss how federal and state requirements apply to the oil and gas industry (see table 7). In particular, we interviewed officials in EPA headquarters and four Regional offices where officials are responsible for implementing and enforcing programs within the six states we selected, including Region 3 for Pennsylvania, Region 5 for Ohio, Region 6 for Texas, and Region 8 for Colorado, North Dakota, and Wyoming. We also interviewed state officials responsible for implementing and enforcing requirements governing the oil and gas industry and environmental or public health requirements in each of the six states we selected. For three of these states—Colorado, North Dakota, and Wyoming—we conducted these interviews in person. We also interviewed officials from the Delaware River Basin Commission—a regional body that manages and regulates certain water resources in four states, including Pennsylvania. We also contacted officials from environmental, public health, and industry organizations to gain their perspectives and to learn about ongoing litigation or petitions that may impact the regulatory framework. We selected environmental organizations that had made public statements about federal or state requirements for oil and gas development and public health organizations representing state and local health officials and communities. The selected organizations are a nonprobability sample, and their responses are not generalizable. In addition, we visited drilling, hydraulic fracturing, and production sites in Pennsylvania and North Dakota and met with company officials to gather information about these processes and how they are regulated at the federal and state levels. 
We selected these companies based on their operations in the six states we selected. To identify additional requirements that apply to unconventional oil and gas development on federal lands, we reviewed laws, such as the National Environmental Policy Act (NEPA), as well as regulations and guidance promulgated by the Bureau of Land Management (BLM), Fish and Wildlife Service (FWS), Forest Service, and National Park Service. We also interviewed officials responsible for overseeing oil and gas development on federal lands, including officials in BLM headquarters and in field offices in the states we selected where there is a significant amount of oil and gas development on federal lands, including Colorado, North Dakota, and Wyoming; and in National Park Service, Forest Service, and FWS headquarters. Oil and gas development may also be subject to tribal or local laws, but we did not include an analysis of these laws in the scope of our review. To determine challenges that federal and state agencies face in regulating oil and gas development from unconventional reservoirs, we reviewed several reports conducted by environmental and public health organizations, industry, academic institutions, and government agencies that provided perspectives on federal and state regulations and associated challenges. We also collected testimonial evidence, as described above, from knowledgeable federal and state officials, as well as industry, environmental, and public health organizations. We conducted this performance audit from November 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Key Requirements and Authorities under the Safe Drinking Water Act The Safe Drinking Water Act (SDWA or the Act) was originally passed by Congress in 1974 to protect public health by ensuring a safe drinking water supply. Under the act, EPA is authorized to set standards for certain naturally occurring and man-made contaminants in public drinking water systems, among other things. Key aspects of SDWA for unconventional oil and gas development include provisions regarding underground injection and EPA's imminent and substantial endangerment authority. Underground Injection Control Program SDWA also regulates the placement of wastewater and other fluids underground through the Underground Injection Control (UIC) program. This program provides safeguards to ensure that wastewater or any other fluid injected underground does not endanger underground sources of drinking water; these sources are defined by regulation as an aquifer or its portion (1) which supplies any public water system, or which contains a sufficient quantity of groundwater to supply a public water system and either currently supplies drinking water for human consumption or contains fewer than 10,000 mg/l total dissolved solids; and (2) which is not an exempted aquifer. Thus, the program is intended to protect not only those aquifers (or portions thereof) that are currently used for drinking water, but also those that possess certain physical characteristics indicating they may be viable future drinking water sources. EPA regulations establish criteria for exempting aquifers. In particular, the regulations establish that the criterion that an aquifer "cannot now and will not in the future serve as a source of drinking water" may be met by demonstrating that the aquifer is mineral, hydrocarbon, or geothermal energy producing, or is demonstrated by a permit applicant as having commercially producible minerals or hydrocarbons. 
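The regulatory definition of an underground source of drinking water is essentially a two-prong test. As a rough illustration only (a hypothetical helper written for this discussion, not EPA code or any agency tool), the logic can be sketched as:

```python
# Illustrative sketch of the two-prong USDW definition described above.
# The function name and parameters are hypothetical; the thresholds and
# structure follow the regulatory definition quoted in the text.
def is_usdw(supplies_public_system: bool,
            sufficient_quantity_for_public_system: bool,
            currently_supplies_drinking_water: bool,
            tds_mg_per_l: float,
            is_exempted: bool) -> bool:
    """Return True if the aquifer (or portion) would qualify as a USDW."""
    # Prong 1: the aquifer supplies a public water system, OR it could
    # supply one AND either currently supplies drinking water or contains
    # fewer than 10,000 mg/l total dissolved solids.
    prong_one = supplies_public_system or (
        sufficient_quantity_for_public_system
        and (currently_supplies_drinking_water or tds_mg_per_l < 10_000)
    )
    # Prong 2: the aquifer must not have been exempted.
    return prong_one and not is_exempted

# A highly saline aquifer (35,000 mg/l TDS) serving no system is not a USDW.
print(is_usdw(False, True, False, 35_000, False))  # False
```

Note how the test captures the point made in the text: an aquifer can qualify on physical characteristics (quantity plus low salinity) even if no one currently drinks from it, which is why the program protects viable future sources as well as active ones.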
States or EPA typically initially identified exempt aquifers when UIC programs were established, and according to EPA, states may have added exempt aquifers since then. While EPA has the information from the initial applications, the agency does not have complete information for the additional exemptions, although under EPA regulations certain of these subsequent exemptions are considered program revisions and must be approved by EPA. EPA is currently collecting information about the location of all exempted aquifers, and an official estimated that there are 1,000-2,000 such designations (including portions of aquifers). There are six classes or categories of wells regulated through the UIC program. Class II wells are for the management of fluids associated with oil and gas production, and they include wells used to dispose of oil and gas wastewater and those used to enhance oil and gas production. Under SDWA, a state may assume primary responsibility for permitting, monitoring, and enforcement for UIC wells within the state. SDWA § 1422(b)(2), 42 U.S.C. § 300h-1(b)(2) (2012); see also SDWA §§ 1421(b)(1), 1422(b)(1), (3), 42 U.S.C. §§ 300h(b)(1), 300h-1(b)(1), (b)(3) (2012) (establishing requirements and responsibilities for states with primacy). Generally, to be approved as the implementing authority (primacy), state programs must be at least as stringent as the federal program and show that their regulations contain effective minimum requirements for each of the well classes for which primacy is sought. Alternately, SDWA section 1425 provides that to obtain this authority over Class II wells only, a state with an existing oil and gas program may, instead of meeting and adopting the applicable federal regulations, demonstrate that its program is effective in preventing endangerment to underground sources of drinking water. With respect to the six states in this review, Texas, North Dakota, Colorado, Wyoming, and Ohio have each been granted primacy for Class II wells under the alternative provisions (SDWA section 1425). 
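As a quick summary of these arrangements (a hypothetical lookup built only from facts stated in this report; as noted below, EPA directly implements the UIC program in Pennsylvania, the sixth state reviewed):

```python
# Illustrative data structure, not an official dataset: Class II UIC
# implementation for the six states in this review, per the report.
CLASS_II_IMPLEMENTATION = {
    "Texas": "state primacy (SDWA sec. 1425)",
    "North Dakota": "state primacy (SDWA sec. 1425)",
    "Colorado": "state primacy (SDWA sec. 1425)",
    "Wyoming": "state primacy (SDWA sec. 1425)",
    "Ohio": "state primacy (SDWA sec. 1425)",
    "Pennsylvania": "EPA direct implementation",
}

def class_ii_authority(state: str) -> str:
    """Return which authority runs the Class II UIC program in a state."""
    return CLASS_II_IMPLEMENTATION[state]

print(class_ii_authority("Pennsylvania"))  # EPA direct implementation
```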
EPA directly implements the entire UIC program in Pennsylvania. Class II wells include saltwater (brine) disposal wells, enhanced recovery wells, and hydrocarbon storage wells. These wells are common, particularly in states with historical oil and gas activity. EPA officials estimate there are approximately 151,000 Class II UIC wells in operation in the United States; about 80 percent of these wells are for enhanced recovery, about 20 percent are for disposal, and there are approximately 100 wells for hydrocarbon storage. In Pennsylvania, the one state in our review in which EPA directly implements the Class II program, EPA Region 3 officials stated that there are five active Class II disposal wells. Recently, Region 3 issued permits for two Class II disposal wells in Pennsylvania, which were appealed. On appeal, the Environmental Appeals Board remanded the permits back to EPA for further consideration, finding that the Region failed to clearly articulate its regulatory obligations or compile a record sufficient to assure the public that the Region relied on accurate and appropriate data in satisfying its obligations to account for and consider all drinking water wells within the area of review of the injection wells. The Environmental Appeals Board denied all other claims against EPA. Under the remand, EPA may take further action consistent with the decision, which could include such actions as additions or revisions to the record and reconsideration of the permits. With respect to applications, according to Region 3 officials, until recently EPA did not receive many applications for new Class II brine disposal wells in Pennsylvania. EPA officials said that they have received five permit applications for such wells in the last 4 months and expect continued interest in the future. Class II UIC Requirements Under SDWA, UIC programs are to prohibit underground injection, other than into a well that is authorized by rule or permitted. 
Class II UIC wells must meet requirements contained in either EPA regulations or relevant state regulations. Federal regulations for Class II wells include construction, operating, monitoring and testing, reporting, and closure requirements. For example, one requirement of the federal regulations is that all preexisting wells located in the area of review that were drilled into the same formation as the proposed injection well must be identified. For such wells that are improperly sealed, completed, or abandoned, the operator must also submit a plan of actions necessary to prevent movement of fluid into underground sources of drinking water, known as "corrective actions," such as plugging, replugging, or operational pressure limits, which are considered in permit review. Permits may be conditioned upon a compliance schedule for such corrective actions. According to EPA, in Pennsylvania many old wells have had to be replugged to ensure they cannot present a potential pathway for migration. Regarding seismicity concerns, the federal regulations require applicants for Class II UIC wells to identify faults if known or suspected in the area of review. In addition, there is a general requirement that a well must be sited to inject into a formation that is separated from any protected aquifer by a confining zone that is free of known open faults or fractures within the area of review. 40 C.F.R. §§ 146.24, 146.24(a)(2) (2012). In a permit process, EPA (in direct implementation states) or the state can require additional information (including geology) to ensure protection of underground sources of drinking water. For example, Region 3 officials said the Region routinely determines whether there is the potential for fluid movement out of the injection zone via faults and fractures, as well as abandoned wells, by calculating a zone of endangering influence around the injection operation. 
Under the general standard, if a proposed or ongoing injection were, due to seismicity, believed to endanger underground sources of drinking water, EPA or the state could act, as the burden is on the applicant to show the injection well will not endanger such sources. Officials said that if a seismic event occurs along a fault line that was not identified or known at the time of the UIC permit approval, EPA (in direct implementation states) or the state can go back to the well owner or operator and ask for additional information, which the owner or operator would be obligated to provide. For additional information on the Class II UIC requirements applicable under EPA's program in Pennsylvania, see appendix IX. Class II UIC Programs and Hydraulic Fracturing Historically, UIC programs did not include hydraulic fracturing injections as among those subject to their requirements. In 1994, in light of concerns that hydraulic fracturing of coalbed methane wells threatened drinking water, the Legal Environmental Assistance Foundation petitioned EPA to withdraw its approval of Alabama's Class II UIC program. EPA denied the petition, but on appeal, the United States Court of Appeals for the Eleventh Circuit held that the definition of underground injection included hydraulic fracturing and ordered EPA to reconsider the issue. Subsequently, Alabama revised its program to include injection of hydraulic fracturing fluids, and EPA approved it pursuant to SDWA section 1425 in 2000. The Legal Environmental Assistance Foundation appealed the approval and, in 2001, the Eleventh Circuit partially remanded the approval, directing EPA to regulate hydraulic fracturing as Class II UIC wells rather than as a Class II-like activity. Alabama amended its regulations in 2001 and 2003. 
EPA issued a determination in 2004 addressing the question on remand and found that the hydraulic fracturing portion of Alabama’s UIC program relating to coalbed methane production, which was previously approved under the alternative effectiveness provision, complied with the requirements for Class II UIC wells. EPA initiated a study in 2000 to further examine the issue of fracturing in coalbed methane in areas of underground sources of drinking water. EPA officials said the study showed diesel fuel was the primary risk. Subsequently, in 2003, EPA entered into a memorandum of agreement with three major fracturing service companies in which the companies voluntarily agreed to eliminate diesel fuel in hydraulic fracturing fluids injected into coalbed methane production wells in underground sources of drinking water. According to EPA officials, the agreement is still in effect insofar as the agency has not received any termination notices. EPA officials did not know of any permits issued by Alabama, or any other state, for hydraulic fracturing injections during this time frame. EPA also did not modify its direct implementation of Class II UIC programs to expressly include hydraulic fracturing. On December 7, 2004, EPA’s Assistant Administrator for Water responded to a congressional request for information on EPA’s actions on this issue. The letter summarizes EPA’s study findings—that the potential threat to underground sources of drinking water posed by hydraulic fracturing of coalbed methane wells is low, but there is a potential threat through the use of diesel fuel as a constituent of fracturing fluids where coalbeds are colocated with an underground source of drinking water. Pub. L. No. 109–58 § 322, 119 Stat. 594 (2005) (modifying SDWA § 1421(d)(1), 42 U.S.C. § 300h(d)(1) (2012)). 
In 2005, the Energy Policy Act amended SDWA to provide that the underground injection of fluids other than diesel fuel in connection with hydraulic fracturing is not subject to federal UIC regulations, including both EPA direct implementation requirements and federal minimum requirements for state programs. The provision, however, did not exempt injection of diesel fuels in hydraulic fracturing from UIC programs. EPA has prepared a draft guidance document to assist with permitting of hydraulic fracturing using diesel fuels under SDWA UIC Class II; a public comment period for this draft guidance closed in August 2012. EPA explained that the guidance does not substitute for UIC Class II regulations; rather, it focuses on specific topics useful for tailoring Class II requirements to the unique attributes of hydraulic fracturing when diesel fuels are used. EPA's draft guidance is applicable to any oil and gas wells using diesel in hydraulic fracturing (not just coalbed methane wells). The draft guidance provides recommendations related to permit applications, area of review (for other nearby wells), well construction, permit duration, and well closure. The guidance states that it does not address state UIC programs, although states may find it useful. EPA officials told us that they recently identified wells for which publicly available data suggest diesel was used in hydraulic fracturing. EPA officials stated the agency also has some information on diesel use in hydraulic fracturing of shale formations from a 2011 congressional investigation. EPA officials said there are no EPA-issued permits authorizing diesel to be used in hydraulic fracturing, and they believe no applications for such permits have been submitted to EPA to date. EPA officials also said that they were not aware of any state UIC programs that had issued such permits. Enforcement Generally, EPA is authorized to enforce any applicable requirement of a federal or state UIC program as promulgated in 40 C.F.R. pt. 
147, including Class II UIC programs approved under the alternative provision. However, according to officials, EPA has not promulgated all of the states' modifications to UIC programs, and the federal regulations are out of date, hindering EPA's ability to directly enforce some state program provisions. EPA may issue administrative orders or, with the Department of Justice, initiate a civil action when a person violates any requirement of an applicable UIC program. Where a state has primacy, EPA must first notify the state and may act after 30 days if the state has not commenced an appropriate enforcement action. SDWA also provides EPA with authority to access records, inspect facilities, and require the provision of information. Specifically, EPA has authority, for the purpose of determining compliance, to enter any facility or property of any person subject to an applicable UIC program, including inspection of records, files, papers, processes, and controls. Under EPA's UIC program enforcement authorities, EPA has issued administrative compliance orders and administrative penalty orders relating to SDWA UIC Class II wells. According to officials, most cases are administrative and handled at the Regional level. Officials said that there were more than 200 administrative orders related to the UIC program from 2004 through 2008 and that it is likely that a majority of these were related to Class II wells. For example, EPA Region 3 signed a consent agreement in Venango County, Pennsylvania, where injections of produced water were made into abandoned wells not permitted under the UIC program. In another case, Region 3 told us it has issued an administrative order against an operator for failure to conduct mechanical integrity tests. According to EPA, the order requires the operator to plug many of these wells and to bring the wells it plans to continue operating into compliance with financial responsibility requirements. 
Region 3 also took a penalty action against an operator for failure to report a mechanical integrity failure and continued operation after the failure. According to officials, EPA was able to confirm during well rework that there was no fluid movement outside the well's casing and no endangerment to an aquifer. Imminent and Substantial Endangerment Authorities While SDWA generally does not directly regulate land use activities that may pose risk to drinking water supplies, SDWA gives EPA authority to issue imminent and substantial endangerment orders or take other actions deemed necessary "upon receipt of information that a contaminant which is present in or is likely to enter a public water system or an underground source of drinking water…which may present an imminent and substantial endangerment to the health of persons, [and that] appropriate State and local authorities have not acted to protect the health of such persons." As noted above, the term "underground source of drinking water" includes not only active water supplies but also aquifers (or portions thereof) with certain physical characteristics. EPA has used this imminent and substantial endangerment authority in several incidents where oil or gas wells have been alleged to contaminate drinking water. For example, EPA Region 8 has conducted a long-term investigation and monitoring of groundwater contamination from an oilfield in Poplar, Montana, which affected a water supply serving Poplar as well as the Fort Peck Indian Reservation. EPA determined that there are several plumes of produced water (brine) in the East Poplar aquifer, which supplies private and public drinking water wells. Several pathways of contamination have been identified, including unlined pits, spills, and a leaking plugged oil well. EPA issued a SDWA imminent and substantial endangerment order in 2010 to three companies operating wells in the oilfield, each of which challenged the order in federal court. 
Following mediation, EPA and the parties entered an administrative order on consent in which the parties agreed to monitor the public drinking water supply for specified parameters and, if certain triggers are met or exceeded, to take actions to ensure the public water system meets water quality standards and pay reimbursement costs to the public water system. In another case, on December 7, 2010, EPA issued an administrative order to a well operator in Texas alleging methane contamination affecting private wells and directly related to the operator's oil and gas production facilities. EPA subsequently filed a complaint in U.S. District Court seeking injunctive relief to enforce the order's requirements and civil penalties for the operator's noncompliance with the order. A few days later, the operator filed a petition for review of the order with the Fifth Circuit Court of Appeals. The operator's position was that the order was not a final agency action, that EPA had the burden of proving its claim in the district court enforcement action, and that enforcement of the order would violate due process. On March 29, 2012, EPA withdrew its administrative order, and the parties moved for voluntary dismissal of both cases. In a letter to EPA, the operator agreed to conduct sampling of 20 private water wells for 1 year. Appendix III: Key Requirements and Authorities under the Clean Water Act Under the Clean Water Act (CWA), EPA regulates discharges of pollutants to waters of the United States; for the purpose of this document, we generally refer to such waters, including jurisdictional rivers, streams, wetlands, and other waters, as surface waters. Discharges may include wastewater, including produced water, and stormwater. In addition, together with the U.S. Army Corps of Engineers, EPA regulates the discharge of dredged or fill material into these waters. 
CWA section 311 and the Oil Pollution Act establish, in relevant part, requirements for the prevention of, preparedness for, and response to oil discharges at certain facilities, including, among others, oil drilling and production facilities. These requirements may include Facility Response Plans and Spill Prevention, Control, and Countermeasure (SPCC) Plans. EPA also has certain response and enforcement authorities relevant to these requirements. This review focuses on EPA regulatory activities under these programs relevant to unconventional oil and gas development activities. CWA § 311, 33 U.S.C. § 1321 (2012); Oil Pollution Act of 1990, Pub. L. No. 101-380, 104 Stat. 484 (classified as amended at 33 U.S.C. ch. 40, §§ 2701–2761 (2012) and amending sections of CWA). See also Exec. Order 12,777, 56 Fed. Reg. 54,757 (1991).

National Pollutant Discharge Elimination System Program

CWA is the primary federal law designed to restore and maintain the chemical, physical, and biological integrity of the nation's waters. Among other things, EPA and delegated states administer CWA's National Pollutant Discharge Elimination System (NPDES) program, which limits the types and amounts of pollutants that facilities such as industrial and municipal wastewater treatment plants may discharge into the nation's surface waters. Facilities such as municipal wastewater treatment plants and industrial sites, including oil and gas well sites, need a permit if they have a point source discharge to surface waters. Other than stormwater runoff, as discussed below, discharges of pollutants from an oil or gas well site to surface water require an NPDES permit.
According to EPA, wastewater associated with shale gas extraction can include total dissolved solids, fracturing fluid additives, metals, and naturally occurring radioactive materials, and may be disposed of by transport to publicly owned or other wastewater treatment plants, particularly in some locations where brine disposal wells are unavailable. According to EPA, produced water from coalbed methane gas extraction can be of high salinity and can include pollutants such as chloride, sodium, sulfate, bicarbonate, fluoride, iron, barium, magnesium, ammonia, and arsenic, and some produced water is discharged to surface water in certain geographical areas. EPA and delegated states issue discharge permits that set conditions in accordance with applicable technology-based effluent limitations guidelines that EPA has established for various industrial categories, and may also include water quality-based effluent limitations. When EPA issues effluent limitations guidelines for an industrial category, it may include both limitations for direct dischargers (point sources that introduce pollutants directly into waters of the United States) and pretreatment standards applicable to indirect dischargers (facilities that discharge into publicly owned wastewater treatment plants).

Existing Effluent Limitations Guidelines for Oil and Gas Extraction

EPA has developed effluent limitations guidelines for several subcategories of the oil and gas extraction industry. The guidelines generally apply to facilities engaged in production, field exploration, drilling, well completion, and well treatment in the oil and gas extraction industry. The guidelines applicable to the wells in the scope of this review—essentially, oil and gas wells located on land and drilling unconventional reservoirs—include those for the onshore subcategory, the agricultural and wildlife water use subcategory, and stripper wells. The guidelines for these subcategories were finalized in 1979.
For the onshore and agricultural and wildlife water use subcategories, EPA established effluent limitations guidelines for direct dischargers. EPA did not establish guidelines for stripper wells, explaining that unacceptable economic impacts would occur from use of the then-evaluated technologies, and that the agency could revisit this decision at a later date. EPA officials we spoke with said that they are not aware of any reconsideration of this decision, and that this is not an issue on the current regulatory agenda. EPA also did not establish pretreatment requirements for either the onshore or stripper well subcategories. Existing effluent limitations guidelines do not apply to wastewater discharges from coalbed methane extraction. As EPA subsequently explained, because there was no significant coalbed methane production in 1979, the oil and gas extraction rulemakings did not consider coalbed methane extraction in any of the supporting analyses or records. EPA officials also told us that the coalbed methane process is fundamentally different from traditional oil and gas exploration because of the volume of water that must be removed from the coalbed before production can begin, which they see as a significant distinction for potentially applicable technology. As will be discussed later in this appendix, in October 2011, EPA announced its intention to develop effluent limitations guidelines and standards for wastewater discharges from the coalbed methane industry. When an oil and gas well proposing to discharge pollutants to a surface water is not covered by the existing guidelines, effluent limitations included in the permit are determined on a case-by-case basis by the relevant permitting authority, using best professional judgment as well as any applicable state rules or guidance. EPA officials were not aware of any other unconventional oil and gas extraction processes, besides coalbed methane extraction, that are not covered by the existing effluent limitations guidelines.
Table 8 summarizes the coverage and key requirements of the existing guidelines. For the onshore subcategory, the guideline provides that there shall be no discharge of waste water pollutants into navigable waters from any source associated with production, field exploration, drilling, well completion, or well treatment (i.e., produced water, drilling muds, drill cuttings, and produced sand). Because an NPDES permit is only required where a facility discharges or proposes to discharge a pollutant, and as the technology-based requirement of "no discharge" must be applied in the permit, facilities subject to a "no discharge" limit are not required to apply for such permits. According to the 1976 Federal Register Notice of the Proposed Rule, technologies for managing produced water to achieve no discharge to surface waters were expected to include evaporation ponds or underground injection, either for enhanced recovery of oil or gas in the producing formation or for disposal to a deep formation. Further, EPA indicated that drilling muds, drill cuttings, well treatment wastes, and produced sands would be disposed of by land disposal so as not to reach navigable waterways. The effluent limitations guideline for the Oil and Gas Extraction point source category also established a subcategory for Agricultural and Wildlife Water Use to cover a geographical subset of operations in which the produced water is of good enough quality to be used for wildlife or livestock watering or other agricultural uses and is actually put to such use during periods of discharge. This subcategory guideline is only applicable to facilities located west of the 98th meridian, which extends from approximately the eastern border of North Dakota south through central Texas.
EPA explained in the preamble to this rule that "[i]t is intended as a relatively restrictive subcategorization based on the unique factors of prior usage in the Region, arid conditions and the existence of low salinity, potable water." The subcategory guideline requires "no discharge of waste pollutants into navigable waters from any source (other than produced water) associated with production, field exploration, drilling, well completion, or well treatment (i.e., drilling muds, drill cuttings, and produced sands)," and, for produced water discharges, sets a daily maximum limitation of 35 milligrams per liter of oil and grease. At oil and gas well sites meeting the conditions of location, produced water quality, and use of produced water for wildlife or livestock watering or agricultural use, the produced water may be discharged to waters of the United States. In terms of water quality, the produced water must be "good enough" for this use and must not exceed the daily maximum for oil and grease. States generally issue these permits and are responsible for determining whether the water is of appropriate quality for the beneficial use. EPA is responsible for oversight and has not issued guidance on this topic. EPA has not revised the guidelines, such as to add limitations for additional pollutants, to define "good enough" water quality, or to establish potentially more stringent guidelines. EPA officials stated that the agency has not done so because in certain locations the produced water from oil and gas development is high quality, and because treatment would cost more than injection, thus discouraging the beneficial use of this water. With respect to the subcategories of oil and gas wells covered by the effluent limitations guidelines, discharges are authorized only for oil and gas wells under the Agricultural and Wildlife Water Use and stripper well subcategories.
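Taken together, the conditions of the Agricultural and Wildlife Water Use subcategory amount to a conjunction of simple tests. The Python sketch below is illustrative only: the function and parameter names are invented, and in practice the permitting authority, not a formula, determines whether produced water is of "good enough" quality for the beneficial use.

```python
# Illustrative sketch of the subcategory conditions described above.
# All names are invented for illustration; this is not regulatory text.

OIL_AND_GREASE_DAILY_MAX_MG_L = 35  # daily maximum for produced water discharges

def discharge_authorized(west_of_98th_meridian: bool,
                         water_good_enough_for_use: bool,
                         put_to_beneficial_use: bool,
                         oil_and_grease_mg_l: float) -> bool:
    """All conditions must hold: location west of the 98th meridian,
    water quality adequate for wildlife/livestock/agricultural use,
    actual beneficial use during discharge, and compliance with the
    35 mg/L oil and grease daily maximum."""
    return (west_of_98th_meridian
            and water_good_enough_for_use
            and put_to_beneficial_use
            and oil_and_grease_mg_l <= OIL_AND_GREASE_DAILY_MAX_MG_L)
```

For example, a well east of the 98th meridian fails the location test regardless of water quality, and a qualifying well whose produced water exceeds 35 mg/L oil and grease fails the effluent limit.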
Well sites that discharge wastewater to surface waters must, as noted above, obtain an NPDES permit from the permitting authority (state, tribe, or EPA). The permit is to incorporate the applicable effluent limitations guideline, if one exists, and include effluent monitoring and reporting requirements. Officials also stated that individual permits may contain limits for pollutants other than oil and grease. According to EPA, 349 discharge permits in the Agricultural and Wildlife Water Use subcategory have been issued. Most of these permitted discharges are located in Wyoming, Montana, and Colorado.

Anticipated Rulemaking to Develop Effluent Limitations Guidelines for Oil and Gas Extraction from Coalbed Methane Formations

On October 26, 2011, EPA announced in its Final 2010 Effluent Limitations Program Plan that the agency will develop effluent limitations guidelines and standards for wastewater discharges from the coalbed methane extraction industry. With respect to coalbed methane extraction, as noted above, there is no existing effluent limitations guideline applicable to associated wastewaters. Coalbed methane operations discharging wastewaters to surface waters must nonetheless obtain an NPDES permit, but in the absence of a federal effluent limitations guideline, the permitting authority determines the permit limits based on best professional judgment, as well as any applicable state rules or guidelines. EPA had identified the industry for consideration in prior years and initiated work leading to a detailed study beginning in 2007. The study found that states are primarily issuing individual permits, but they are also issuing some general permits and watershed permits covering one or more wells through a streamlined process.
According to EPA officials, eastern states have generally based effluent limitations in permits on the coal mining effluent limitations guideline, although that guideline does not have limitations for total dissolved solids or chlorides, which are key components of produced water. In the six states reviewed, EPA identified 861 coalbed methane discharge permits, and EPA believes most coalbed methane wastewater discharges have NPDES permits. On this basis, EPA decided to initiate rulemaking. EPA is in the preproposal stage of rulemaking for the coalbed methane effluent guidelines and standards; EPA's website indicates the projected date for publication of the proposed rule is June 2013.

Generally Applicable Pretreatment Standards and POTW Obligations

Facilities discharging industrial wastewater to publicly-owned treatment works (POTW) treatment plants are subject to general pretreatment requirements. In addition, the POTW receiving such industrial wastewaters also has responsibilities related to its own permit and to receiving these wastewaters. EPA has issued general pretreatment requirements applicable to all existing and new indirect dischargers of pollutants (other than of purely domestic, or sanitary, sewage) to a POTW, including any dischargers of wastewaters associated with oil and gas wells. Notably, such discharges are subject to a general requirement that the pollutants not cause pass through or interference with the POTW. For a discharge to cause pass through, it must contribute to a violation of the POTW's NPDES permit; to cause interference, it must contribute to the POTW's noncompliance with its sewage sludge use or disposal requirements. Other standard provisions for indirect discharges involve a prohibition on corrosive discharges. According to EPA officials, in produced water, concerns for corrosivity would be related to high chlorides and sulfides, which could adversely affect pipes and gaskets in the POTW.
EPA has stated that NPDES permits for POTWs typically do not contain effluent limits for some of the pollutants of concern from shale gas wastewater, and that some of these pollutants may be harmful to aquatic life. Specifically, if a POTW did not include information in its NPDES permit application indicating that the POTW would receive oil and gas wastewater, or did not otherwise adequately characterize the incoming wastewater as including certain pollutants of concern, the permit may not include limits for these pollutants, as permits generally only contain limits for those pollutants reasonably expected to be present in the wastewater. Regarding pass through, in which an indirect industrial discharger contributes to a violation of the receiving POTW's NPDES permit, Region 3 officials said that POTW operators had not indicated that NPDES violations were caused by oil and gas wastewaters received at the plant, with the following exception. In 2011, EPA issued an administrative order for compliance and request for information to a POTW in New Castle, Pennsylvania, in relation to permit effluent limit violations. The POTW experienced violations of its suspended solids limits spanning over a year, and attributed the violations to salty wastewater from natural gas production it was receiving. The order required the POTW to take several actions, including to cease accepting oil and gas exploration and production wastewater until completing an evaluation and sampling, and to eliminate and prevent recurrence of the violations. Generally, local governments operating POTWs are responsible for ensuring that indirect dischargers comply with any applicable national pretreatment standards. Certain POTWs are required to develop pretreatment programs, which set out a facility's approach to developing, issuing, and enforcing pretreatment requirements on any indirect dischargers to the particular plant.
EPA or states may be responsible for ensuring these POTWs meet their obligations and for approving the POTW’s pretreatment plans. According to EPA, regardless of pass through or interference, POTWs should not accept indirect discharges of produced water if the wastewaters have different characteristics than those for which the POTW was originally permitted, without providing adequate notice to the permitting authority. If a POTW accepts oil and gas wastewater with characteristics that were not considered at the time of the permit issuance, then the permit may not adequately protect the receiving water from potential violations of water quality standards. In other words, a POTW may meet its permit limits, yet still contribute to a violation of water quality standards, if the permit does not reflect consideration of all the pollutants actually present, and their concentrations, in the incoming wastewater and in the discharge. According to Region 3 officials, EPA has conducted several investigations of whether discharges from POTWs accepting oil and gas wastewater have prevented receiving waters from meeting water quality standards. Region 3 officials stated that a major impediment to this evaluation was that the NPDES permits reviewed did not have effluent limits or monitoring requirements for the pollutants of concern. EPA also stated that it has data from a 2009 Pennsylvania Department of Environmental Protection violation report documenting a fishkill attributed to a spill of diluted produced water in Hopewell Township, PA. In March 2011, EPA’s Office of Water issued to the Regions a set of questions and answers that provide state and federal permitting authorities in the Marcellus shale region with guidance on permitting treatment and disposal of wastewater from shale gas extraction. 
The guidance states that POTWs must provide adequate notice to the permitting authority (EPA or the authorized state) of any new introduction of pollutants into the POTW from an indirect discharger, if the discharger would be subject to NPDES permit requirements were it discharging directly to a surface water, among other things. EPA officials indicated that if a POTW is accepting types of wastewater that were not on its original application, EPA could require a modification of the POTW's NPDES permit, or object to an NPDES renewal that did not address these wastewaters and the facility's ability to treat them. POTWs may also initiate inclusion of these wastewaters in their permits or permit renewals. For example, EPA Region 3 officials stated that four POTW operators in Pennsylvania in the NPDES renewal process have indicated the intent to continue accepting oil and gas wastewater. In addition, in cases with pass through or interference, EPA could require a POTW to develop a pretreatment program. EPA's website indicates the agency plans to supplement the existing Office of Water questions and answers document with additional guidance directed to permitting authorities, pretreatment control authorities, and POTWs, to provide assistance on how to permit POTWs and other centralized wastewater treatment facilities by clarifying existing CWA authorities and obligations. Specifically, EPA plans to issue two guidance documents, one for permit writers and another for POTWs.

Anticipated Rulemaking to Develop Pretreatment Standards for Gas Extraction from Shale Formations

With respect to shale gas extraction, the effluent limitations guideline for the onshore subcategory in effect since 1979 has prohibited direct discharges of associated wastewaters; however, EPA has not established pretreatment standards for indirect discharges of such wastewaters. EPA requested and received comments in recent years on whether to initiate a rulemaking for the industry.
In 2011, EPA announced it will initiate a rulemaking to develop such pretreatment standards. EPA reviewed existing data but did not conduct a study to develop data as it had for coalbed methane. EPA found that pollutants in wastewaters associated with shale gas extraction are not treated by the technologies typically used at POTWs or many centralized treatment facilities. Further, EPA stated that resulting discharges have the potential to affect drinking water supplies and aquatic life. On this basis, EPA concluded that pretreatment standards are appropriate and decided to initiate a rulemaking. EPA intends to conduct a survey, among other things, to collect information on management of produced water to support the rulemaking. Finally, EPA noted that if it obtains information indicating that POTWs are already adequately treating shale gas wastewater, the agency could adjust the rulemaking plans accordingly. For example, the state of Pennsylvania requested that operators of Marcellus shale gas wells stop delivering produced water to POTWs, potentially avoiding the issue. EPA officials stated that other states may nonetheless have a need to utilize POTWs to address these wastewaters and hence could benefit from pretreatment standards. EPA is in the preproposal stage of this rulemaking, and EPA's website indicates the projected date for publication of the proposed rule is 2014. 76 Fed. Reg. at 66,295-96. According to EPA, POTWs typically have permits that do not contain limits for the pollutants of concern in shale gas wastewater; the secondary treatment requirements do not address such pollutants, and it is uncommon for these permits to contain water quality-based limitations for such pollutants. Id. at 66,297. Thus, such wastewaters likely pass through the POTWs receiving them, and the POTWs may not monitor for these pollutants in their effluent. Id. at 66,297.
NPDES for Stormwater Discharges

In 1987, the Water Quality Act amended CWA to establish a specific program for regulating stormwater discharges of pollutants to waters of the United States. Among other things, the amendments clarified EPA authority to require an NPDES permit for discharges of stormwater from several categories, including, in relevant part, those associated with industrial activity and construction activity. EPA subsequently issued regulations that address stormwater discharges from several source categories, including certain industrial activities and construction activities. Generally, industrial sites obtain coverage for stormwater through a general permit, such as the multisector general permit or construction general permit. To do so, the facility operator submits a notice of intent and agrees to meet general permit conditions. For example, conditions for the construction general permit include applicable erosion and sediment control, site stabilization, and pollution prevention requirements. However, the statute exempts from these permit requirements discharges of stormwater runoff from oil and gas exploration, production, processing, or treatment operations or transmission facilities composed entirely of flows which are from conveyances or systems of conveyances (including but not limited to pipes, conduits, ditches, and channels) used for collecting and conveying precipitation runoff and which are not contaminated by contact with, or do not come into contact with, any overburden, raw material, intermediate products, finished product, byproduct, or waste products located on the site of such operations. Interpreting the provision exempting oil and gas facilities, EPA issued regulations requiring permits for contaminated stormwater from oil and gas facilities.
To determine whether a discharge of stormwater from an oil or gas facility is contaminated, EPA regulations establish that if a facility has had a stormwater discharge that resulted in a discharge exceeding an EPA reportable quantity requiring notification under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) or section 311 of CWA, or which contributes to a violation of a water quality standard, the permit requirement is triggered for that facility. Regarding stormwater at oil and gas well sites, officials said it is unlikely there is a permit requirement because it is rare that stormwater would come into contact with raw materials. Nonetheless, if a facility anticipates having a stormwater discharge that includes a reportable quantity of oil or may result in a violation of water quality standards, then the facility would be obligated to apply for an NPDES permit. In applying for the permit, however, the facility has to agree not to discharge pollutants in a reportable quantity and not to discharge pollutants so as to cause a water quality violation. Given this, it is unclear whether facilities would apply for such a permit after they have had a release of a reportable quantity or a release contributing to a water quality violation. Furthermore, according to officials, EPA relies upon operators self-identifying based on reportable quantities or water quality violations. Despite these factors, EPA reviewed available data for the five states in which EPA administers the NPDES program, including Texas, and identified some stormwater general permit notifications for facilities that could be well sites. EPA regulations require permits for stormwater discharges from construction activities, including clearing, grading, and excavating, that result in land disturbance. Beginning in 1990, EPA began regulating stormwater discharges from construction sites disturbing more than 5 acres of land under its Phase I rule.
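The contaminated-stormwater trigger described above reduces to a short decision rule. This Python sketch is a simplification for illustration; the names are invented, and the underlying determinations (whether a CERCLA or CWA section 311 reportable quantity was exceeded, or whether a discharge contributed to a water quality standard violation) are regulatory questions, not boolean inputs.

```python
# Illustrative sketch of the permit trigger for stormwater discharges
# from oil and gas facilities, per the regulations described above.
# Names are invented for illustration.

def stormwater_permit_triggered(exceeded_reportable_quantity: bool,
                                contributes_to_wq_violation: bool) -> bool:
    """The permit requirement is triggered if a past stormwater discharge
    exceeded a CERCLA/CWA section 311 reportable quantity, or if it
    contributes to a violation of a water quality standard."""
    return exceeded_reportable_quantity or contributes_to_wq_violation
```

As the surrounding text notes, this trigger depends on operators self-identifying such events, which is part of why officials consider a permit requirement unlikely in practice.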
Under Phase II rules issued in 1999, EPA regulated stormwater discharges from construction sites disturbing between 1 and 5 acres of land, with initial permit applications due in 2003. With respect to oil and gas well sites, under the statutory provisions and EPA's Phase I stormwater regulations, discharges of stormwater from construction activity would have required a permit only for sites disturbing more than 5 acres and where the stormwater is contaminated by contact with, or comes into contact with, any overburden, raw material, intermediate products, finished product, byproduct, or waste products located on the site of such operations. According to EPA officials, the agency believed few oil and gas sites met these conditions. They further explained that when EPA conducted the Phase II rulemaking for the smaller 1 to 5 acre sites, the agency incorrectly assumed that oil and gas well sites would be smaller than 1 acre and thus did not include oil and gas well sites in its economic analysis of the rule. After the rule's issuance, as EPA became aware that such sites would fall under the rule, and in light of industry objections over the lack of economic analysis, EPA delayed Phase II implementation at oil and gas well sites until 2006. Before implementation of Phase II regulations at oil and gas well sites began, the Energy Policy Act of 2005 was enacted. The Energy Policy Act of 2005 amended CWA to specifically define the activities included in the oil and gas stormwater exemption.
Where the law already exempted from NPDES permit requirements discharges of stormwater from "oil and gas exploration, production, processing, or treatment operations or transmission facilities," the Energy Policy Act of 2005 added a definition of this term as "all field activities or operations associated with exploration, production, processing, or treatment operations, or transmission facilities, including activities necessary to prepare a site for drilling and for the movement and placement of drilling equipment, whether or not such field activities or operations may be considered to be construction activities." In response to these amendments, in 2006, EPA revised a key provision of the regulations concerning oil and gas stormwater discharges. The revision provided that discharges of sediment from oil or gas facility construction activities that contribute to a water quality standard violation would not trigger a permit requirement. This revision was vacated and remanded by the Ninth Circuit in 2008. EPA has not subsequently revised the regulations applicable to stormwater discharges from oil and gas facilities; the pre-2006 regulations remain in effect as to this industry. EPA officials said the agency intends to revise its regulations to address the court's vacatur in an upcoming stormwater rulemaking, with the proposal expected in 2013. According to EPA officials, during construction, oil and gas well sites would have no permit requirement because of the statutory exemption.

NPDES Enforcement

For violations of the law, or applicable regulations or permits, EPA has authority to issue administrative orders requiring compliance, impose administrative penalties, as well as to bring suit and, in conjunction with the Department of Justice, to impose civil penalties. Among other things, EPA can take such actions if a well operator violates the CWA prohibition on unauthorized discharges of pollutants to surface waters.
EPA also has information-gathering and access authority relative to point source owners and operators, which could include certain oil and gas well site operations. For example, EPA has authority to inspect facilities where an effluent source is located. As an example of enforcing the prohibition of unauthorized discharges, in 2011, EPA Region 6 assessed an administrative civil penalty against a company managing an oil production facility in Oklahoma for discharging brine and produced water to a nearby stream. American Petroleum & Environmental Consultants, Inc., Cease and Desist Administrative Order, EPA Docket No. CWA-06-2012-1760 (Dec. 12, 2011). In another case, EPA entered a consent agreement with an oil production company in Colorado for unauthorized discharges of produced water from a multiwell site due to a failed gas eliminator valve in a produced water transportation pipeline. The produced water travelled overland for 333 feet, then entered a stream tributary to an interstate river. The company agreed to pay a civil penalty and to conduct a macroinvertebrate study for the affected watershed.

Imminent and Substantial Endangerment Authorities

Under CWA section 504, EPA may bring suit or take such other action as may be necessary, upon receipt of evidence that a pollution source or combination of sources is presenting an imminent and substantial endangerment to the health of persons or to the livelihood of persons. Unlike the analogous provisions of several other major environmental laws, however, CWA section 504 does not expressly mention administrative orders.

Oil and Hazardous Substances Spill Prevention, Reporting, and Response

Spill Prevention and Response Plans

EPA's Oil Pollution Prevention regulations, promulgated and amended pursuant to CWA and the Oil Pollution Act, impose spill prevention and response planning requirements on oil and gas well sites that meet certain thresholds.
Specifically, the Spill Prevention, Control, and Countermeasure (SPCC) Rule applies to sites with underground and/or aboveground storage tanks above certain thresholds and where oil could be discharged into or upon navigable waters. Onshore oil and gas production facilities, among others, generally are subject to the rule if they (1) have an aggregate oil storage capacity of greater than 1,320 gallons in aboveground oil storage containers or a total oil storage capacity greater than 42,000 gallons in completely buried storage tanks and (2) could reasonably be expected, due to their location, to discharge harmful quantities of oil into or upon U.S. navigable waters or adjoining shorelines. The SPCC rule, as amended, requires each owner or operator of a regulated facility to prepare and implement a plan that describes how the facility is designed, operated, and maintained to prevent oil discharges into or upon U.S. navigable waters and adjoining shorelines. The plan must also include measures to control, contain, clean up, and mitigate the effects of these discharges. EPA regulations specify requirements for SPCC plans for onshore oil drilling and oil production facilities. Onshore drilling facilities must meet the general requirements for such plans, as well as meet specific discharge prevention and containment procedures: (1) position or locate mobile drilling or workover equipment so as to prevent a discharge; (2) provide catchment basins or diversion structures to intercept and contain discharges of fuel, crude oil, or oily drilling fluids; and (3) install a blowout prevention (BOP) assembly and well control system before drilling below any casing string or during workover operations. Oil production facilities are exempt from the SPCC security provisions. 73 Fed. Reg. 74,236 (Dec. 5, 2008).
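The two-part SPCC applicability test for onshore production facilities can be sketched as a capacity prong and a location prong, both of which must be met. The Python below is an illustrative simplification with invented names; the rule's actual definitions of storage capacity, harmful quantities, and navigable waters govern.

```python
# Illustrative sketch of SPCC applicability for onshore oil and gas
# production facilities, per the thresholds described above.
# Names are invented for illustration.

ABOVEGROUND_THRESHOLD_GAL = 1_320   # aggregate aboveground container capacity
BURIED_THRESHOLD_GAL = 42_000       # completely buried storage tank capacity

def spcc_rule_applies(aboveground_capacity_gal: float,
                      buried_capacity_gal: float,
                      could_reach_navigable_waters: bool) -> bool:
    """Both prongs must hold: (1) a storage capacity threshold is
    exceeded, and (2) the facility could reasonably be expected, due to
    its location, to discharge harmful quantities of oil into or upon
    U.S. navigable waters or adjoining shorelines."""
    over_threshold = (aboveground_capacity_gal > ABOVEGROUND_THRESHOLD_GAL
                      or buried_capacity_gal > BURIED_THRESHOLD_GAL)
    return over_threshold and could_reach_navigable_waters
```

This structure reflects the text's observation that, for example, a drill rig's fuel tank alone can satisfy the capacity prong, while a dry gas well storing no condensate typically would not.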
In amending the SPCC rule, EPA explained that it reviewed the spill data for the oil production sector contained in its study of the exploration and production sector. While these data do not characterize the extent of environmental damage caused by oil discharges from small oil production facilities, they demonstrate that the volume of oil discharged from onshore oil production facilities is increasing, and the number of oil discharges on a yearly basis has remained the same, despite a decline in crude oil production. In addition, oil production facilities are often unattended and typically located in remote areas, which potentially increases the risk of environmental damage from an oil discharge. Various development activities at oil and gas well sites involve storage of oil that may trigger the SPCC regulations and their requirements. During initial exploration and drilling, the capacity of the fuel tank of the drill rig is the primary way the SPCC rule could be triggered, and EPA officials said that almost all drill rigs exceed the threshold capacity. During well completion and workover, where hydraulic fracturing is conducted, EPA officials said the capacity of the fuel tanks in the turbines and pumps being used for fracturing typically exceeds the threshold. As to wells in the production phase, they said there would generally be no SPCC requirement at dry gas wells, because they would not be storing condensate on-site. For wet gas and oil production, the size of the condensate or oil tanks on the site would be key to whether SPCC is triggered. Id. at 58,802. See also Considerations for the Regulation of Onshore Oil Exploration and Production Facilities Under the Spill Prevention, Control, and Countermeasure Regulation (40 C.F.R. part 112) (available at www.regulations.gov, document EPA–HQ–OPA–2007–0584–0015). EPA has developed guidance related to SPCC applicability and compliance for oil production, drilling, and workovers.
According to officials, EPA is currently developing a “frequently asked questions” document about the SPCC program and hydraulic fracturing. This document is being developed in response to an influx of questions about how the SPCC rule applies to gas well sites, particularly from companies active in the Marcellus shale. According to EPA officials, while the SPCC program is focused on oil, wet gas wells involve condensates, some of which have traditionally been deemed liquid hydrocarbons and included in the program. In particular, questions have arisen over the lightest condensates (C2 and C4 hydrocarbons), which are usually in gaseous form at standard temperatures and pressures and hence are not included in the SPCC program, whereas storage of heavier condensates, such as C6+ hydrocarbons, has been included (as liquids) in the SPCC program. EPA directly administers the SPCC program. The rule does not require facilities to report to the agency that they are subject to the SPCC rule and, as of 2008, EPA did not know the universe of SPCC-regulated facilities, but the agency was considering developing some data. EPA officials stated that they have significant data but not complete data because of the lack of a registration or submittal requirement. To ensure that facility owners and operators are meeting SPCC requirements, EPA personnel inspect selected regulated facilities to determine their compliance with the regulations. For some facilities, the SPCC compliance date was in November 2011. EPA is working to develop a national database of sites inspected under the SPCC rule. Officials said that the SPCC program’s database includes 120 inspections at oil and gas production facilities for fiscal year 2011, of which 105 had some form of noncompliance, varying in significance from paperwork inconsistencies to more serious violations (though EPA officials were unable to specifically quantify the number of more serious violations).
The Clean Water Act does not provide EPA with the authority to authorize states to implement the program in its place. According to EPA headquarters officials, EPA generally selects facilities for inspection based on spill reports EPA receives through the National Response Center. The Oil Pollution Prevention Regulation also requires an owner or operator of nontransportation onshore facilities that could, because of location, reasonably be expected to cause substantial harm to the environment by discharging oil into or on the navigable waters or shorelines, to submit a facility response plan to the appropriate EPA Regional office. The regulation specifies criteria to be used in determining whether a facility could reasonably be expected to cause substantial harm and hence triggers such a requirement, and it also provides that the EPA Administrator may at any time, upon a determination considering additional factors, require a facility to submit a facility response plan. A facility owner or operator also may maintain a certification that it could not, because of location, reasonably be expected to cause substantial harm by discharging oil into or onto navigable waters or shorelines. Relevant to oil well sites, the initial criterion for requiring a facility response plan is that the facility has total oil storage of 1 million gallons or more. Where such facilities meet at least one of four other criteria—such as lacking secondary containment or being located at distances that could injure fish and wildlife—a facility response plan is required. The plan is to provide, in essence, an emergency response action plan for the worst-case discharge and other relevant information. According to EPA officials, onshore oil well sites would typically not exceed the threshold criteria triggering the requirement for a facility response plan.
Officials said there may be a small number of sites where very large or centralized operations with a number of wells connected to central piping and/or storage might trigger a facility response plan.

Spill Prohibition and Reporting

CWA established the policy of the United States that there should be no discharges of oil or hazardous substances into or upon U.S. navigable waters or onto adjoining shorelines, among other resources, and generally prohibited such discharges. Relevant provisions require reporting of certain discharges of oil or a hazardous substance to these waters. EPA has issued regulations designating those hazardous substances that present an imminent and substantial danger to the public health or welfare when discharged to U.S. navigable waters or onto adjoining shorelines in any quantity. EPA also has determined, in regulations, the quantities of oil and other hazardous substances of which the discharge to U.S. navigable waters or onto adjoining shorelines may be harmful to the public health or welfare or the environment. CWA, in conjunction with these regulations, requires facilities to report to the National Response Center certain unpermitted releases of oil or hazardous substances to surface waters. The National Response Center subsequently sends reports to EPA Regions and headquarters. With respect to oil, discharges of oil must be reported if they “(c)ause a film or sheen upon or discoloration of the surface of the water or adjoining shorelines or cause a sludge or emulsion to be deposited beneath the surface of the water or upon adjoining shorelines,” or if they violate applicable water quality standards. With respect to hazardous substances, EPA has determined threshold quantities—those which may be harmful to the public health or welfare or the environment—known as reportable quantities.
Spill Response Authority

EPA, as well as other relevant federal agencies, has various response authorities to ensure effective and immediate removal of a discharge, and mitigation or prevention of a substantial threat of a discharge, of oil or a hazardous substance to U.S. navigable waters or onto adjoining shorelines. The National Oil and Hazardous Substances Pollution Contingency Plan, issued by EPA by regulation, provides a system to respond to discharges and to contain, disperse, and remove oil and hazardous substances, among other things. For example, according to EPA Region 5, in conjunction with the state of Ohio, the Region has responded to several incidents in which orphan wells were found to be leaking or discharging crude oil into waterways. Under CWA section 311, as required to carry out its purposes including spill prevention and response, EPA also has authority to require the owner or operator of a facility subject to the Oil Pollution Prevention Regulation, among other provisions, to establish and maintain such records; make such reports; install, use, and maintain such monitoring equipment and methods; and provide such other information deemed necessary; EPA also has authority for entry and inspection of such facilities.

Enforcement of SPCC and Spill Prohibition and Reporting Requirements

For violations of the law, or applicable regulations or permits, EPA has authority to issue administrative orders requiring compliance and impose administrative penalties, as well as to bring suit, in conjunction with the Department of Justice, to impose civil penalties. Section 311 also gives EPA the authority to access records and inspect facilities, and the ability to require provision of information, with respect to persons and facilities subject to section 311, including SPCC program requirements. For example, in Region 8, EPA participated in an effort with the U.S.
Fish and Wildlife Service (FWS), states, and tribes, after FWS expressed concerns about migratory birds landing on open pits that contained oil and water, which killed or harmed the birds. The agencies conducted aerial surveys to observe pits. Where apparent problems were identified, relevant federal or state agencies were notified and were to give oil and gas operators an opportunity to correct problems. Ground inspections were then conducted where deemed warranted and, if problematic conditions were found, further follow-up action was taken by EPA or the relevant state or other federal agency. As a result of this effort, 99 sites with violations of SPCC requirements were identified. EPA’s report stated that “Non-compliance with SPCC requirements was more pervasive than anticipated. Although the SPCC program has been the focus of outreach and compliance assistance nationally for more than 25 years, there remains a strong need to communicate its requirements, inspect regulated facilities, and conduct appropriate technical assistance or enforcement to ensure improved compliance.” The report states that, for most SPCC violations, EPA issued a notice of violation and that many notice of violation recipients came into compliance without escalation to formal enforcement, but that some enforcement actions were taken. Region 8 reported identifying 22 sites with documented SPCC violations as a result of subsequent efforts in 2004-2005. Information on the nature or resolution of these violations was not readily available. EPA Region 8, Oil and Gas Environmental Assessment Effort 1996 – 2002, v (2003).

Imminent and Substantial Endangerment Authority for Spills

CWA section 311 provides EPA authority to address certain releases of oil or hazardous substances to U.S. navigable waters and adjoining shorelines.
Specifically, on determination that “there may be an imminent and substantial threat to the public health or welfare of the United States, including fish, shellfish, and wildlife, public and private property, shorelines, beaches, habitat, and other living and nonliving natural resources under the jurisdiction or control of the United States, because of an actual or threatened discharge of oil or a hazardous substance from a vessel or facility” in violation of the prohibition against discharges of oil or hazardous substances to U.S. navigable waters and adjoining shorelines, EPA may bring suit, or may, after notice to the affected state, take any other action under this section, including issuing administrative orders, that may be necessary to protect the public health and welfare.

Appendix IV: Key Requirements and Authorities under the Clean Air Act

Production components may include, but are not limited to, wells and related casing head, tubing head and “Christmas tree” piping, as well as pumps, compressors, heater treaters, separators, storage vessels, pneumatic devices and dehydrators. Production operations also include the well drilling, completion and workover processes and include all the portable non-self-propelled apparatus associated with those operations. In addition, EPA officials have noted that tanks, ponds, and pits are sources of emissions that may be present at well sites. Others have also identified condensate storage tanks and flaring as significant emission sources often associated with gas wells. The key criteria pollutant of concern for oil and gas production is VOCs, as an ozone precursor, and the primary HAPs released by the oil and gas production industry are BTEX (benzene, toluene, ethylbenzene, and xylenes) and n-hexane.
To address stationary sources under CAA, EPA is required to promulgate industry-specific emissions standards such as National Emission Standards for Hazardous Air Pollutants (NESHAP) and New Source Performance Standards (NSPS) for source categories that EPA has listed under the Act. CAA also provides for review of new and modified major sources of emissions under the Prevention of Significant Deterioration and Nonattainment New Source Review programs, typically implemented by states. CAA and EPA regulations require operating permits, known as Title V permits, for certain stationary sources, and establish minimum requirements for state operating permit programs. Each of these programs is described below as it may apply to oil and gas well sites. Mobile sources associated with oil and gas production may include trucks bringing fuel, water, and supplies to the well site; construction vehicles; and truck-mounted pumps and engines. That is, oil and gas wells may be served by a variety of road and nonroad vehicles and engines. EPA regulates emissions from an array of mobile sources by imposing emission limits on such vehicles and engines; these generally applicable regulations are not specific to the oil and gas industry and are not discussed here. Finally, the Act includes provisions addressing accidental releases of dangerous pollutants to the air. Oil and gas wells are unlikely to trigger the planning aspects of these provisions, according to EPA; however, the well sites are subject to the general duty clause, a self-implementing provision of CAA under which operators are responsible for identifying hazards associated with accidental releases and designing and maintaining a safe facility, taking such steps as are necessary to prevent releases. Table 9 summarizes the applicability of key Clean Air Act programs to emission points at oil and gas well sites. These provisions will be discussed in greater detail in this appendix.
National Emission Standards for Hazardous Air Pollutants

Hazardous Air Pollutants

The 1990 CAA amendments significantly expanded the hazardous air pollutants program; they identified 189 specific HAPs to be regulated, required EPA to list categories of sources to be regulated, and established implementation timelines. The list of HAPs includes several potentially found in oil and gas well emissions. In addition to these listed HAPs, EPA and others have identified hydrogen sulfide, which is found in oil and gas well emissions but is not a listed HAP, as hazardous and toxic to humans. EPA has the authority to add to the HAPs list pollutants which may present, through inhalation or other routes of exposure, a threat of adverse human health effects or adverse environmental effects, but not including releases subject to EPA’s regulation under section 112(r)—namely, the accidental release and risk management regulations. The prevention of accidental releases regulation includes accidental releases of hydrogen sulfide. In a 1993 report to Congress, EPA found that the limited data available did not evidence a significant threat to human health or the environment from “routine” emissions of hydrogen sulfide from oil and gas wells. CAA provides a process to petition EPA to modify the HAPs list. On March 30, 2009, the Sierra Club and 21 other environmental and public health organizations and individuals petitioned EPA to list hydrogen sulfide as a HAP under section 112(b). The petitioners asserted that low-level hydrogen sulfide emissions not addressed by the accidental release provisions in section 112(r) are harmful to human health. EPA officials told us they are considering the petition but have no specific timeline for acting upon it.

NESHAPs Overview and Statutory Provisions Restricting Aggregation of Oil and Gas Production Sources

EPA is required to promulgate and periodically revise NESHAPs for source categories the agency has identified.
NESHAPs may include standards for major sources and for area sources, which are any sources not major. Major source NESHAPs are based on the maximum achievable control technology (MACT), while EPA may use a different standard of generally available control technology for area sources. A major source is one that emits or has the potential to emit 10 tons per year or more of a single HAP or 25 tons per year or more of any combination of HAPs. Normally, the determination of a facility’s potential to emit HAPs is based on the total of all activities at a facility, known as aggregation. Under a unique provision of CAA, however, “emissions from any oil or gas exploration or production well (with its associated equipment) and emissions from any pipeline compressor or pump station shall not be aggregated with emissions from other similar units,” to determine whether such units or stations are major sources of air pollution, or for other purposes under section 112 (e.g., the HAPs section). Finally, facilities that do not contain a regulated unit (e.g., glycol dehydrator or covered storage vessel) are not subject to any requirement in the rule, even if they emit HAPs. Regarding the aggregation provisions, EPA officials explained that the agency has historically interpreted the statutory language to prohibit aggregation of HAP emissions from wells and associated equipment, meaning that each well and piece of associated equipment must be evaluated separately for purposes of determining major source status. EPA has defined “associated equipment” in the regulations. Officials said that EPA has not evaluated the significance of the aggregation prohibition and EPA’s interpretation of it, such as its effect on the numbers of facilities that are or are not regulated as major sources and hence subject to MACT controls.
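The practical effect of the aggregation prohibition can be illustrated with a simplified sketch. The emission figures below are hypothetical; the statutory major source thresholds (10 tons per year of a single HAP, 25 tons per year of combined HAPs) are from CAA section 112, and the comparison uses only the single-HAP threshold for simplicity.

```python
# Simplified illustration of the CAA section 112 aggregation prohibition for
# oil and gas wells. Emission figures are hypothetical.

MAJOR_SINGLE_HAP_TPY = 10   # major source threshold for any single HAP (tons/year)

def is_major(hap_tpy_by_unit, aggregate):
    """hap_tpy_by_unit: per-unit emissions of one HAP, in tons per year.

    aggregate=True sums all units (the normal rule); aggregate=False
    evaluates each unit separately (the oil and gas well prohibition).
    """
    totals = [sum(hap_tpy_by_unit)] if aggregate else list(hap_tpy_by_unit)
    return any(t >= MAJOR_SINGLE_HAP_TPY for t in totals)

wells = [4.0, 3.5, 3.0]  # hypothetical benzene emissions from three wells at a site
print(is_major(wells, aggregate=True))   # True: 10.5 tpy combined
print(is_major(wells, aggregate=False))  # False: no single well reaches 10 tpy
```

As the example shows, the same set of wells can be a major source under normal aggregation yet escape major source status when each well is evaluated separately.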
Officials also said that it is likely that the effect of the aggregation provisions on well sites is smaller than its impact on downstream oil and gas production facilities, where equipment tends to be larger and would be more likely to trigger MACT requirements if aggregated. 64 Fed. Reg. at 32,619.

NESHAPs for Oil and Natural Gas Production Facilities

EPA originally promulgated the NESHAPs for the Oil and Natural Gas Production source subcategory in two parts: the standard for major sources was issued in 1999, and the NESHAP for area sources in 2007. In April 2012, EPA promulgated amendments to the NESHAPs for major sources. The NESHAPs for major sources apply to emission points of HAPs located at oil and natural gas production facilities (including wells, gathering stations, and processing plants) that are major sources. Under this rule, in determining whether a well site’s potential to emit HAPs equals or exceeds 10 tons per year (the major source threshold), only emissions from equipment other than wells or “associated equipment” may be aggregated; associated equipment is a defined term, and excludes glycol dehydrators and storage vessels. In other words, emissions from wells are not aggregated; only emissions from glycol dehydrators and storage vessels at a site may be aggregated. Further, the rule exempts facilities exclusively handling and processing “black oil” and small oil and gas production facilities, including well sites, prior to the point of custody transfer. EPA documents do not indicate the extent to which these exemptions have the effect of cancelling MACT requirements that would otherwise apply to oil and gas wells from unconventional deposits. EPA headquarters officials did not know if any oil or gas wells were NESHAP major sources prior to the April 2012 amendments, and EPA officials in each of the four Regions we contacted were unaware of any examples of oil and natural gas wells being regulated as major sources.
EPA officials noted that glycol dehydrators are more likely where there are high pressure gas wells, such as in the Jonah-Pinedale area of Wyoming. EPA officials said that a multiple pad well site in this area would very likely be major for HAPs, except that any federally enforceable standards are first applied to determine the potential emissions, and Wyoming’s presumptive best available control technology standards would likely limit the emissions such that the potential to emit would be reduced to area source levels. Analyses developed for the recent amendments also do not identify if any well sites triggered the major source NESHAPs prior to the amendments, but available data suggest few well sites do so. Large glycol dehydrators are those with an actual annual average natural gas flowrate equal to or greater than 85 thousand standard cubic meters per day and actual annual average benzene emissions equal to or greater than 0.90 Mg/yr. 77 Fed. Reg. 49,490, 49,568-69 (Aug. 16, 2012) (revising 40 C.F.R. §§ 63.761, 63.760(b)). Under the rule, covered units must be equipped with a cover vented through a closed vent system to a control device that recovers or destroys HAPs emissions with an efficiency of 95 percent or greater or, for combustion devices, reduces HAPs emissions to a specified outlet concentration. However, these standards only apply at sites that are deemed major sources, and as outlined above, it appears likely that few well sites reach the key threshold emissions level. The April 2012 amendments added one more emission source to the NESHAP major source rule: small glycol dehydrators. These sources must meet a unit-specific limit for emissions of BTEX that is calculated using a formula in the rule based on the unit’s natural gas throughput and gas composition. Existing dehydrators have 3 years to comply, while new dehydrators must comply upon start-up. Id. at 49,503; see also EPA, Summary of Requirements for Processes and Equipment at Natural Gas Well Sites.
these sources in order to analyze and establish MACT emission standards for this subcategory of storage vessels.” In addition, the April 2012 amendments changed a key definition in the NESHAPs for determining major source status. The effect of this change (i.e., the revision to the definition of “associated equipment”) is that emissions from all storage vessels and all glycol dehydrators now will be counted toward determining whether a facility is a major source under the NESHAP for Oil and Natural Gas Production. EPA documents do not indicate the extent to which the change in definition will result in additional oil and gas wells being subject to the MACT requirement. CAA prohibits EPA from listing oil and gas production wells (with its associated equipment) as a specific “area source” category, unless the area source category is for oil and gas production wells located in any metropolitan statistical area or consolidated metropolitan statistical area with a population in excess of 1 million, and the EPA Administrator determines that emissions of HAPs from such wells present more than a negligible risk of adverse effects to public health. Id. at 49,501, 49,569 (revising 40 C.F.R. § 63.761). EPA defines these control areas with reference to parameters used by the U.S. Census Bureau to identify densely settled areas. See 72 Fed. Reg. 26, 28 (2007). For area sources within these control areas, the rule requires an add-on control meeting an emission limit for benzene. For area sources outside of these control areas, an operational standard is required instead of an add-on control. Area sources are required to notify EPA that they are subject to the rule; additional information, including periodic reports, is required for area sources within a control area. The area source notifications are sent to a specific EPA e-mail box. EPA does not track whether the facilities providing notification are well sites or other components of the oil and natural gas production sector, so it is difficult to determine to what extent oil and gas well sites are subject to the area source NESHAP.
Regarding EPA’s authority to establish an area source category for oil and gas wells in metropolitan statistical areas, if certain conditions are met, officials said that EPA has not considered doing so. They said that they have not analyzed well emissions in relation to location in or outside a metropolitan statistical area, and that if the agency were to consider developing an area source category within metropolitan statistical areas, they would need to conduct a new data collection effort.

Other NESHAPs

In addition, EPA has promulgated other NESHAPs, the applicability of which to oil and gas well sites depends upon the particular equipment—and factors such as capacity or emission rate—used at a well site. Although some published materials suggest several NESHAPs may apply, based on discussions with EPA, the primary NESHAP that officials believe could apply at oil and gas well sites is the Boilers and Process Heaters NESHAP for major sources. The major source rule for boilers and process heaters has an unusual feature in that, to determine applicability of the rule, it references whether or not an oil and gas production facility falls within the major source definition under the NESHAPs for Oil and Gas Production Facilities (subpart HH). If an oil and gas well were a major source under the Oil and Gas NESHAP, then any boilers or process heaters with heat input of 10 million British thermal units (BTU) per hour or more are subject to emission limitation requirements, and any smaller heaters are subject to work practice standards, under the Boiler NESHAP. These requirements differ from those in the NESHAP for Oil and Gas Production Facilities by, among other things, imposing limits for other pollutants, such as particulate matter, hydrogen chloride, mercury, carbon monoxide, and dioxins/furans, depending on the type of unit.
Officials stated that some glycol dehydrators at well sites could be over the trigger heat input and would be subject to the Boiler NESHAP requirements if the oil and gas site were a major source subject to the rule. As noted above, it is not known how many, if any, well sites are major sources. Where a gas well has a compressor, the compressor engine may be subject to standards for stationary engines. EPA did not have available information on the extent to which these engines are present at well sites and, if so, whether they fall under these rules, which are based on equipment and are not specific to the oil and gas industry.

New Source Performance Standards

EPA promulgates NSPS, which are generally applicable to (1) new or reconstructed facilities and (2) facilities that have undergone modification—that is, any physical change in, or change in the method of operation of, a facility which increases the amount of any air pollutant emitted by such source or which results in the emission of any air pollutant not previously emitted. These rules are implemented by EPA or by states through delegation. For the oil and gas production industry, the NSPS primarily regulate VOCs (as an ozone precursor). In 1985, EPA promulgated NSPS for the oil and gas industry focused on natural gas processing plants, but did not include any standards for emissions from preprocessing production activities. In April 2012, EPA promulgated such standards for some production emissions, notably completion and recompletion of certain hydraulically fractured gas wells. In addition, some other generally applicable standards for certain equipment may apply at oil and gas well sites.

April 2012 Amendments to NSPS

In April 2012, EPA promulgated amendments to the NSPS for the Oil and Gas sector, including new standards applicable to the production source category.
The new standards were issued pursuant to a 2010 consent decree that settled a challenge brought by environmental groups over EPA’s failure to conduct required reviews of the existing standards. Following publication of the new rules in August 2012, an industry group petitioned EPA to reconsider certain aspects of the new rules. 40 C.F.R. pt. 60, subparts KKK, LLL. The new standards address completions and recompletions of natural gas wells, with variable implementation dates as described in table 10. These practices are designed to capture emissions from flowback from hydraulically fractured wells and reduce VOC emissions. EPA’s regulatory impact analysis estimated that the rules will apply to about 9,700 new wells per year and to about 1,200 existing wells being recompleted per year. Of these, EPA’s analysis estimates that nearly 9,400 wells will be required to use “green completion” techniques to capture and treat flowback emissions so that the captured natural gas can be sold or otherwise used, while the remainder will use completion combustion. Additionally, to reduce VOC emissions, the April 2012 rule establishes standards including those for, as relevant to gas well sites, gas-driven pneumatic controller devices and storage vessels, subject to thresholds. According to EPA documents, over 13,600 pneumatic controllers will be affected, but it is not clear to what extent these are located at well sites. Similarly, EPA documents estimate that 304 storage vessels annually will trip the threshold of 6 tons per year of VOC and thus be subject to the rule, and EPA officials expect most of these storage vessels will be located at wells. When asked about the potential increased burden of the amended NSPS rules, officials said that it was not clear whether the rule would result in more or fewer CAA-related permits. For example, the applicability of NSPS may trigger a state requirement to get a construction permit or other type of permit.
These permits may be triggered by, among other things, a facility’s “potential to emit,” which is calculated assuming all federally enforceable controls are in place. Officials said that the NSPS, which are federally enforceable requirements, will reduce actual emissions and thus could reduce the number of facilities that trigger the requirement for these state permits. In the new rule, EPA generally exempted covered facilities from the obligation to obtain a Title V operating permit.

Other NSPS

EPA has issued equipment-focused NSPS for certain equipment that may be used at oil and gas well sites. These include NSPS for Volatile Organic Liquid Storage Vessels (Including Petroleum Liquid Storage Vessels). These standards apply to such tanks with a capacity greater than or equal to 75 cubic meters that are used to store volatile organic liquids and that were built, reconstructed, or modified after July 23, 1984. Tanks attached to trucks and other mobile vehicles are excluded. According to EPA officials, while there are tanks at well sites, they are often smaller than the threshold in this rule. Specifically, while the standards apply to tanks greater than 75 cubic meters (about 475 barrels, according to EPA), an individual tank typically found at oil and gas sites is often between 250 and 400 barrels, hence avoiding coverage under this rule. Other NSPS that have been identified as potentially relevant include those for gas turbines and steam generators. EPA officials said, however, that typical activity at well sites is not enough to trigger thresholds for coverage under these rules, either.

New Source Review

CAA New Source Review (NSR) provisions require a source to obtain a permit and undertake other obligations to control its emissions of air pollution prior to construction of a new source or modification of an existing stationary source.
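The unit conversion behind the storage vessel threshold can be checked directly. The short sketch below verifies that 75 cubic meters is roughly EPA's "about 475 barrels" figure (using the standard 42-gallon U.S. oil barrel, about 0.158987 cubic meters) and that a typical 250-400 barrel well-site tank falls below it.

```python
# Checking the unit conversion cited above: the storage vessel NSPS threshold
# is 75 cubic meters, which EPA describes as about 475 barrels.
# 1 U.S. oil barrel = 42 gallons, approximately 0.158987 cubic meters.

M3_PER_BARREL = 0.158987

threshold_barrels = 75 / M3_PER_BARREL
print(round(threshold_barrels))  # roughly 472 barrels, near EPA's "about 475"

# A typical well-site tank of 250-400 barrels falls below the threshold:
typical_tank_barrels = 400
print(typical_tank_barrels < threshold_barrels)  # True
```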
However, NSR only applies if the construction project results in actual emissions or the potential to emit regulated air contaminants at or above certain threshold levels established in the NSR regulations. For a new source, NSR is triggered only if the emissions would cause the source to qualify as major. For an existing major source making a modification, NSR is triggered only if the modification will result in a significant increase in emissions and a significant net emissions increase of that pollutant. Relevant to NSR, the emission profile for oil and gas wells would include hydrogen sulfide and VOCs, among others. In most areas, states implement the NSR permitting programs. The major NSR program is actually composed of the following two separate programs: Nonattainment NSR applies to emissions of specific pollutants from sources located in areas designated as nonattainment for those pollutants because they do not meet the pollutant-specific national ambient air quality standards. Prevention of Significant Deterioration (PSD) applies to emissions of all other regulated pollutants from sources located in attainment areas where such standards are met or in areas unclassifiable for such standards. For PSD, the major source threshold is generally 250 tons per year of any regulated air pollutant. Determining whether a facility is a major source, together with identifying which emissions should be included in doing so, is guided by the same process as for Title V permits, discussed below. While a 1993 EPA report appeared to suggest that most oil and gas extraction wells would not likely be subject to PSD regulations based on the applicability criteria, the specific determination of which emission units, including wells, must be included in determining whether a source is major (source aggregation) involves a case-by-case, fact-specific analysis.
For nonattainment NSR, the major source threshold ranges from 100 tons per year down to 10 tons per year, depending on the severity of the air quality problem where the source is located and the specific pollutant at issue. To be a major source under nonattainment NSR, the source must emit or have a potential to emit above the major source level set for the specific regulated air pollutant (or its precursor) for which the area is designated nonattainment. With respect to nonattainment NSR, EPA officials stated that some large wells in nonattainment zones could be major sources standing alone because of low emission thresholds in certain areas; as noted above, such thresholds could be as low as 10 tons per year in the most severe nonattainment areas, versus 250 tons per year in attainment areas.

Title V Operating Permits

Relevant to oil and gas production, CAA generally requires Title V permits for the operation of: any major source, determined based on the facility's actual emissions or "potential to emit"; any source, including a nonmajor source, subject to a NSPS; any source, including an area source, subject to a NESHAP; and any source required to obtain a PSD or NSR permit, among others. Thus, whether a Title V permit is required depends on whether the source (1) is subject to one of these other requirements, unless EPA has exempted the particular area sources or nonmajor sources from the Title V permit requirement, or (2) meets the emissions thresholds for a major source. Title V permits for a major source must include all applicable requirements for all relevant emission units in the major source. Title V permits for nonmajor sources must include all requirements applicable to the emissions units that caused the source to be subject to the Title V permitting requirements.
Title V permits may need to add monitoring, reporting, or other requirements but generally do not add new emissions control requirements; rather, they consolidate requirements from throughout CAA programs and contain conditions to assure compliance with those requirements. According to EPA officials, the permits help operators and the public understand what the requirements are for compliance with CAA and help assure compliance with those requirements. Title V permits are generally issued by states and, in some instances, by EPA Regional offices. As of August 2012, EPA officials were unaware of any Title V permits issued on the basis of oil and gas well site emissions alone. EPA officials stated that some oil and gas well sites have adopted federally enforceable emissions limits so that the sites do not need a Title V permit, which they would otherwise have triggered. In addition, EPA identified a March 2012 case in which a state environmental agency alleged, among other things, that an oil and gas production site had VOC emissions of over 600 tons per year, which would require a Title V permit. The operator disputed the violations but agreed to submit an application for a Title V permit.

Source Determinations and Aggregation Issues for Title V and NSR

Applicable to both NSR and related determinations for Title V, EPA regulations specify three factors that must be met in source determinations: whether the emissions points are under common control, belong to the same major industrial grouping, and are located on contiguous or adjacent properties. Thus, in contrast to the NESHAPs, in determining whether significance thresholds for emissions are met for purposes of NSR or Title V, EPA and states must aggregate VOC emissions from oil and gas well sites that are both (1) contiguous or adjacent and (2) under common control.
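The three-factor source determination described above is conjunctive: all three factors must be met before emission points are aggregated into one source. A minimal sketch, with a hypothetical data model (real determinations are case-by-case and fact-specific):

```python
# Sketch of the three regulatory factors for aggregating emission points
# into a single "source" for Title V/NSR purposes: common control, same
# major industrial grouping, and contiguous or adjacent location.
from dataclasses import dataclass

@dataclass
class EmissionPoint:
    operator: str            # proxy for "common control" (simplification)
    industrial_group: str    # major industrial grouping, e.g. "13" (oil and gas)
    adjacent: bool           # contiguous/adjacent to the candidate source

def may_aggregate(a: EmissionPoint, b: EmissionPoint) -> bool:
    """All three factors must be met for two points to be one source."""
    return (a.operator == b.operator
            and a.industrial_group == b.industrial_group
            and a.adjacent and b.adjacent)

well = EmissionPoint("AcmeOil", "13", adjacent=True)
compressor = EmissionPoint("AcmeOil", "13", adjacent=True)
other_well = EmissionPoint("OtherCo", "13", adjacent=True)

print(may_aggregate(well, compressor))  # all three factors met
print(may_aggregate(well, other_well))  # fails: no common control
```

The operator names and the equality test for "common control" are illustrative only; in practice, each factor (especially adjacency and control) is itself the subject of interpretation, as the disputes discussed in the text show.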
To determine whether a source meets the emissions thresholds for a Title V or NSR major source designation, EPA applies these regulatory criteria to evaluate whether to aggregate oil and gas production wells with other emission sources. Specifically, permitting authorities (EPA or authorized state or local authorities) have in particular matters, on a case-by-case basis, aggregated emissions from facilities to determine major sources for purposes of Title V operating permits or NSR. Determining when emissions must be aggregated is a fact-based inquiry that is made by permitting authorities on a case-by-case basis. While authorized states are typically responsible for making source determinations, EPA headquarters has stated that Regional offices should continue to review and comment on source determinations to assure consistency with regulations and historical practice. In addition, EPA Regions may be responsible for source determinations in areas where they are responsible for permitting. Aggregation of emissions from the oil and gas industry generally, including production facilities, has received recent attention. For example, in 2007, EPA provided guidance on how to evaluate aggregation in source determinations for the oil and gas industry. EPA later withdrew this industry-specific guidance and emphasized that source determinations in this industry were governed by the existing regulations, the existing interpretations of them, and the need for case-specific application of the regulations in each permitting action. (See Summit Petroleum Corp. v. EPA, Nos. 09-4348, 10-4572, slip op. (6th Cir. Aug. 7, 2012).) In at least one permitting matter, the question arose whether to aggregate oil and gas activities with a compressor station in determining the source; a citizen group appealed this decision to the Environmental Appeals Board. Both citizen group challenges were ultimately dismissed after the parties engaged in a dispute resolution process.
EPA entered settlements with the citizen group and agreed to undertake a pilot program for the purpose of studying, improving, and streamlining source determinations in the oil and gas industry in new or renewal Title V permits for which EPA Region 8 is the initial Title V permitting authority. In sum, several recent disputes over aggregation of oil and gas facilities involve whether well emissions should be aggregated; however, whether well emissions are aggregated for Title V or PSD purposes generally would not affect other federal requirements for emission controls at well sites.

Greenhouse Gas Reporting Rule

In 2009, EPA promulgated the Greenhouse Gas Reporting Rule, providing a framework for the greenhouse gas reporting program and establishing requirements for some source categories. According to EPA, the goals of the program are to obtain data of sufficient quality that they can be used to support a range of future climate change policies and regulations; to balance the rule's coverage to maximize the amount of emissions reported while minimizing reporting from small emitters; and to create reporting requirements that are consistent with existing programs by using existing estimation and reporting methodologies to reduce reporting burden, where feasible. EPA subsequently issued and amended a rule to implement the program for the category of Petroleum and Natural Gas Systems, including oil and gas wells. According to EPA, oil and gas well sites may contain sources of greenhouse gas emissions including (1) combustion sources, such as engines used on-site that typically burn natural gas or diesel fuel, and (2) process sources, such as equipment leaks and vented emissions. The process sources include pneumatic devices, dehydrators, and compressors. EPA has identified the onshore production subcategory as the largest segment for equipment leaks and vented and flared emissions in the petroleum and natural gas system source category.
The rule requires petroleum and natural gas facilities, including oil and gas well sites, that emit 25,000 metric tons or more of carbon dioxide equivalent per year to report certain data to EPA. Specifically, oil and gas production facilities are to report annual emissions of carbon dioxide, methane, and nitrous oxide from equipment leaks and venting, gas flaring, and stationary and portable combustion. Reporting is to begin in September 2012, for calendar year 2011. (76 Fed. Reg. 73,886, 73,889, 73,899 (Nov. 29, 2011) (amending 40 C.F.R. § 98.3).) For purposes of this rule, onshore petroleum and natural gas production is defined to include all equipment on a single well pad or associated with a single well pad (including but not limited to compressors, generators, dehydrators, storage vessels, and portable non-self-propelled equipment, which includes well drilling and completion equipment, workover equipment, gravity separation equipment, auxiliary non-transportation-related equipment, and leased, rented, or contracted equipment or storage facilities), used in the production, extraction, recovery, lifting, stabilization, separation, or treating of petroleum and/or natural gas (including condensate). Moreover, the rule defines an onshore oil and gas production facility as including all oil or gas equipment on or associated with a well pad and carbon dioxide enhanced oil recovery operations that are under common ownership or control and that are located in a single hydrocarbon basin; thus, for example, where multiple wells are owned or operated by the same person or entity in a single basin, the owner or operator is to report well data collectively for each hydrocarbon basin. EPA estimated that this facility definition for onshore petroleum and natural gas production will result in 85 percent coverage of this industry segment's greenhouse gas emissions, and EPA documents estimate that emissions from approximately 467,000 onshore wells are covered under the rule.
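Applicability of the reporting rule turns on a facility's total carbon dioxide-equivalent (CO2e) emissions against the 25,000 metric ton threshold. The following sketch assumes the global warming potential values EPA used in the original Table A-1 of 40 C.F.R. part 98 (1 for carbon dioxide, 21 for methane, 310 for nitrous oxide); EPA has since revised these values, so treat the numbers as illustrative.

```python
# Screening a facility's basin-wide totals against the 25,000 metric ton
# CO2-equivalent reporting threshold described in the text.
GWP = {"co2": 1, "ch4": 21, "n2o": 310}  # assumed original Table A-1 values
REPORTING_THRESHOLD_MT = 25_000          # metric tons CO2e per year

def co2e(co2_mt: float, ch4_mt: float, n2o_mt: float) -> float:
    """Total emissions in metric tons of CO2-equivalent."""
    return co2_mt * GWP["co2"] + ch4_mt * GWP["ch4"] + n2o_mt * GWP["n2o"]

def must_report(co2_mt: float, ch4_mt: float, n2o_mt: float) -> bool:
    return co2e(co2_mt, ch4_mt, n2o_mt) >= REPORTING_THRESHOLD_MT

# Hypothetical basin-wide totals for one operator (metric tons per year):
print(must_report(co2_mt=5_000, ch4_mt=1_000, n2o_mt=1))  # 26,310 CO2e: True
print(must_report(co2_mt=5_000, ch4_mt=500, n2o_mt=1))    # 15,810 CO2e: False
```

Because methane carries a much higher warming potential than carbon dioxide, relatively modest methane leakage can dominate a production facility's CO2e total, which is consistent with EPA's identification of onshore production as the largest segment for equipment leaks and vented emissions.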
Accidental Releases

Section 112(r) of CAA establishes the chemical accidental release prevention program applicable to specifically listed "regulated substances," as well as other extremely hazardous substances. This provision, among other things, required EPA to publish regulations and guidance for chemical accident prevention at facilities using substances that pose the greatest risk of harm from accidental releases; the resulting regulatory program is known as the Risk Management Program. In conjunction with the program, EPA was required to promulgate a list of at least 100 substances which, in the case of an accidental release, are known to cause or may reasonably be anticipated to cause death, injury, or serious adverse effects to human health or the environment, and to periodically review the list. Among others, hydrogen sulfide is included on the list of regulated substances. Section 112(r) also established the Chemical Safety Board and imposed a general duty on owners and operators of facilities to take steps to prevent accidental releases of the listed and other extremely hazardous substances, among other things.

Accidental Release Prevention (Risk Management Program)

Whether and the extent to which a facility is subject to the Risk Management Program requirements depends on the regulated substances present and their quantities, the processes, and the presence of receptors. Generally, the regulation requires, for covered processes, a three-part program including (1) a hazard assessment; (2) a prevention program that includes safety procedures and maintenance, monitoring, and employee training measures; and (3) an emergency response program. (63 Fed. Reg. 640 (Jan. 6, 1998); 65 Fed. Reg. 13,243, 13,244 (Mar. 13, 2000).)
The Risk Management Program regulations exempt from a facility's threshold determination: naturally occurring hydrocarbon mixtures prior to entry into a processing plant or refinery, which include any combination of the following: condensate, crude oil, field gas, and produced water (defined as water extracted from the earth from an oil or natural gas production well, or that is separated from oil or natural gas after extraction); regulated substances in gasoline, when in distribution or related storage for use as fuel for internal combustion engines; and a flammable substance when the substance is used as a fuel. Regarding the exemption of naturally occurring hydrocarbon mixtures prior to entry into a processing plant or refinery, EPA explained at the time that the agency believed they do not warrant regulation, noting that the general duty clause would apply when site-specific factors make an unlisted chemical extremely hazardous. In addition, EPA stated that, for naturally occurring hydrocarbons and for regulated substances in gasoline, a key consideration was EPA's original intent to exempt flammable mixtures that do not meet a preexisting standard: the National Fire Protection Association flammability hazard rating of 4. EPA has also explained that this rating reflects the potential to result in vapor cloud explosions and boiling liquid expanding vapor explosions, which it found pose the greatest potential hazard from flammable substances to the public and environment. (40 C.F.R. § 68.126 (2012).) The fuel-use exemption was added to the Risk Management Program rule later. In this context, the Chemical Safety Information, Site Security and Fuels Regulatory Relief Act prohibited EPA from listing flammable substances used as fuel solely because of their explosive potential. EPA then revised the regulation, adding the exemption to comply with the act.
The regulated chemicals present at oil and gas well sites include components of natural gas (such as butane, propane, methane, and ethane), but these are exempt from the threshold determination of a facility subject to the Risk Management Program when present in "naturally occurring hydrocarbon mixtures." If an oil or gas well site nonetheless uses or stores some of the regulated chemicals not encompassed by the exemptions, it could trigger the risk management requirements.

General Duty to Prevent Accidental Releases

Under section 112(r), the owners and operators of stationary sources producing, processing, handling, or storing such substances have a general duty "…to identify hazards which may result from such releases using appropriate hazard assessment techniques, to design and maintain a safe facility taking such steps as are necessary to prevent releases, and to minimize the consequences of accidental releases which do occur." Known as the "general duty clause," the provision is analogous to a negligence standard, according to EPA officials. In other words, if there is a known risk and a way to mitigate it, then the operator should conduct risk mitigation. As explained in an EPA report, "responsibilities include the conduct of appropriate hazard assessments and the design, operations, and maintenance of a safe facility," as well as release mitigation and community protection. EPA officials noted that industry standards (such as those from the American National Standards Institute or the American Petroleum Institute) and fire codes are used in determining the duty of care. EPA has published Chemical Safety Alerts to advise the regulated community of its general duty clause obligations. The general duty clause applies to sources handling or storing substances listed by EPA in the Risk Management Program regulations or any other extremely hazardous substance, without a threshold.
EPA headquarters officials said that, conceivably, the general duty clause would apply to every single well but stated that it is within EPA Regions' discretion where and when to use the general duty clause to conduct inspections. In some Regions, EPA has conducted inspections of gas well sites to enforce the general duty clause, including identifying noncompliance with certain safety standards. EPA Regional officials said that they use infrared video cameras during inspections to identify leaks of methane from storage tanks or other equipment at well sites. For example, EPA Region 6 officials said they have conducted 45 inspections at well sites since July 2010 and issued 10 administrative orders related to violations of the CAA general duty clause. EPA officials said that all well sites are required to comply with the general duty clause but that EPA prioritizes and selects sites for inspections based on risk.

Imminent and Substantial Endangerment Authority Respecting Accidental Releases

Section 112(r) also provides EPA with the authority to issue orders as may be necessary to protect public health when the EPA Administrator determines that there may be an imminent and substantial endangerment to human health or welfare or the environment because of an actual or threatened accidental release of a regulated substance.

Chemical Safety Board

The Chemical Safety Board, established by section 112(r), is charged with investigating and publicly reporting on accidental releases resulting in a fatality, serious injury, or substantial property damage. The board is authorized, among other things, to make recommendations to EPA. In September 2011, the Chemical Safety Board released a report investigating three incidents involving fatalities and injuries at oil and gas storage tanks located at well sites and surveyed an additional 23 such incidents that occurred between 1983 and 2010.
The report found that these accidents occurred when the victims, all young adults, gathered at rural unmanned oil and gas storage sites lacking fencing and warning signs. The report concluded that such sites pose a public safety risk. The report also reviewed federal, state, and local regulations, inherently safer designs of tanks, and industry standards. Noting that exploration and production storage tanks are exempt from the security requirements of CWA and from the risk management requirements of CAA, the Chemical Safety Board recommended that EPA encourage owners and operators to reduce these risks. Specifically, the Chemical Safety Board recommended that EPA "publish a safety alert directed to owners and operators of exploration and production facilities with flammable storage tanks, advising them of their general duty clause responsibilities for accident prevention under CAA." The board's recommendation letter requests that EPA provide within 180 days a response stating how EPA will address the recommendation. On June 27, 2012, EPA responded to the Chemical Safety Board, stating that EPA agrees to develop and publish a safety alert and anticipates that the agency will be able to publish a final safety alert by June 2013. The Chemical Safety Board also made related recommendations to several states and industry associations.

EPA Enforcement Authorities

Even where a state implements key CAA provisions, EPA retains oversight and enforcement authority. For example, EPA may initiate an enforcement action via an administrative order or a civil action for a violation of any requirement or prohibition of an applicable SIP, permit, or certain other requirements or prohibitions, after notification to the state and the party. CAA also gives EPA authorities regarding access to records and the ability to require the provision of information from any person who owns or operates any emission source, among others.
Imminent and Substantial Endangerment Authority

Where EPA receives evidence that a source or a combination of sources presents an imminent and substantial endangerment to public health or welfare, or the environment, EPA may bring suit or, where prompt action is needed, issue orders to stop the emission of air pollutants or take other necessary action. EPA is to attempt to confirm the accuracy of the information before taking such actions.

Appendix V: Key Requirements and Authorities under the Resource Conservation and Recovery Act

In 1976, Congress passed the Resource Conservation and Recovery Act (RCRA), generally establishing EPA authority to regulate the generation, transportation, treatment, storage, and disposal of hazardous waste, and also including some provisions respecting solid waste. As to solid waste, RCRA provided a more limited federal role and included incentives for states to implement programs to manage nonhazardous solid waste disposal, a prohibition on open dumping of wastes, and a requirement for EPA to promulgate technical criteria for classifying solid waste disposal facilities, among other things. (RCRA Subtitle D, 42 U.S.C. ch. 82, subch. IV (§§ 6941-6949a) (2012).)

Subtitle C – Hazardous Waste

RCRA defines hazardous waste as "a solid waste, or combination of solid wastes, which because of its quantity, concentration, or physical, chemical, or infectious characteristics may (A) cause, or significantly contribute to an increase in mortality or an increase in serious irreversible, or incapacitating reversible, illness; or (B) pose a substantial present or potential hazard to human health or the environment when improperly treated, stored, transported, or disposed of, or otherwise managed." Under EPA's implementing regulations, wastes are regulated as hazardous if they are specifically listed or if they exhibit a hazardous characteristic, such as ignitability, corrosivity, or reactivity.
The generation, transport, and disposal of wastes meeting the RCRA regulatory definition of hazardous waste are generally subject to RCRA Subtitle C requirements, such as reporting, using a manifest, and disposing of the waste in approved ways, such as in a hazardous waste landfill.

Exemption of Certain Oil and Gas Production Wastes from Regulation as Hazardous Waste under RCRA Subtitle C

Notwithstanding the provisions for identifying hazardous wastes, the Solid Waste Disposal Act Amendments of 1980 created a separate process for certain oil and gas exploration and production wastes. Under the statute, these wastes would not be subject to regulation as hazardous waste under RCRA Subtitle C unless specific actions were taken. The amendments required EPA to conduct and publish "a detailed and comprehensive study…on the adverse effects, if any, of drilling fluids, produced waters, and other wastes associated with the exploration, development, or production of crude oil or natural gas or geothermal energy on human health and the environment." The study report was to "include appropriate findings and recommendations for Federal and non-Federal actions concerning such effects." The statute further provided that "[t]he Administrator shall, after public hearings and opportunity for comment, determine either to promulgate regulations under this subchapter for drilling fluids, produced waters, and other wastes associated with the exploration, development, or production of crude oil or natural gas or geothermal energy or that such regulations are unwarranted. The Administrator shall transmit his decision, along with any regulations, if necessary, to both Houses of Congress. Such regulations shall take effect only when authorized by Act of Congress."
In considering the first factor, EPA found that a wide variety of management practices are utilized for these wastes and that many alternatives to these current practices are not feasible or applicable at individual sites. As to the second factor, EPA found that existing State and Federal regulations are generally adequate to control the management of oil and gas wastes; certain regulatory gaps do exist, however, and enforcement of existing regulations in some States is inadequate. EPA's review of the third factor found that imposition of Subtitle C regulations for all oil and gas wastes could subject billions of barrels of waste to regulation under Subtitle C as hazardous wastes; would cause a severe economic impact on the industry and on oil and gas production in the U.S.; could cause severe short-term strains on the capacity of Subtitle C Treatment, Storage, and Disposal Facilities; and could cause a significant increase in the Subtitle C permitting burden for State and Federal hazardous waste programs. (Id. at 25,446.) EPA stated that regulation of these wastes as hazardous waste under Subtitle C posed significant problems, including the lack of flexibility in the statute to take into account the varying geological, climatological, geographic, and other differences characteristic of oil and gas production sites, and to consider cost in applying the requirements, such that EPA would be unable to craft a program that avoids severe economic impacts and fills only the gaps in existing programs. In lieu of regulating these wastes as hazardous waste under Subtitle C, EPA announced "a three-pronged approach toward filling the gaps in existing State and Federal regulatory programs," comprised of (1) improving existing programs under RCRA, the Safe Drinking Water Act, and the Clean Water Act; (2) working with states to improve their programs; and (3) working with Congress on any additional legislation that might be needed. EPA further stated that it planned to revise its
existing standards under Subtitle D of RCRA, "tailoring these standards to address the special problems posed by oil, gas, and geothermal wastes and filling the regulatory gaps," and "in developing these tailored Subtitle D standards for crude oil and natural gas wastes, EPA will focus on gaps in existing State and Federal regulations and develop appropriate standards that are protective of human health and the environment. Gaps in existing programs include adequate controls specific to associated wastes and certain management practices and facilities for large-volume wastes, including roadspreading, landspreading, and impoundments." EPA has also worked with states on their programs, including through the State Review of Oil and Natural Gas Environmental Regulations (STRONGER) program. (EPA, Exploration & Production Waste and RCRA, presented at ASTSWMO Annual Meeting (Oct. 26, 2011).) EPA also worked with industry representatives to develop best management practices for exploration and production wastes, but these efforts did not culminate in any document or guidance. On September 8, 2010, the Natural Resources Defense Council submitted a petition requesting regulation of wastes associated with the exploration, development, or production of oil, natural gas, and geothermal energy. The petition argues that EPA should revisit the determination not to regulate these wastes because, among other things, the underlying assumptions (concerning the availability of alternative disposal practices, the adequacy of state regulations, and economic harm to the oil and gas industry) are no longer valid. The petition requests that EPA promulgate regulations applying to wastes from the exploration, development, and production of oil and natural gas under Subtitle C of RCRA. (Letter, NRDC to EPA, Petition for Rulemaking Pursuant to Section 6974(a) of the Resource Conservation and Recovery Act Concerning the Regulation of Wastes Associated with the Exploration, Development, or Production of Crude Oil or Natural Gas or Geothermal Energy (Sept. 8, 2010); see also RCRA § 7004(a), 42 U.S.C. § 6974(a) (2012).)
EPA intends to issue a proposed response to the petition. The proposed response will be printed in the Federal Register, and EPA will establish an electronic docket and provide an opportunity for public comment. Although EPA has not yet sought public comment on the petition, the agency has received several unsolicited comment letters, including from two industry associations, the STRONGER program, and two states. If EPA revises the regulatory determination for some or all exploration and production wastes, the agency would conduct a full regulatory process to propose the regulations. Under the key RCRA provision, the regulations would not become effective until authorized by congressional action. Should the exemption be lifted, not all exploration and production wastes would necessarily be hazardous. Rather, whether particular exploration and production wastes would be hazardous and subject to regulation would depend on whether those particular wastes meet the regulatory definition of hazardous waste (i.e., are a listed waste or exhibit a characteristic of hazardous waste).

Oil and Gas Exploration and Production Wastes That Are Not Exempt from Regulation

While well site wastes originating within the well or generated by field operations such as water separation, demulsifying, degassing, and storage are exempt, RCRA Subtitle C regulations generally apply to other wastes that may be generated at oil and gas wells, such as discarded unused products, solvents used to clean surface machinery, and others, if they are actually hazardous. In 2002, EPA published a guide titled "Exemption of Oil and Gas Exploration and Production Wastes from Federal Hazardous Waste Regulations" that identifies, among other things, a list of nonexempt wastes.
The guide identified nonexempt wastes, including the following wastes that may be generated by activities at oil and gas well sites:

- unused fracturing fluids or acids;
- oil and gas service company wastes, such as empty drums, drum rinsate, sandblast media, painting wastes, spent solvents, spilled chemicals, and waste acids;
- vacuum truck and drum rinsate from trucks and drums transporting or containing nonexempt waste;
- used equipment lubricating oils;
- waste compressor oil, filters, and blowdown;
- used hydraulic fluids;
- caustic or acid cleaners;
- radioactive tracer wastes; and
- drums, insulation, and miscellaneous solids.

According to EPA's guidance document, this list represents some types of wastes that, if hazardous, are not exempt from Subtitle C regulation; however, these wastes may or may not be hazardous in a particular situation. These wastes are hazardous if they are a listed hazardous waste or exhibit a hazardous characteristic, such as ignitability or toxicity. If hazardous, then the facility is subject to waste management requirements that vary depending upon the amount of hazardous waste generated per calendar month. RCRA regulations establish several categories for facilities generating hazardous waste, with differing reporting obligations. Among these, the lowest level category is conditionally exempt small quantity generators, composed of facilities generating no more than 100 kilograms (220 pounds) per month of hazardous waste. These facilities are subject to limits on the amount of hazardous waste they accumulate, as well as general requirements to determine which wastes are hazardous and to ensure that any hazardous wastes sent for off-site disposal are sent to state-approved facilities; RCRA-permitted or interim status facilities; or, for certain wastes, universal waste facilities or facilities beneficially using, recycling, or reclaiming the waste. Generally, conditionally exempt small quantity generators would not be required to have an EPA ID number.
Small quantity generators are those facilities generating more than 100 kilograms (220 pounds) but less than 1,000 kilograms (2,200 pounds) per month of hazardous waste. These facilities are subject to limits on the amount of hazardous waste they accumulate, as well as storage requirements and general requirements to determine which wastes are hazardous and to ensure that any hazardous wastes sent for off-site disposal are sent to RCRA-permitted or interim status facilities. In addition, small quantity generators are required to have an EPA ID number and to use manifests, by which hazardous waste may be tracked for transport and inspection purposes. (Id. at § 262.12(a) (a generator, other than a conditionally exempt small quantity generator, is essentially required to obtain an identification number before storing the waste or offering it to a transporter).) Generally, EPA's (or the authorized state's) involvement at generator-only sites includes receiving notifications and issuing identification numbers, receiving biennial reports, conducting compliance assurance activities such as inspections, and investigating alleged problems. EPA has not undertaken a specific assessment of the extent to which oil and gas well sites are generating small amounts of regulated hazardous wastes and consequently are regulated as small quantity generators or conditionally exempt small quantity generators. EPA officials were unaware of the extent to which oil and gas well sites generate nonexempt hazardous waste (e.g., hazardous wastes other than exempt exploration and production wastes) in quantities significant enough to require an EPA ID number. EPA Region 8 officials were unaware of any instances in which a well site requested an EPA ID number. A challenge in understanding the extent to which oil and gas well sites are regulated stems in part from the use of North American Industry Classification System (NAICS) codes.
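The generator categories described above turn entirely on the quantity of hazardous waste generated per calendar month, and can be sketched as a simple classifier. The cutoffs follow the text (no more than 100 kg for conditionally exempt small quantity generators; more than 100 but less than 1,000 kg for small quantity generators); the large quantity generator label for 1,000 kg or more is implied by the category structure and included for completeness.

```python
# Classify a hazardous waste generator by monthly quantity (kilograms
# per calendar month), following the categories described in the text.
def generator_category(kg_per_month: float) -> str:
    if kg_per_month <= 100:
        # No more than 100 kg (220 lb): lowest-level category.
        return "conditionally exempt small quantity generator"
    elif kg_per_month < 1000:
        # More than 100 kg but less than 1,000 kg (2,200 lb).
        return "small quantity generator"
    # 1,000 kg or more per month.
    return "large quantity generator"

print(generator_category(80))    # conditionally exempt small quantity generator
print(generator_category(500))   # small quantity generator
print(generator_category(1200))  # large quantity generator
```

As the text notes, the category determines the obligations that attach: a well site generating 80 kg of nonexempt hazardous waste per month would generally not need an EPA ID number, while one generating 500 kg would need an ID number and manifests.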
While there is a code at the six-digit level that generally corresponds with oil and gas production, it appears that, for some facilities with this code, the facility entry includes associated downstream facilities, such as a compressor station or gas processing plant, making it impossible to use RCRAInfo (a publicly available EPA database that contains information on RCRA generators) alone to identify well sites triggering the particular requirement of interest. For example, this database shows that some facilities with the oil and gas production NAICS code are listed as conditionally exempt small quantity generators. GAO's review of a small sample of these listings suggests some may include downstream facilities, while others appear to be well sites.

Subtitle D – Solid Waste

Oil and gas exploration and production wastes may be RCRA statutory solid wastes even if they are exempt from hazardous waste requirements or are nonhazardous wastes. As compared with hazardous waste, RCRA provided EPA a different and largely nonregulatory role for solid waste. EPA's role in solid waste management is focused on assisting states in developing solid waste management programs. For example, EPA developed guidelines for certain aspects of solid waste management. A key part of EPA's limited regulatory role for solid waste was to establish criteria defining which solid waste disposal facilities and practices are "sanitary landfills" and which constitute "open dumps," as RCRA prohibited open dumping of solid waste. Consistent with the scheme established by RCRA Subtitle D, states have primary responsibility for managing disposal of solid waste, including that resulting from oil and gas exploration and production. State solid waste programs regulate treatment (which may include incineration) and land disposal of these wastes, among other things.
In addition, states may have specific programs to address oil and gas production wastes, and some states put such wastes in a special category of solid waste, such as industrial wastes, with more stringent requirements than the federal minimum requirements. (See report and app. IX for discussion of selected aspects of state waste management.) Enforcement EPA has certain enforcement authorities to address hazardous wastes. RCRA sections 3007, 3008, and 3013 collectively provide EPA with authorities to monitor compliance, conduct investigations, and enforce Subtitle C (the hazardous waste subtitle) and its implementing regulations. Each of these key authorities depends, among other things, on the existence or presence of a hazardous waste in a given situation. EPA’s authority under sections 3007 and 3013 extends beyond waste that is regulated as hazardous under Subtitle C (e.g., wastes meeting the regulatory definition of hazardous waste), and includes waste that meets the statutory definition of hazardous waste in RCRA section 1004(5). For example, section 3008(a) authorizes EPA to issue administrative compliance orders “whenever on the basis of any information” the EPA Administrator determines that any person has violated or is in violation of any requirement of Subtitle C. These orders may require the person to come into compliance immediately or within a specific time frame and/or to pay a civil penalty for any past or current violation, and may include suspension or revocation of a facility’s RCRA permit. Alternatively, EPA, through the Department of Justice, may file a civil action in federal court for violations of RCRA and its implementing regulations and permits. EPA must give notice to the state, if it has an EPA-authorized hazardous waste program, prior to issuing an order or filing a civil judicial action.
Section 3007(a) gives EPA authority to inspect and copy records and to obtain samples from any person who generates, stores, treats, transports, disposes of, or otherwise handles or has handled hazardous wastes, and to enter sites where hazardous wastes are or have been generated, stored, treated, disposed of, or transported from. Section 3007 also establishes mandatory compliance inspections. EPA has interpreted its section 3007 authority, discussed above, to include the authority to access records and sites related to solid waste “that the Agency reasonably believes may pose a hazard when improperly managed.” EPA officials did not provide any examples of EPA using its section 3007 authority at oil or gas well sites. Section 3013 authorizes EPA to issue an order requiring monitoring, testing, analysis, and reporting if the EPA Administrator determines, upon receipt of any information, that the presence or release of any hazardous waste at a facility or site at which hazardous waste is, or has been, stored, treated, or disposed of may present a substantial hazard to human health or the environment. Furthermore, in certain circumstances, EPA may use its authority under section 3013 to conduct its own investigation into the nature and extent of a potential hazard. EPA officials did not provide any examples of EPA using these hazardous waste enforcement provisions for incidents arising at oil or gas well sites. EPA has fewer enforcement responsibilities and authorities for nonhazardous waste facilities under RCRA Subtitle D than it does for hazardous waste activities regulated under RCRA Subtitle C. In particular, state solid waste programs are based in state law and generally are not subject to enforcement or overfiling by EPA. RCRA’s prohibition on open dumping of solid and hazardous waste is enforceable by citizen suit.
Imminent and Substantial Endangerment Authority EPA has imminent and substantial endangerment authority to address both hazardous and solid wastes. Section 7003 authorizes EPA to issue administrative orders and to file suit in federal district court. Specifically, “upon receipt of evidence that the past or present handling, storage, treatment, transportation or disposal of any solid waste or hazardous waste may present an imminent and substantial endangerment to health or the environment,” EPA has authority to restrain any person who has contributed or who is contributing to such handling, storage, treatment, transportation or disposal, from such activity, to order them to take such other action as may be necessary, or both. Such orders can be issued to a person who contributed in the past or is currently contributing to the imminent and substantial endangerment to health or the environment. Section 7003 orders are enforceable; if a nonfederal recipient fails to comply, EPA can enforce the order, including seeking fines, by requesting that the Department of Justice file suit in federal court. EPA’s imminent and substantial endangerment authority is not limited to Subtitle C regulated hazardous wastes but also includes statutory solid wastes and hazardous wastes. EPA has interpreted the authority broadly, to allow a range of actions to be taken, including addressing the threat of endangerment. Nonetheless, EPA officials noted that a section 7003 action is distinct from, for example, the agency’s Subtitle C enforcement authorities because the objective of such an action is to abate the imminent and substantial endangerment, rather than to enforce specific RCRA requirements. Whether RCRA section 7003 authority is applicable to a given situation requires a fact-based determination that the statutory elements are established, including the existence of conditions that may present an imminent and substantial endangerment.
EPA has issued section 7003 orders at several facilities handling wastes from oil and gas well sites. For example, as previously discussed, EPA Region 8 participated in an effort with the FWS, states, and tribes, after the FWS expressed concerns about migratory birds landing on open pits that contained oil and water, which killed or harmed the birds. The effort involved aerial surveys to observe pits. Where apparent problems were identified, relevant federal or state agencies were notified and were to give oil and gas operators an opportunity to correct problems. Ground inspections were then conducted where deemed warranted and, if problematic conditions were found, further follow-up action was taken by EPA or the relevant state or other federal agency. As a result of this effort, EPA issued nine orders pursuant to RCRA section 7003 authority. According to the report, the orders required operators “to remove oil from pits, install effective exclusionary devices, and/or clean up sites.” EPA Region 8 has also issued section 7003 orders to several commercial oilfield waste disposal facility operators in Wyoming, finding that each site endangered the environment, including by causing bird mortalities due to inadequate pit management. As another example, in 2005, EPA Region 6 entered into an agreement with an exploration company and property owners at a site in Oklahoma where the contents of a well drilling waste pit had been relocated onto residential property; the agreement required the waste to be removed, among other things. Appendix VI: Key Requirements and Authorities under the Comprehensive Environmental Response, Compensation, and Liability Act In 1980, Congress passed the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), often referred to as “Superfund,” to address the cleanup of releases of hazardous substances, pollutants, and contaminants nationwide and, in so doing, protect human health and the environment from their effects.
The enactment of CERCLA gave the federal government the authority to respond to actual and threatened releases of hazardous substances, pollutants, and contaminants that may endanger public health or welfare or the environment, as well as requiring reporting of hazardous substance releases above threshold quantities. CERCLA also established a liability scheme, whereby potentially responsible parties such as owners and operators may be liable for cleanup and other costs stemming from the release (or threatened release) of hazardous substances into the environment from a facility. CERCLA is primarily a remedial statute; it is preventive only to the extent that it authorizes responses to threatened releases of hazardous substances, pollutants, and contaminants, and that the liability scheme provides incentives for owners and operators to take care to avoid releases to the environment. Relevant Exclusions and Definitions Under a provision known as the petroleum exclusion, CERCLA’s provisions do not apply to releases to the environment that are purely petroleum, including crude oil and natural gas, and fractions of crude oil including the hazardous substances, such as benzene, that are indigenous in those petroleum substances. EPA can respond to releases of hazardous substances, however, even if there are colocated petroleum releases. In addition, CERCLA’s release reporting requirements and liability provisions do not apply to “federally permitted releases,” which, with respect to oil and gas production, include any injection of fluids or other materials authorized under applicable State law (i) for the purpose of stimulating or treating wells for the production of crude oil, natural gas, or water, (ii) for the purpose of secondary, tertiary, or other enhanced recovery of crude oil or natural gas, or (iii) which are brought to the surface in conjunction with the production of crude oil or natural gas and which are reinjected.
However, EPA has explained, “[t]he National Response Center must be notified in any situation involving the use of injection fluids or materials that are not authorized specifically by State law for purposes of the development of crude oil or natural gas supplies and resulting in a release of a hazardous substance” at or above the threshold reporting quantity. CERCLA Hazardous Substance Release Reporting Where there has been a release of a hazardous substance, CERCLA section 103 requires a person in charge of a facility to report such releases above reportable quantities to the National Response Center as soon as he or she has knowledge of the release. EPA regulations establish CERCLA hazardous substances and their reportable quantities. While releases of pure petroleum (e.g., petroleum in which hazardous substances have not increased such as by addition or processing) are excluded, releases of CERCLA hazardous substances that are commingled with petroleum are subject to the reporting requirement. Oil and gas well operators would be required to report any releases to the environment of other hazardous substances, for example, if a stored hazardous substance was accidentally spilled onto the ground, or if hazardous substances above the reportable quantity were injected but not authorized by state law. The National Response Center—managed by the U.S. Coast Guard—receives release reports and forwards them to EPA Regions. According to EPA, when a report is received, Regional staff will screen the report for such factors as what was spilled and in what quantity and whether the spill threatens surface waters, to determine if EPA needs to respond and, if appropriate, will obtain additional information on the event and/or send an on-scene coordinator to the site. EPA officials also noted they use the release reports to refer sites to program enforcement offices, such as the Clean Water Act’s SPCC program, for follow-up.
Although release reports are publicly available, the available search terms do not readily differentiate oil and gas well sites from other types of oil and gas facilities. EPA officials noted that there had been approximately 200 reports of oil spills from oil facilities in the last 5 years. EPA Region 5 officials stated that oil spills are more often related to pipelines, tank sites, or trucking accidents, with few occurring at well sites. Relevant EPA Authorities EPA established the Superfund program to carry out its responsibilities and authorities under CERCLA. Under the Superfund program, EPA implements its authorities to compel parties responsible for contaminating sites—via releases of hazardous substances—to clean them up, as well as to enter into agreements with such parties for them to conduct the cleanup. In addition, EPA can itself conduct response actions, which may include investigations and cleanup activities, and then seek reimbursement from the responsible parties. The Superfund cleanup process involves a series of steps during which specific activities—such as investigations and cleanups—take place or decisions are made. The CERCLA program has two basic types of cleanup: (1) cleanups under the removal process, which generally address short-term threats, and (2) cleanups under the remedial action process, which are generally longer-term cleanup actions. In determining whether to use removal or remedial authority to take a response action, EPA considers the time-sensitivity, complexity, comprehensiveness, and cost of the response action. Several EPA Superfund authorities are particularly relevant to oil and gas well operations, including the following: Investigations, monitoring, coordination. 
Under section 104(b), EPA generally may conduct investigation activities with appropriated program funds whenever a hazardous substance is released or there is a substantial threat of such a release, or there is reason to believe a release has occurred or is about to occur. These activities may include monitoring, surveys, testing, and other information gathering, as well as planning, legal, fiscal, economic, engineering, architectural, and other studies or investigations, as deemed appropriate. Information gathering and access. Under section 104(e), EPA has authority to obtain information, as well as authorities to enter property, conduct inspections, and take samples. Specifically, EPA may require a person to furnish information about the identification, nature, and quantity of materials that have been or are generated, treated, stored, or disposed of at a facility or transported thereto, or the nature or extent of a release or threatened release of a hazardous substance or pollutant or contaminant, or the ability of a person to pay for or to perform a cleanup, including related documents and records, among other things. Where there is a reasonable basis to believe there may be a release or threat of release of a hazardous substance or pollutant or contaminant, EPA is authorized to enter a facility or property where such release is or may be threatened, among other things, and may inspect and obtain samples. EPA may obtain access by agreement, warrant, or administrative order. If consent is not granted, EPA may issue administrative orders or, through the Department of Justice, file civil actions, to compel compliance with requests made under these provisions. Removals. Under section 104(a), EPA generally has authority to act whenever there has been a release or substantial threat of release into the environment of any hazardous substance. EPA generally may conduct removal actions, among other things.
Removal actions are broadly defined and include actions to monitor, assess, and evaluate the release; the disposal of removed material; and other actions to prevent, minimize, or mitigate damage to the public health or welfare or to the environment such as provision of alternative drinking water supplies. Imminent and substantial endangerment authority related to releases of a pollutant or contaminant. Under section 104(a), EPA has authority to act whenever a release or substantial threat of release into the environment of any pollutant or contaminant may present an imminent and substantial danger to the public health or welfare. This provides EPA with authority over releases of substances that are not CERCLA hazardous substances but that may harm public health or welfare; however, as noted above, releases that are purely petroleum are excluded. Under this authority, EPA may conduct removals, provide for remedial action, or take any other response measure consistent with the National Contingency Plan. Authorities to pursue potentially responsible parties. In addition, under section 106(a), EPA, through the Department of Justice, can pursue injunctive relief in court, where an actual or threatened release of a hazardous substance from a facility may pose an imminent and substantial endangerment to the public health or welfare or the environment. EPA also can issue an administrative order requiring a potentially responsible party to take response actions as may be necessary to protect public health and welfare and the environment. CERCLA also provides authorities for EPA to pursue cleanup and related costs from potentially responsible parties, and to enter settlements, as well as providing for liability of potentially responsible parties for damages to federal, state, and tribal natural resources.
EPA has utilized its CERCLA authorities at several locations where it has been alleged that hazardous substance releases from oil and gas well sites have contaminated land or groundwater. In an example at a conventional oil well, in the 1990s, EPA, as represented by the Department of Justice, reached an agreement in which an oil exploration and production company pled guilty to a criminal felony count related to CERCLA violations when operators disposed of waste oil and hazardous substances by injecting them down the annuli (the space between the well casing and the surrounding rock) of the oil wells, over a 2-year period. According to the Department of Justice, the company agreed to spend $22 million to resolve the criminal case and related civil claims, which included claims brought under RCRA, SDWA, and EPCRA, as well as CERCLA. EPA has also used its CERCLA authorities in connection with a removal action at the Dimock residential groundwater site in Pennsylvania (see Richard M. Fetzer, On-Scene Coordinator EPA, Action Memorandum to Dennis Carney, Associate Division Director, Hazardous Site Cleanup Division, EPA, re: Request for Funding for a Removal Action at the Dimock Residential Groundwater Site, Jan. 19, 2012) and in groundwater contamination investigations at Pavillion, Wyoming. EPA referenced CERCLA section 104(e) authority in requesting information from operators of wells proximate to the Pavillion site. EPA has used CERCLA section 104(e) in conjunction with other authorities in several “multimedia” information requests, where EPA seeks information under multiple statutes and for multiple media—air, land, water—that may be affected. In 2011, for example, EPA used CERCLA and other authorities to request information concerning a blowout at a Marcellus shale natural gas well in Bradford, Pennsylvania. In this instance, a well blowout during hydraulic fracturing resulted in the release of flowback fluids to a tributary of the Susquehanna River, as well as combustible gases to the atmosphere.
Appendix VII: Key Requirements and Authorities under the Emergency Planning and Community Right-to-Know Act The Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA) provides a mechanism to help communities plan for emergencies involving extremely hazardous substances, and to provide individuals and communities with access to information regarding the storage and releases of certain toxic chemicals, extremely hazardous substances, and hazardous chemicals in their communities. Generally Applicable Chemical Information, Inventory, and Release Reporting EPCRA imposes a set of generally applicable requirements to report information on the uses, inventories, and releases into the environment of hazardous and toxic chemicals above threshold quantities. Regarding releases, EPCRA section 304 requires owners or operators of facilities where a chemical is produced, used, or stored to notify state and local emergency planning authorities of certain releases. The releases for which EPCRA requires reporting partially overlap with those for which the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) requires reporting. Where there is overlap, EPCRA’s procedures ensure state and local authorities receive this information, and CERCLA’s procedures ensure federal authorities receive notification. Regarding reporting of chemical information and inventories, EPCRA sections 311 and 312 requirements apply only to those facilities storing or using (1) more than 500 pounds or the threshold planning quantity, whichever is lower, of extremely hazardous substances, or (2) more than 10,000 pounds of other hazardous chemicals. These facilities are required to provide chemical information (e.g., Material Safety Data Sheet or other detailed list) and submit an annual inventory report to state and local emergency planning authorities and to the local fire department with jurisdiction over the facilities. 
Requirements under EPCRA That May Be Triggered at Well Sites Well sites are subject to EPCRA sections 304, 311, and 312, among others, and may be subject to reporting requirements to the extent that the chemicals used, stored, or produced at well sites meet the respective reporting thresholds. Under EPCRA section 304, any facility, such as a well site, that produces, uses, or stores any hazardous chemical and has a release above the reportable quantity of a CERCLA hazardous substance or an extremely hazardous substance, must provide notification to state and local emergency planning authorities, as well as the National Response Center. Under EPCRA sections 311 and 312, any facility, such as a well site, at which an extremely hazardous chemical or any other hazardous chemical is present at the relevant threshold quantity, must meet inventory reporting requirements. For extremely hazardous chemicals, the threshold is 500 pounds or its threshold planning quantity, whichever is less. For all other hazardous chemicals, the reporting threshold is 10,000 pounds. For example, if the aggregate amount of hydrofluoric acid, an extremely hazardous chemical with a threshold planning quantity of 100 pounds, at a well site exceeds that threshold, then the facility must report under sections 311 and 312. As another example, if a well site stores or uses more than 10,000 pounds of drip gas or natural gas condensate at any one time, then the facility must report under sections 311 and 312. The extent to which these requirements are triggered at oil and gas well sites depends on the presence and quantities of listed chemicals at such sites, among other things. We did not locate any publicly available data on the quantity of chemicals stored at actual or typical well sites, but FracFocus provides self-reported data on the types of chemicals used in hydraulic fracturing, meaning that these chemicals are present and used at well sites.
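The section 311/312 threshold logic described above can be sketched as a short decision rule. This is an illustrative sketch only, not an EPA compliance tool; the function name is hypothetical, and the example values are the hydrofluoric acid and drip gas cases from the text:

```python
def must_report_311_312(is_ehs, on_site_lbs, tpq_lbs=None):
    """Return True if EPCRA section 311/312 inventory reporting is
    triggered. For an extremely hazardous substance (EHS), the
    threshold is the lesser of 500 lbs and the chemical's threshold
    planning quantity (TPQ); for any other hazardous chemical, it is
    10,000 lbs."""
    if is_ehs:
        threshold = min(500, tpq_lbs) if tpq_lbs is not None else 500
    else:
        threshold = 10_000
    return on_site_lbs > threshold

# Hydrofluoric acid: an EHS with a 100-lb threshold planning quantity
print(must_report_311_312(True, 150, tpq_lbs=100))   # True
# An ordinary hazardous chemical below the 10,000-lb threshold
print(must_report_311_312(False, 8_000))             # False
```

Because the EHS threshold is the lesser of 500 pounds and the chemical-specific TPQ, a chemical like hydrofluoric acid can trigger reporting at quantities well below the general 500-pound figure.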
According to data in FracFocus, some hydraulic fracturing operations may use various hazardous chemicals, including some that are also CERCLA hazardous substances, such as hydrochloric acid, formaldehyde, formic acid, acetaldehyde, ethylene glycol, methanol, acetic acid, sodium hydroxide, potassium hydroxide, acrylamide, and naphthalene; of these, one is also considered “extremely hazardous.” According to EPA, its Regional offices have several cases in development where the facility triggered the reporting requirements under sections 311 and 312 during all phases of operation, including drilling, hydraulic fracturing, and production. EPA stated that, based on the Regions’ experience, section 311 and 312 requirements could be triggered at every well site. EPA provided an example of section 312 information for a well site, which according to EPA officials, indicates that some hazardous chemicals may be present at the particular well site in quantities that would trigger section 311 and 312 requirements. The information provided by EPA suggests that the types of chemicals with maximum on-site quantities of 10,000 to 99,999 pounds are the following: cement and associated additives; drilling mud and associated additives; lubricants, drilling mud additives; and alkalinity and pH control material. The information provided by EPA also suggests that the types of chemicals with maximum on-site quantities of 100,000 to 999,999 pounds are the following: weight materials and fuels. Toxic Release Inventory EPCRA also requires some facilities in listed industries to report to EPA their releases of listed toxic chemicals to the environment; at present, these requirements do not apply to oil and gas well operations.
Section 313 of EPCRA generally requires certain facilities that manufacture, process, or otherwise use any of more than 600 listed individual chemicals and chemical categories to report annually to EPA and their respective state for those chemicals used above threshold quantities. Facilities need to report the amounts that they released to the environment and whether they were released into the air, water, or soil. Id. at (i). For this purpose, full-time employee is defined as 2,000 hours per year of full-time equivalent employment; a facility would calculate the number of full-time employees by totaling the hours worked during the calendar year by all employees, including contract employees, and dividing that total by 2,000 hours. 40 C.F.R. § 372.3 (2012). The list of covered industries, initially specified by Standard Industrial Classification codes, was later restated using North American Industry Classification System (NAICS) codes, to be subsequently updated as needed to reflect changes to the NAICS codes. EPCRA section 313(b)(1)(B) provides EPA with authority to add or delete industrial codes (EPCRA § 313(b)(1)(B), 42 U.S.C. § 11023(b)(1)(B) (2012)); EPA can also add individual facilities (EPCRA § 313(b)(2), 42 U.S.C. § 11023(b)(2) (2012)). EPA issued its initial regulations in 1988. In the initial regulations, EPA discussed its approach to evaluating additional industrial codes under its discretionary authority but did not add any at that time. Oil and gas extraction industries were not included on the statutory list of Standard Industrial Classification codes and hence were not subject to the rule. In a 1996 proposed rule to add industry groups, EPA stated that: One industry group, oil and gas extraction classified in [Standard Industrial Classification] code 13, is believed to conduct significant management activities that involve EPCRA section 313 chemicals. EPA is deferring action to add this industry group at this time because of questions regarding how particular facilities should be identified. This industry group is unique in that it may have related activities located over significantly large geographic areas.
While together these activities may involve the management of significant quantities of EPCRA section 313 chemicals in addition to requiring significant employee involvement, taken at the smallest unit (individual well), neither the employee nor the chemical thresholds are likely to be met. EPA will be addressing these issues in the future. The preamble of the final rule stated in part, “[A] number of commenters support EPA’s decision not to include oil and gas exploration and production in its proposal, and urge EPA not to propose adding this industry in the future. EPA considered the inclusion of this industry group prior to its proposal, and indicated in the proposal that one consideration for not including it was concern over how a ‘facility’ would be defined for purposes of reporting in EPCRA section 313 …This issue, in addition to other questions, led EPA to not include this industry group. EPA will continue its dialogue with the oil and gas exploration and production industry and other interested parties, and may consider action on this industry group in the future.” In fall 2011, EPA conducted a discussion forum on regulations.gov. The background information provided in the forum stated that EPA was considering a rule to add or expand coverage to the following industry sectors: Iron Ore Mining, Phosphate Mining, Solid Waste Combustors and Incinerators, Large Dry Cleaners, Petroleum Bulk Storage, and Steam Generation from Coal and/or Oil. EPA officials told us that, for the current possible rulemaking, the initial screening process for sectors to consider adding to the TRI included review of those sectors, such as oil and gas production, that were considered but ultimately not added in the 1997 rule. In addition, EPA officials said the initial screening process also included sectors covered by analogous registries of other countries. According to EPA, the oil and gas sector falls into both categories and was considered in the initial screening.
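The full-time employee calculation in the TRI regulations discussed above reduces to simple arithmetic. This is a minimal illustrative sketch, assuming per-employee annual hour totals are available; the function name and figures are hypothetical:

```python
def full_time_employees(hours_by_employee):
    """TRI full-time employee count: total hours worked during the
    calendar year by all employees (including contract employees),
    divided by 2,000 hours."""
    return sum(hours_by_employee) / 2000

# Nine workers at 2,000 hours each plus one at 1,000 hours
print(full_time_employees([2000] * 9 + [1000]))  # 9.5
```

Because the calculation aggregates hours rather than counting heads, a site staffed by many part-time or contract workers can fall below an employee-count threshold even when many individuals work there, which bears on EPA's observation that individual wells are unlikely to meet the employee threshold.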
As of July 2012, EPA officials stated that EPA does not anticipate adding oil and gas exploration and production sites as part of the possible rule currently under consideration to add industry sectors to the scope of TRI. EPA officials explained that the agency has not changed its assessment of the oil and gas sector as it pertains to TRI reporting since the 1996 proposed rule and stated that adding oil and gas well sites would likely provide a substantially incomplete picture of the chemical uses and releases at these sites, and would therefore be of limited utility in providing information to communities. EPA officials noted that Canada’s National Pollutant Release Inventory (NPRI) has data on Canadian oil and gas wells for some TRI chemicals. Specifically, EPA identified several TRI chemicals that were also reported to the Canadian NPRI by oil and gas facilities as being released, disposed of, and/or transferred in large quantities in reporting year 2010 in Canada, including ammonia, arsenic, cadmium, copper, hexavalent chromium, hydrogen sulfide, lead, manganese, mercury, phenanthrene, phosphorus, sulfuric acid aerosols, and zinc compounds. If oil and gas exploration and production were added to the industries required to report to the TRI, such facilities meeting relevant thresholds would have to report releases of hydrogen sulfide, which is among the chemicals of particular concern some have cited. In October 2011, EPA lifted its administrative stay of the EPCRA section 313 reporting requirements for hydrogen sulfide, which had been in effect since 1994, shortly after the chemical was added to the list of toxic chemicals. EPA conducted a technical evaluation of hydrogen sulfide and found no basis for continuing the administrative stay of the reporting requirements. The first reports under EPCRA section 313 for hydrogen sulfide will be due on July 1, 2013, for reporting year 2012.
Enforcement EPCRA provides EPA with various authorities to enforce the act’s requirements. For example, for violations of EPCRA section 311 or section 312 requirements, such as provision of annual inventory reports to state and local authorities, EPA may assess administrative penalties, or initiate court actions to assess civil penalties. In cases of violations of section 304 release reporting requirements, EPA may assess administrative penalties, among other things. Appendix VIII: Key Requirements and Authorities under the Toxic Substances Control Act To help protect human health and the environment, the Toxic Substances Control Act (TSCA) authorizes EPA to regulate the manufacture, processing, use, distribution in commerce, and disposal of chemical substances and mixtures. EPA has authorities by which it may assess and manage chemical risks, including (1) to collect information about chemical substances and mixtures; (2) upon making certain findings, to require companies to conduct testing on chemical substances and mixtures; and (3) upon making certain findings, to take action to protect adequately against unreasonable risks such as by either prohibiting or limiting manufacture, processing, or distribution in commerce of chemical substances or by placing restrictions on chemical uses. EPA maintains the TSCA Chemical Substance Inventory that currently lists over 84,000 chemicals that are or have been manufactured or processed in the United States; about 62,000 were already in commerce when EPA began reviewing chemicals in 1979. Generally, TSCA’s reporting requirements fall on the manufacturers (including importers), processors, and distributors of chemicals, rather than users of the chemicals. According to EPA, some of the chemicals on the TSCA Chemical Substance Inventory are used in oil and gas exploration and production. 
For example, in response to our request, EPA identified several chemicals on the FracFocus list of “chemicals used most often” which are on the TSCA inventory. These examples, which EPA chose as representative of different product function categories, are as follows: Hydrochloric acid – Acid; Peroxydisulfuric acid, ammonium salt – Breaker; Ethanaminium, 2-hydroxy-N,N,N-trimethyl-, chloride (1:1) – Clay; Methanol – Corrosion Inhibitor; and 2-Propenamide, homopolymer – Friction Reducer. As part of EPA’s Study on the Potential Impacts of Hydraulic Fracturing on Drinking Water Resources, EPA is currently analyzing information provided by nine hydraulic fracturing service companies, including a list of chemicals the companies identify as used in hydraulic fracturing operations. EPA officials said that they expect most of these chemicals disclosed by the service companies to appear on the TSCA inventory list, provided that chemicals are not classified solely as pesticides. EPA does not expect to be able to compare the list of chemicals provided by the nine hydraulic fracturing service companies to the TSCA inventory until the release of a draft report of the Study on the Potential Impacts of Hydraulic Fracturing on Drinking Water Resources for peer review, expected in late 2014. For those chemicals that are listed, some hydraulic fracturing service companies may be manufacturers, processors, or distributors, and could be subject to certain TSCA reporting provisions. On August 4, 2011, Earthjustice and 114 others filed a petition with EPA asking the agency to exercise TSCA authorities and issue rules to require manufacturers, processors, and distributors of chemicals used in oil and gas exploration or production to develop and/or provide certain information. The petition asserts that more than 10,000 gallons of such chemicals may be used to fracture a single well. 
EPA denied the portion of the petition requesting that EPA issue a TSCA section 4 rule to require identification and toxicity testing of chemicals used in oil and gas exploration or production, stating that the petition did not set forth facts sufficient to support the findings required for such test rules. The petition also requested that EPA issue new rule(s) under TSCA section 8 to require, for these chemicals, maintenance and submission of various records, call-in of records of allegations of significant adverse reactions, and submission of all existing not previously reported health and safety studies. EPA granted the section 8(a) and 8(d) portions of the petition in part, stating that the agency believes “there is value in initiating a proposed rulemaking process under TSCA authorities to obtain data on chemical substances and mixtures used in hydraulic fracturing,” but denying them so far as they concern other chemical substances used in oil and gas exploration and production but not in hydraulic fracturing. EPA is drafting an Advance Notice of Proposed Rulemaking for the section 8(a) and (d) rules. As of August 31, 2012, EPA has not released a publication date for this proposed rulemaking. EPA also intends to convene a stakeholder process to gather additional information for use in developing a proposed rule, and “to develop an overall approach that would minimize reporting burdens and costs, take advantage of existing information, and avoid duplication of efforts.” EPA officials said that the agency will consider, among other things, how to address confidential business information as it develops the proposal. A TSCA section 8(a) rule, once issued, may require reporting, insofar as known or reasonably ascertainable, of such chemical information as chemical names, molecular structure, category of use, volume, byproducts, existing environmental and health effects data, disposal practices, and worker exposure. 
Regulations promulgated under TSCA section 8(d) are to require submission to EPA of reasonably ascertainable health and safety studies. TSCA provides EPA with certain enforcement authorities. For example, EPA may impose a civil penalty for certain violations of TSCA, such as failing to comply with requirements to notify and provide certain information to EPA before manufacturing a new chemical, or by using for commercial purposes a chemical substance that the user had reason to know was manufactured, processed, or distributed in violation of such requirements, among other things. Appendix IX: Selected State Requirements All six states we reviewed have state agencies responsible for implementing and enforcing environmental and public health requirements, which include overseeing oil and gas development (see table 11). In five of the six states we reviewed, this responsibility is split primarily between two different agencies. In general, one of these agencies has primary responsibility for regulating oil and gas development activities such as drilling that occur on the well pad and for managing and disposing of certain wastes generated on-site, while the other agency has a broader mandate for implementing and enforcing environmental or public health requirements, some aspects of which may affect oil and gas development. For example, the Colorado Oil and Gas Conservation Commission regulates activities such as drilling, hydraulic fracturing, and disposal of produced water in Class II UIC wells, while the Colorado Department of Public Health and Environment regulates discharges to surface waters, commercial solid waste facilities, and certain air emissions. In contrast, oil and gas development in Pennsylvania is primarily governed by one agency—the Pennsylvania Department of Environmental Protection. 
This appendix presents information about state statutory and regulatory requirements in the areas of siting and site preparation (see table 12); drilling, casing, and cementing (see table 13); hydraulic fracturing (see table 14); well plugging (see table 15); site reclamation (see table 16); waste management in pits (see table 17); waste management through underground injection (see table 18); and managing air emissions (see table 19). Requirements presented in the following tables have been summarized mainly from state regulations, though references to state statutes are included in certain circumstances. Appendix X: Crosswalk between Selected Requirements from EPA, States, and Federal Lands Table 20 is intended to show representative areas of regulation, focused on substantive requirements specific to oil and gas wells. The table includes EPA’s environmental and public health requirements, requirements from the six states included in our review, and additional requirements that apply for the development of federally-owned mineral resources. Other activities at oil and gas well sites may also be subject to federal or state regulation. Appendix XI: Comments from the Department of Agriculture Appendix XII: Comments from the Department of the Interior Appendix XIII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Barbara Patterson, Assistant Director; Elizabeth Beardsley; David Bieler; Antoinette Capaccio; Cindy Gilbert; Armetha Liles; Alison O’Neill; and Janice Poling made key contributions to this report.
Technological improvements have allowed the extraction of oil and natural gas from onshore unconventional reservoirs such as shale, tight sandstone, and coalbed methane formations. Specifically, advances in horizontal drilling techniques combined with hydraulic fracturing (pumping water, sand, and chemicals into wells to fracture underground rock formations and allow oil or gas to flow) have increased domestic development of oil and natural gas from these unconventional reservoirs. The increase in such development has raised concerns about potential environmental and public health effects and whether existing federal and state environmental and public health requirements are adequate. GAO was asked to review environmental and public health requirements for unconventional oil and gas development and (1) describe federal requirements; (2) describe state requirements; (3) describe additional requirements that apply on federal lands; and (4) identify challenges, if any, that federal and state agencies reported facing in regulating oil and gas development from unconventional reservoirs. GAO identified and analyzed federal laws and state laws in six selected states (Colorado, North Dakota, Ohio, Pennsylvania, Texas, and Wyoming) and interviewed federal and state officials and representatives from industry, environmental, and public health organizations. GAO is not making recommendations. In commenting on the report, agencies provided information on recent regulatory activities and technical comments. As with conventional oil and gas development, requirements from eight federal environmental and public health laws apply to unconventional oil and gas development. For example, the Clean Water Act (CWA) regulates discharges of pollutants into surface waters. 
Among other things, CWA requires oil and gas well site operators to obtain permits for discharges of produced water—which includes fluids used for hydraulic fracturing, as well as water that occurs naturally in oil- or gas-bearing formations—to surface waters. In addition, the Resource Conservation and Recovery Act (RCRA) governs the management and disposal of hazardous wastes, among other things. However, key exemptions or limitations in regulatory coverage affect the applicability of six of these environmental and public health laws. For example, CWA also generally regulates stormwater discharges by requiring that facilities associated with industrial and construction activities get permits, but the law and its regulations largely exempt oil and gas well sites. In addition, oil and gas exploration and production wastes are exempt from RCRA hazardous waste requirements based on a regulatory determination made by the Environmental Protection Agency (EPA) in 1988. EPA generally retains its authorities under federal environmental and public health laws to respond to environmental contamination. All six states in GAO’s review implement additional requirements governing activities associated with oil and gas development and have updated some aspects of their requirements in recent years. For example, all six states have requirements related to how wells are to be drilled and how casing—steel pipe within the well—is to be installed and cemented in place, though the specifics of their requirements vary. The states also have requirements related to well site selection and preparation, which may include baseline testing of water wells before drilling or stormwater management. Oil and gas development on federal lands must comply with applicable federal environmental and state laws, as well as additional requirements. These requirements are the same for conventional and unconventional oil and gas development. 
The Bureau of Land Management (BLM) oversees oil and gas development on approximately 700 million subsurface acres. BLM regulations for leases and permits govern similar types of activities as state requirements, such as requirements for how operators drill the well and install casing. BLM recently proposed new regulations for hydraulic fracturing of wells on public lands. Federal and state agencies reported several challenges in regulating oil and gas development from unconventional reservoirs. EPA officials reported that conducting inspection and enforcement activities and having limited legal authorities are challenges. For example, conducting inspection and enforcement activities is challenging due to limited information, such as data on groundwater quality prior to drilling. EPA officials also said that the exclusion of exploration and production waste from hazardous waste regulations under RCRA significantly limits EPA’s role in regulating these wastes. In addition, BLM and state officials reported that hiring and retaining staff and educating the public are challenges. For example, officials from several states and BLM said that retaining employees is difficult because qualified staff are frequently offered more money for private sector positions within the oil and gas industry.
Background Most FEHBP plans contract with a PBM to help manage their prescription drug benefits, and those that do not contract with a PBM have internal components that employ techniques commonly used by PBMs, according to OPM officials. The three FEHBP plans we reviewed covered more than half of all FEHBP enrollees and paid $3.3 billion for about 65 million prescriptions dispensed to these enrollees in 2001. Table 1 shows plan enrollment and PBMs we reviewed. PBMs offer health plans a variety of services including negotiating price discounts with retail pharmacies, negotiating rebates with manufacturers, and operating mail-order prescription services and administrative claims processing systems. PBMs also provide health plans with clinical services such as formulary development and management, prior authorization and drug utilization reviews to screen prescriptions for such issues as adverse interactions or therapy duplication, and substitution of generic drugs for therapeutically equivalent brand-name drugs. In order to provide these services, PBMs operate with multiple stakeholders in a complex set of relationships, as shown in figure 1. Health plans are primarily responsible for overseeing PBM activities and for reporting to OPM any problems that could affect benefits service delivery to enrollees. OPM oversight responsibilities include negotiating plan benefits and changes, monitoring drug benefit service delivery, reviewing customer service reports, conducting on-site visits with pharmacy benefit managers, and handling appeals and complaints from FEHBP enrollees regarding their pharmacy benefits. PBMs Achieved Savings through Price Discounts, Rebate Payments, and Managing Drug Use PBMs achieved savings for FEHBP plans primarily by obtaining price discounts for drugs, obtaining rebate payments from manufacturers, and employing various intervention techniques to control drug utilization and cost. 
In comparison to cash-paying customer prices, PBMs we reviewed obtained significant discounts from retail pharmacies and offered even greater discounts when prescriptions were dispensed through mail-order pharmacies. In addition, PBMs passed on to plans some or all manufacturers’ rebates associated with the FEHBP plans’ contracts and used intervention techniques that reduced plan spending on drug benefits. PBMs Obtained Discounted Prices Significantly Below Those Paid by Cash-Paying Customers In comparison to prices cash-paying customers without third-party coverage would pay at retail pharmacies, the PBMs we examined achieved significant discounts for drugs purchased at retail pharmacies and offered even greater discounts through their mail-order pharmacies. The average price PBMs obtained for drugs from retail pharmacies was about 18 percent below the average price cash-paying customers would pay at retail pharmacies for 14 selected brand-name drugs and 47 percent below the cash price for 4 selected generic drugs. For the same quantity, the average price paid at mail order for the brand and generic drugs was about 27 percent and 53 percent below the average cash-paying customer price, respectively. (See fig. 2.) Moreover, PBMs we reviewed obtained greater discounts from retail pharmacies than did state Medicaid programs, which represent another major purchaser of drugs through retail pharmacies. We estimate that the average reimbursement rate for drugs by 5 Medicaid programs we reviewed was about 11 percent below the average price cash-paying customers would pay at retail pharmacies for the selected brand-name drugs (compared to 18 percent for the FEHBP plans we reviewed) and 23 percent below the average cash price for the selected generic drugs (compared to 47 percent for the FEHBP plans we reviewed). 
While PBMs negotiated prices significantly lower than a cash-paying customer would pay, these discounts may overstate the level of savings plans achieve from using PBMs since no benchmark exists to accurately determine what discounts plans would obtain without a PBM. In the absence of a PBM, FEHBP plans could obtain some level of drug price discounts from retail pharmacies and drug manufacturers but would also directly incur the costs associated with undertaking these responsibilities. Also, PBMs can negotiate deeper discounts for plans with smaller networks of retail pharmacies because the pharmacies can anticipate receiving a higher concentration of the plans’ enrollees. For example, BCBS introduced its basic option in 2002 that includes a smaller network of retail pharmacies—about 70 percent as many pharmacies as its standard option—and deeper discounts in its retail pharmacy payments compared to its standard option. PBMs Further Reduced Plans’ Drug Expenditures by Passing Through Certain Manufacturer Rebates PBMs also passed through to the FEHBP plans they contracted with some or all of drug manufacturer rebates associated with their FEHBP business. Over the past 4 years, we estimate that the plans we reviewed received rebate payments that effectively reduced plans’ annual spending on prescription drugs by 3 percent to 9 percent. The share of rebates PBMs pass through to plans varies and is subject to contractual agreements negotiated between PBMs and the plans. Rebates and formularies are interrelated. Drug manufacturers provide PBMs certain rebates depending not only on inclusion of their drugs on a plan’s formulary but also on the PBMs’ ability to increase a manufacturer’s market share for certain drugs. Formulary incentives, such as lower enrollee cost sharing for certain drugs compared to competing therapeutically equivalent drugs, encourage the former’s use. 
Manufacturers may pay higher rebates when formularies have stronger incentives to use specific drugs. Therefore, PBMs may be able to provide other health plans with higher rebates if their formularies are more restrictive than those of the FEHBP plans we examined. PBM Intervention Techniques Contributed to Plans’ Savings, but Are Difficult to Quantify Although PBM intervention techniques help contain plans’ cost increases by managing drug utilization and identifying opportunities to dispense less expensive drugs, their full impact on savings is not easily quantifiable. The FEHBP plans and PBMs we reviewed reported savings for individual intervention techniques ranging from less than 1 percent to 9 percent of plans’ total drug spending in 2001. Because plans varied in their use of intervention techniques and employed different cost savings methodologies, these estimates may not be comparable across plans. Techniques plans most commonly used included concurrent drug utilization review, prior authorization, therapeutic brand interchange, and brand to generic substitution. The reported cumulative effect of several techniques for one plan amounted to 14 percent of drug spending. Measuring cost savings from PBM intervention techniques is difficult for various reasons, including: Savings methodologies did not reflect the effect intervention techniques may have over time on enrollees’ utilization patterns and physicians’ prescribing practices. That is, there may be a sentinel effect from PBMs’ reviews whereby enrollees and physicians may stop filling or prescribing drugs that do not meet PBMs’ utilization review or refill criteria, but the extent to which these behavior changes occur is beyond the scope of PBMs’ data systems. 
Plans and PBMs we reviewed did not consistently measure the number or costs of drugs not dispensed as a result of PBM interventions that result in drug substitutions, denials for adverse drug interaction, or other interventions, making it difficult to estimate savings from certain intervention techniques. Plans did not systematically measure savings when the primary goal of the intervention technique was patient safety and compliance with drugs’ clinical guidelines. Among various intervention techniques, concurrent drug utilization and prior authorization provided some plans the largest quantifiable savings. The following are examples of intervention savings estimates reported by plans we reviewed. Drug utilization review includes the PBM examining prescriptions concurrently at the time of purchase to assess safety considerations, such as potential adverse interactions, and compliance with clinical guidelines, including quantity and dose. These reviews can also occur retrospectively to analyze enrollees’ drug utilization and physicians’ prescribing patterns. Two plans estimated savings from drug utilization review ranging from 6 percent to 9 percent, with about 60 percent to 80 percent of the savings from concurrent reviews, including claim denials from the PBM to prevent early drug refills and safety advisories to caution pharmacists about potential adverse interactions or therapy duplications. The remaining estimated savings are from retrospective reviews. Prior authorization requires enrollees to receive approval from the plan or PBM before dispensing certain drugs that treat conditions or illnesses not otherwise covered by plans, have high costs, have a high potential for abuse, or are ordered in unusual quantities. Some plans may also require prior authorization for nonformulary drugs. Each of the plans we reviewed required prior authorization for certain drugs such as growth hormones and a drug used to treat Alzheimer’s disease. 
Two plans reported savings from prior authorization ranging from 1 percent to 6 percent of plan spending for drugs that either were not dispensed or were substituted for with less costly alternatives. Therapeutic interchange encourages the substitution of less expensive formulary brand-name medications considered safe and effective for more expensive nonformulary drugs within the same drug class. Two plans reported savings ranging from 1 percent to 4.5 percent from therapeutic interchange. These estimates are in addition to savings associated with rebates plans earned for drugs in the formulary. Generic substitution involves dispensing less expensive, chemically equivalent generic drugs in place of brand-name drugs. Where a PBM specifically intervened by contacting the physician to change a prescription from requiring a brand name to allowing a generic drug, one plan reported savings of less than 1 percent of the plan’s total drug spending. The other two plans said they do not have readily available data to measure savings from PBM interventions for generic drugs. All three plans reported more general information on their generic drug use, but the extent to which generic drugs are used cannot solely be attributed to PBMs because plan benefit design and physician prescribing patterns also influence generic drug use. On average, the plans we reviewed reported that generic drugs were dispensed more often by retail pharmacies (about 45 percent of all drugs dispensed) than by mail-order pharmacies (about 34 percent). The difference in use of generic drugs may in part reflect differences in the types of drugs that are typically dispensed through retail and mail-order pharmacies. For drugs where a generic version was available, the retail and mail-order pharmacies dispensed generic drugs at more similar rates—on average 89 percent of the time for retail pharmacies and 87 percent of the time for mail-order pharmacies. 
PBMs Provided FEHBP Enrollees Generally Unrestricted Access to Prescription Drugs, Cost Savings, and Other Benefits PBMs we reviewed generally provided enrollees with access to a nearby pharmacy, maintained formularies for plan enrollees that included drugs in most major therapeutic categories, and provided access to nonformulary drugs when medically necessary. The FEHBP plans passed on savings generated by the PBMs to enrollees in the form of lower out-of-pocket costs for prescription drugs in certain instances, such as through lower cost sharing for drugs obtained through mail-order pharmacies, and a smaller increase in premiums for all enrollees than might occur absent the PBM savings. Enrollees also benefited from PBM intervention programs to prevent potentially dangerous drug interactions and customer service that generally met or exceeded quality standards established in contracts negotiated with the FEHBP plans. PBMs Provided Enrollees Access to Broad Retail Pharmacy Networks and Generally Nonrestrictive Drug Formularies Nearly all FEHBP enrollees had a retail pharmacy participating in their plan within a few miles of their residence. Two of the plans required the PBM to assure that at least 90 percent of enrollees had at least one pharmacy located within 5 miles of their residences. The PBMs for these plans reported to us they exceeded plans’ access standards and that close to 100 percent of enrollees live within 5 miles of a network pharmacy. The third plan did not have a specific contractual access standard, but plan officials said they have verified that well over 90 percent of enrollees live within 5 miles of a network pharmacy. We also compared the PBMs’ networks statewide in five states to the total of licensed retail pharmacies and found high levels of pharmacy participation. In most instances, we estimate that more than 90 percent to nearly 100 percent of licensed retail pharmacies participated in the PBM networks. 
Enrollees also had few restrictions on which drugs they could obtain. While the plans’ formularies varied with respect to the number of drugs covered, they included prescription drugs in most major therapeutic categories. To provide a benchmark for comparing the breadth and depth of the FEHBP formularies, we compared the three formularies to the outpatient prescription drugs included in the Department of Veterans Affairs (VA) National Formulary, considered by the Institute of Medicine to be not overly restrictive. Each plan included over 90 percent of the drugs listed on the VA formulary or a therapeutically equivalent alternative, and included at least one drug in 93 percent to 98 percent of the therapeutic classes covered by VA. (See table 2.) Each plan provided enrollees access to nonformulary drugs, although sometimes with higher cost sharing requirements. GEHA provided coverage to all nonformulary drugs at no additional cost to enrollees. BCBS had additional cost sharing requirements for nonformulary and certain formulary drugs under its basic option plan. Enrollees must pay a flat $25 copayment for formulary brand drugs but must pay the greater of a $35 copayment or 50 percent of the plan’s cost for nonformulary brand drugs (known as coinsurance). BCBS required the enrollees to pay the same 25 percent coinsurance for formulary and nonformulary drugs under its standard option plan. PacifiCare of California did not impose additional cost sharing for nonformulary drugs but generally required enrollees (or their physicians) to demonstrate the medical necessity and lack of effective alternative formulary drugs prior to approving coverage of a nonformulary drug. 
PBM Savings Helped Reduce Enrollees’ Costs for Out-of-Pocket Prescription Drug Spending and Premiums FEHBP enrollees benefited from cost savings generated from PBM services through lower costs for mail-order prescriptions, lower cost sharing linked to PBMs’ discounts obtained from retail pharmacies, and a lower increase in premiums overall. PBM mail-order pharmacy programs often provided for lower out-of-pocket costs for 90-day supplies of drugs than an enrollee would pay for the same prescriptions filled at a retail pharmacy. The GEHA high option plan and PacifiCare of California imposed lower cost-sharing requirements for mail order while the BCBS standard option plan imposed a flat copayment for mail order but required enrollees to pay 25-percent coinsurance at retail. The flat copayments provided an incentive for enrollees to use mail order for more expensive brand drugs. Only the GEHA standard plan included the same cost sharing requirements for both retail and mail order. (See table 3.) The interaction between a plan’s benefit design and PBM cost savings can also affect the amount of enrollees’ out-of-pocket costs for prescription drugs. For example, in instances where a plan required enrollees to pay a coinsurance rate representing a portion of the actual drug cost, enrollees shared directly in price discounts PBMs obtained from pharmacies. To illustrate, for a hypothetical drug with an undiscounted cash price of $64, and a PBM-obtained discount price of $52, an enrollee in a plan with a 25-percent coinsurance requirement would pay $13 rather than $16. In contrast, where a plan’s benefit design provides for a fixed copayment, such as $15 per prescription, enrollees would pay the same regardless of the discount that PBMs obtained. PBM savings were also passed on to enrollees in the form of premiums that were less than they otherwise would be. Fee-for-service FEHBP plan premiums are based on past years’ claims data for FEHBP enrollees. 
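The cost-sharing arithmetic in the hypothetical example above can be sketched in a few lines of Python. This is an illustrative sketch only, not a calculation from the report; the function name is ours, while the $64 cash price, $52 discounted price, 25-percent coinsurance rate, and $15 flat copayment are the hypothetical figures used in the text:

```python
# Illustrative sketch of the cost-sharing arithmetic described above.
# Not from the report; prices mirror its hypothetical example.

def out_of_pocket(drug_price, coinsurance_rate=None, copayment=None):
    """Enrollee cost under a coinsurance or a flat-copayment design."""
    if coinsurance_rate is not None:
        # Coinsurance is a share of the actual (possibly discounted) price,
        # so the enrollee benefits directly from PBM-negotiated discounts.
        return drug_price * coinsurance_rate
    # A flat copayment is the same regardless of the negotiated price.
    return copayment

cash_price = 64.00  # undiscounted cash price
pbm_price = 52.00   # PBM-negotiated discounted price

# 25-percent coinsurance: the enrollee pays $13 rather than $16.
print(out_of_pocket(pbm_price, coinsurance_rate=0.25))   # 13.0
print(out_of_pocket(cash_price, coinsurance_rate=0.25))  # 16.0

# Flat $15 copayment: the enrollee pays $15 either way.
print(out_of_pocket(pbm_price, copayment=15.00))         # 15.0
```

As the sketch shows, under coinsurance the PBM discount flows through to the enrollee at the point of sale, while under a flat copayment the discount accrues to the plan alone.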
Consequently, PBM reductions in plan claims costs for prescription drugs translate into lower premiums for enrollees in later years. For example, we estimate that PBM savings in the form of rebates passed on to the two fee-for-service FEHBP plans we examined between 1998 and 2000 translate into about a 1-percent decrease from what the plans’ future premiums would have been. In contrast to savings through cost sharing and other benefit design features that accrue only to those enrollees who use the prescription drug benefit, PBM savings in the form of premium savings accrue to all enrollees, regardless of whether they use prescription drugs. Enrollees Also Benefit from PBM Drug Utilization Review Programs and Customer Service Each FEHBP plan’s PBM provided a drug utilization review program to screen prescription drug therapies for such problems as adverse interactions, incorrect dosages, or improper duration of treatment. PBMs maintained a centralized database on each enrollee’s drug history and shared this information electronically with pharmacies at the time the prescription was filled. PBMs are often the only entity with complete information on a patient’s medications—particularly when enrollees are prescribed medication by more than one physician or fill prescriptions at different pharmacies. We have previously reported that automated drug utilization systems linked to a centralized database provide a more thorough prospective review and more benefits than reviews based on manual or local systems. PBMs provide customer service when they interact directly with FEHBP enrollees, such as when enrollees contact the PBMs to seek information about their prescriptions, resolve problems with having their prescription drugs filled, or obtain drugs through the mail-order pharmacy. Customer service quality is measured against customer service standards negotiated between each FEHBP plan and PBM. 
These standards included such measures as phone call answer time, mail-order prescription turn-around time and accuracy rates, and customer satisfaction as measured through enrollee surveys. Data provided by the PBMs indicate that they generally met or exceeded these standards, although we did not independently verify these data. Pharmacies Included in PBM Retail Networks Must Accept Discounted Prices and Perform Various Administrative Tasks Retail pharmacies that participate in the PBM networks used by FEHBP plans are affected by PBM policies and practices. For example, PBMs reimbursed pharmacies at levels below cash-paying customers, but above the pharmacies’ estimated drug acquisition costs. Processing PBM or other third-party prescriptions involves additional administrative requirements compared to cash transactions, and some PBMs may draw business away from retail pharmacies by providing savings and other incentives to encourage pharmacy customers to use PBMs’ mail-order pharmacies. Nevertheless, participation in the PBM retail networks is important for pharmacies because the PBMs serving the FEHBP plans we reviewed also contract with other clients that cumulatively represent a large share of the national population that purchase prescription and other nonprescription items from retail pharmacies. PBMs Reimbursed Retail Pharmacies Less than Cash-Paying Customers but Above Estimated Costs PBMs for the three FEHBP plans we reviewed reimbursed retail pharmacies at rates below what a cash-paying customer would pay but still above the pharmacies’ estimated acquisition costs. The average price paid for a typical 30-day supply was nearly 18 percent below the cash-paying customer price for 14 selected brand-name drugs and 47 percent below the average cash price for 4 selected generic drugs. As a result, the gross margin earned by retail pharmacies on the PBM transactions is lower on average than for cash-paying customers. 
We estimate that these PBM discounted prices are higher on average than the pharmacies' cost to acquire these drugs. Retail pharmacies typically purchase drugs from intermediary wholesale distributors and, to a lesser extent, from drug manufacturers directly. Because no data source exists to identify pharmacies' actual acquisition costs for drugs, we used the wholesale acquisition cost (WAC) and added a markup of 3 percent to estimate pharmacy acquisition costs for drugs purchased from wholesalers. Accordingly, for the three FEHBP plans we reviewed, we estimate that the prices that the PBMs paid to retail pharmacies provided an average margin of about 8 percent above the pharmacies' average acquisition costs for the 10 brand drugs we reviewed. These estimated margins on the drugs do not reflect a drug store's profit on drug sales because store overhead and dispensing costs are not deducted. They also do not reflect the costs of drugs when purchased directly from manufacturers rather than wholesalers, nor any rebates or discounts that pharmacies may receive from suppliers or manufacturers. Moreover, because WAC is an average of prices charged by manufacturers to multiple purchasers, it may not accurately reflect the acquisition costs for any individual retail pharmacy. PBM Transactions Require Additional Administrative Tasks and Incur Higher Processing Costs for Retail Pharmacies PBM and other third-party transactions require pharmacy staff to undertake tasks not associated with cash-paying customer transactions, such as submitting claims electronically, responding to prior authorization requests, contacting physicians to approve formulary drug substitutions, and responding to patients' questions about their health plan benefits. Pharmacists and pharmacy association representatives we interviewed indicated that the administrative requirements imposed by FEHBP-participating PBMs are generally similar to those imposed by PBMs associated with other health plans. 
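The acquisition-cost estimate described earlier in this section is simple arithmetic, and the sketch below restates it. The WAC value and PBM-negotiated price used here are hypothetical, chosen only to reproduce the roughly 8 percent average margin we report; they are not drug prices from our review.

```python
# Illustration of the acquisition-cost estimate: WAC plus an assumed
# 3 percent wholesaler markup. The $100 WAC and $111.24 PBM price
# below are hypothetical, not data from the drugs reviewed.
WHOLESALER_MARKUP = 0.03

def estimated_acquisition_cost(wac):
    """Estimated retail pharmacy acquisition cost: WAC plus a 3 percent markup."""
    return wac * (1 + WHOLESALER_MARKUP)

def pbm_price_margin(pbm_price, wac):
    """Margin of the PBM-negotiated price over the estimated acquisition cost."""
    cost = estimated_acquisition_cost(wac)
    return (pbm_price - cost) / cost

# A drug with a $100 WAC has an estimated acquisition cost of $103;
# a PBM price of $111.24 then implies roughly an 8 percent margin.
print(round(estimated_acquisition_cost(100.0), 2))   # 103.0
print(round(pbm_price_margin(111.24, 100.0), 3))     # 0.08
```

As the text notes, this margin is computed before store overhead and dispensing costs, so it is not a profit figure.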
Several studies have found that pharmacy staff spent significant time addressing third-party payment issues. For example, based on surveys of 201 retail pharmacies, one consultant found that 20 percent of pharmacy staff time was spent on activities directly related to third-party issues. A synthesis of multiple studies concluded that third-party prescriptions cost from $0.36 to $1.55 more than cash transactions to process. Compared to larger chain pharmacies, independent pharmacies may find PBM processing tasks particularly burdensome or costly. For example, independent pharmacies may be more likely to use pharmacists to process third-party transactions because they tend to have fewer other staff available, such as pharmacy technicians and clerks, according to a retail pharmacy association official. One study found that the average labor cost to process third-party prescriptions that required pharmacy staff intervention (such as responding to an initial claim denial) was 44 percent higher for an independent pharmacy than for a chain pharmacy. This study attributed the higher costs to independent pharmacies' greater reliance on pharmacists for performing certain third-party processing tasks. PBMs Use Financial and Other Incentives to Steer Retail Pharmacy Customers to Mail-Order Programs PBMs may also attempt to steer some enrollees away from retail pharmacies to their mail-order pharmacies. Two of the PBMs we reviewed send letters to some enrollees who purchase medications at a retail pharmacy informing them that their costs under the mail-order pharmacy program would be lower. These letters may include forms to facilitate the transfer of the prescription from the retail to the mail-order pharmacy. In 2001, the three FEHBP plans we reviewed dispensed 21 percent of all prescriptions through mail order, a higher share than the industry average. 
Nationally, a growing but still small share of prescription drugs is dispensed through mail-order pharmacies—about 5 percent of prescriptions and 17 percent of prescription sales in 2001. Most Pharmacies Participate in PBMs' Retail Networks Most licensed pharmacies participate in the FEHBP PBMs' retail pharmacy networks, in part because PBMs represent such a substantial market share—nearly 200 million Americans in 2001. Plan and PBM representatives noted that access to these enrollees benefits retail pharmacies by increasing traffic in the stores and thus sales of prescriptions and nonprescription items. According to NACDS, nonprescription sales nationally accounted for 5 percent of total sales for independent pharmacies and 39 percent of total sales for chain pharmacies in 2001. However, pharmacy association representatives report that PBMs' large market shares leave many retail pharmacies with little leverage in negotiating with PBMs. These officials indicate that retail pharmacies may have to "take or leave" a PBM's proposed contract, with actual negotiations occurring only in instances when a large chain will not accept the contractual terms or when an independent pharmacy without nearby competitors in a rural area must be included to meet health plans' access requirements. While it is difficult to assess how frequently these situations occur, chain pharmacies constituted 37 percent of all retail pharmacies and the top four chain drug stores accounted for 30 percent of all pharmacy sales in 2000, according to NACDS. PBMs Received Compensation from Plans and Payments from Manufacturers for Their FEHBP Business PBMs received compensation directly from FEHBP plans for administrative services and drug costs as well as payments from pharmaceutical manufacturers. (See fig. 3.) PBM earnings from administrative fees and payments for mail-order drugs paid by the plans we reviewed varied depending on contractual arrangements. 
In addition, the PBMs we reviewed varied as to whether they retained a portion of drug manufacturer rebates associated with the FEHBP contracts, and all the PBMs received other rebates or payments from drug manufacturers. Specifically, the PBMs we reviewed received administrative fees, payments for drugs, and manufacturer rebates for their FEHBP business. They also received other rebates or payments from drug manufacturers based on their entire line of business with a particular manufacturer. Administrative fees. PBMs charged plans fees for a broad range of clinical and administrative services, including utilization reviews, prior authorization, formulary development and compliance, claims processing, and reporting. Administrative fees for plans we reviewed varied but on average accounted for about 1.5 percent of total plan drug spending in 2001. Payments for Retail and Mail-Order Drugs. PBMs we reviewed retained little or no revenue from plan payments for retail drug costs and dispensing fees because these payments were largely passed through to retail pharmacies. While not disclosing their acquisition costs for mail-order drugs, PBM officials said that plan payments were somewhat higher than their payments to pharmaceutical manufacturers for mail-order drugs. Using the average manufacturer price (AMP) as a proxy for PBMs' mail-order acquisition costs, we estimate that the discounted prices for mail-order drugs that plans and enrollees paid were on average higher than the estimated mail-order acquisition costs for some (but not all) brand-name drugs and all generic drugs that we reviewed. On average, the AMP was about 2 percent below the plan prices for 7 of the 14 brand-name drugs we reviewed but about 3 percent higher than the plan prices for the other 7 brand-name drugs. The AMP was below plan prices for all four generic drugs we reviewed. Rebates. 
PBMs shared with the FEHBP plans certain rebates that drug manufacturers provided to the PBMs in connection with their FEHBP business, although the extent to which the PBMs retained a portion of these rebates varied, depending on the contracts negotiated between the plans and PBMs. We estimate that the rebates retained by the PBMs we reviewed represented less than half of one percent of total plan drug spending. The plans we reviewed varied as to whether they reimbursed PBMs separately for administrative services in exchange for a larger share of contractual rebates or received a smaller share of the contractual rebates and were charged low or no fees for administrative services. PBMs also received other manufacturer rebates or payments for services based on their total volume of a particular manufacturer's drugs sold through FEHBP plans and other plans. For example, one PBM we reviewed earned additional manufacturer rebates for its efforts to increase drug manufacturers' market share for certain products. The PBMs also received fees from manufacturers for various services, such as encouraging physicians to change prescribing patterns, educating enrollees regarding compliance with certain drug regimens, and data reporting services. These rebates and other payments were a large portion of PBMs' earnings, according to PBM officials and industry experts, but the actual amounts were not disclosed because they are proprietary. Public financial information suggests that manufacturer payments are important sources of earnings. For example, in financial reports submitted to the SEC, two of the PBMs we reviewed stated that manufacturer rebates and fees were key to their profitability. 
Concluding Observations PBMs are central to most FEHBP plan efforts to manage their prescription drug benefits, and PBMs have helped the FEHBP plans we reviewed reduce what they would likely otherwise pay in prescription drug expenditures while generally maintaining wide access to most retail pharmacies and drugs. As the cost of prescription drugs continues to increase, FEHBP plans are likely to encourage PBMs to continue to leverage their purchasing power with drug manufacturers and retail pharmacies and pass on the savings to the plans and their enrollees. However, attempts to achieve additional cost savings can involve trade-offs for plan enrollees. For example, additional savings through formulary management can accrue if more restrictive formularies are used, but enrollees would likely have unrestricted access to fewer drugs. Similarly, retail pharmacies may be willing to provide deeper discounts as part of smaller, more selective retail pharmacy networks. Smaller networks have the potential to draw more enrollees into participating stores but offer enrollees access to fewer retail pharmacies. OPM, FEHBP plans, and PBMs must balance these trade-offs in designing affordable and accessible prescription drug benefits for federal employees. Agency and Other Comments and Our Evaluation We provided a draft of this report to OPM, the three plans and three PBMs we reviewed, two pharmacy associations (NACDS and NCPA), and two independent expert reviewers. In written comments, OPM generally concurred with our findings. OPM highlighted the advantages and trade-offs associated with FEHBP plans' use of PBMs in providing affordable drug benefits and providing enrollees with access to prescription drugs. Appendix II contains OPM's comments. The plans and PBMs reviewed the report for the accuracy of information regarding their arrangements and provided technical comments regarding information we reported about them, which we incorporated as appropriate. 
Two independent external experts on pharmaceutical drug pricing who were not affiliated with PBMs, pharmacies, or drug manufacturers indicated that the draft was fair and balanced. They also provided technical comments that we incorporated as appropriate. In oral comments, NACDS’ Vice President for Policy and Programs expressed strong concerns, particularly focusing on the scope of our work, and NCPA’s Senior Vice President for Government Affairs and General Counsel separately informed us that he generally concurred with NACDS’ comments. NACDS’ concerns included the following: Our draft did not adequately address the overall PBM industry and how it operates, including special economic relationships that may exist between some drug manufacturers and PBMs. The NACDS representative stated that these relationships create incentives for PBMs to encourage use of certain manufacturers’ drugs even if they are more costly to the plan or enrollees. As we noted in the draft, we were asked to examine the role of PBMs specifically for FEHBP-participating plans and enrollees, not the PBM industry in general. While the savings we report through discounts, rebates, and certain interventions do not reflect whether PBMs encourage higher-cost drugs, the FEHBP plans we reviewed informed us they believed they saved money from using PBMs. Relationships between PBMs and manufacturers and pharmacies for other plans were beyond the scope of this report. In response to the concern about PBMs’ influence on drug switching, we added information based on two PBMs’ filings with the SEC regarding an ongoing Department of Justice investigation of certain PBMs’ relationships with pharmaceutical manufacturers and retail pharmacies. The draft report did not include information about all three plans’ use of generic drugs, which is one means to reduce the overall cost of the drug benefit. 
In the draft report, we addressed savings PBMs achieve through direct interventions to switch from a prescribed brand drug to a generic, as opposed to overall generic use rates, which are affected by other factors such as plans' benefit designs. To clarify our findings, we added information on the relative use of generic drugs among the retail and mail-order pharmacy services for the plans we reviewed. Our finding that the PBMs we reviewed retained little or no compensation from the payments they receive from plans for retail drugs, because they pass these payments on in total to the retail pharmacies, seemed inconsistent with NACDS' experience. While PBMs' contractual arrangements with other plans may differ, the contractual arrangements with the FEHBP-participating plans we reviewed resulted in the PBMs passing through to the retail pharmacies the entire payment that they receive from the plans. Our estimate that retail pharmacies' drug acquisition costs are on average about 8 percent below the payments they receive from the FEHBP plans we reviewed implies this is a profit and does not adequately acknowledge overhead costs. Our draft report stated that this estimated margin does not reflect a retail drug store's profit because it does not include overhead costs or certain other savings that may be available to some drug stores. We revised the report to better clarify this point and added information regarding NACDS' and other recent studies' estimates of overhead costs for retail pharmacies on a per-prescription basis. We are sending copies of this report to the Director of the Office of Personnel Management, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-7118. 
Another contact and key contributors to this assignment are listed in appendix III. Appendix I: Scope and Methodology We examined the use of pharmacy benefit managers (PBM) by three Federal Employees Health Benefits Program (FEHBP) plans: Blue Cross and Blue Shield (BCBS), Government Employees Hospital Association (GEHA), and PacifiCare of California. Together, these plans accounted for about 55 percent of the 8.3 million people covered through FEHBP plans as of July 2002 and represented various plan types and PBM contractors. BCBS contracted with the two largest PBMs in the United States, Medco Health Solutions and AdvancePCS, for its pharmacy benefit services. GEHA contracted with Medco Health Solutions and PacifiCare of California contracted with Prescription Solutions, another subsidiary of PacifiCare Health Systems. We reviewed contracts between the PBMs and plans, financial statements regarding payments made between the plans and PBMs, and retail and mail-order prices for selected drugs from the FEHBP plans we reviewed and the PBMs with which they contracted. We also obtained pricing information from retail pharmacies, interviewed officials at the Office of Personnel Management (OPM), the federal agency responsible for administering FEHBP, and associations representing PBMs and retail pharmacies, and reviewed studies regarding the use of PBMs and prescription drug payments. Specifically, to assess the drug discount savings PBMs achieved, we selected 18 drugs that were among the drugs with the highest expenditures or number of prescriptions dispensed based on data reported by the plans. Combined, these 18 high-volume/high-expenditure drugs represented 12 percent of all prescriptions dispensed to enrollees of the selected FEHBP plans and 16 percent of total plans’ drug expenditures in 2001. In selecting these drugs, we also sought to ensure a distribution of generic and brand drugs for a range of treatment conditions sold by different drug manufacturers. 
Table 4 lists the drugs included in our price comparisons. At our request, the plans provided prices paid as of April 2002 for the most common strength, dosage form, and quantity dispensed for these drugs at retail pharmacies (typically, a 30-day supply) and at mail-order pharmacies (typically, a 90-day supply). Prices represent the plan and enrollees’ share of the drug ingredient cost—expressed as a discount from an industry standard price such as the average wholesale price (AWP) or maximum allowable cost (MAC)—plus a dispensing fee. We did not independently verify the accuracy of these plan-reported prices. To compare prices negotiated with PBMs for retail and mail-order prescriptions to cash prices a customer without third-party coverage would pay at retail pharmacies, we surveyed 36 pharmacies in California, North Dakota, Washington, D.C., and the Virginia and Maryland suburbs of Washington, D.C., from April 18 through April 30, 2002. We selected the locations to be geographically diverse, specifically including California because it is the only state in which PacifiCare of California operates, North Dakota to include a state with a low population density, and the Washington, D.C., metropolitan area because it includes a large number of FEHBP enrollees. We randomly selected 12 pharmacies in each of these areas, including both large chain pharmacies and independent or small chain pharmacies. We determined that each of the pharmacies surveyed participated in the retail networks for each of our selected FEHBP plans serving that area. From each pharmacy, we obtained prices for a 30-day supply of the 18 selected drugs. These prices are applicable only to the pharmacies surveyed and at the time they were obtained. We also compared prices plans paid to retail and mail-order pharmacies to the pharmacies’ estimated acquisition costs. 
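The pricing convention the plans reported—the drug ingredient cost expressed as a discount from an industry benchmark such as AWP, plus a dispensing fee—is simple arithmetic, sketched below. The benchmark price, discount rate, and dispensing fee in this example are hypothetical, not values reported by the plans we reviewed.

```python
# Negotiated pharmacy price: benchmark ingredient cost (e.g., AWP or MAC)
# less a contracted discount, plus a dispensing fee. All figures here are
# hypothetical illustrations.
def negotiated_price(benchmark_price, discount_rate, dispensing_fee):
    return benchmark_price * (1 - discount_rate) + dispensing_fee

# A drug with a $90 AWP, a 15 percent discount, and a $2.00 dispensing fee:
print(negotiated_price(90.0, 0.15, 2.00))  # 78.5
```

The total computed this way is the combined plan and enrollee share of the price paid to the pharmacy, before cost sharing is split between them.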
Retail pharmacies typically purchase drugs from intermediary wholesale distributors and—to a lesser extent—drug manufacturers, while PBM-owned mail-order pharmacies more typically purchase drugs from manufacturers. Since no data source exists to identify pharmacy acquisition costs, we estimated retail pharmacies' acquisition costs for drugs purchased from wholesalers using the wholesale acquisition cost (WAC) reported in Red Book, a compilation of drug pricing data published by Medical Economics Company, Inc., as of April 2002. We added 3 percent to WAC to estimate the wholesalers' margin, based on information provided by retail pharmacy officials. To estimate mail-order pharmacies' acquisition costs for drugs purchased directly from drug manufacturers, we used industry-reported and confidential average manufacturer price (AMP) information obtained from the Centers for Medicare & Medicaid Services. We selected WAC and AMP prices for our 18 selected drugs using the most common national drug code reported by the plans for reimbursing retail and mail-order prescription claims. The acquisition costs we have estimated cannot be generalized beyond the drugs we reviewed. Also, the acquisition costs we reported are based on averages for the drugs we reviewed, and individual pharmacies or mail-order operations may have higher or lower acquisition costs. To assess enrollee access to prescription drugs, we compared the number of retail pharmacies in the plans' retail pharmacy networks to the total number of licensed retail pharmacies in California, the District of Columbia, Maryland, North Dakota, and Virginia. To examine the breadth and depth of each plan's formulary, we compared each plan's formulary to the National Formulary developed by the Department of Veterans Affairs (VA). 
Although the VA formulary was designed for the veteran-specific population, it is considered by the Institute of Medicine as not overly restrictive based on its comparison with other formularies and clinical literature. We obtained the National Formulary from the VA's Pharmacy Benefits Management Strategic Healthcare Group. The VA formulary contains approximately 1,200 items, including generic, brand name, and over-the-counter drugs, devices, and supplies. We requested that VA officials remove devices, supplies, and drugs that are usually prescribed on an inpatient basis or are available over-the-counter because the FEHBP plans we reviewed cover inpatient drugs as part of the hospital benefit and do not cover drugs available over-the-counter. The resulting list included 513 outpatient prescription drugs representing 162 therapeutic classes. To examine the breadth and depth of each plan's formulary relative to these outpatient prescription drugs from the VA formulary, we determined whether each of the drugs and therapeutic classes included on the list of drugs drawn from the VA formulary was also included on each of the plan formularies. Each plan also provided us with examples of therapeutically equivalent drugs included on the plan's formulary for drugs that did not have an exact match on the VA formulary list. We considered a VA therapeutic class to be included on a plan formulary if at least one of the VA drugs in that class or a therapeutically equivalent drug was listed in the plan formulary. For VA therapeutic classes not included on a plan formulary, we used National Institutes of Health and Medco Health Solutions on-line databases to analyze the types of medical conditions treated by the excluded drugs within these classes. 
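The inclusion test we applied to each therapeutic class can be stated as a simple set operation, sketched below. The drug and formulary names are invented placeholders; the actual VA and plan formulary contents are not reproduced here.

```python
# A VA therapeutic class counts as included on a plan formulary if the
# plan lists at least one VA drug in that class or a therapeutically
# equivalent drug. Drug names are placeholders, not real formulary data.
def class_included(va_class_drugs, therapeutic_equivalents, plan_formulary):
    candidates = set(va_class_drugs) | set(therapeutic_equivalents)
    return not candidates.isdisjoint(plan_formulary)

plan_formulary = {"drug_a", "drug_c"}
# drug_c is a plan-provided therapeutic equivalent for this class:
print(class_included(["drug_x", "drug_y"], ["drug_c"], plan_formulary))  # True
# No VA drug or equivalent in this class appears on the plan formulary:
print(class_included(["drug_x", "drug_y"], [], plan_formulary))          # False
```

Classes failing this test were the ones we then analyzed for the types of medical conditions their excluded drugs treat.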
Appendix II: Comments from the Office of Personnel Management Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments The following staff made important contributions to this report: Rashmi Agarwal, Randy Dirosa, Betty Kirksey, Carmen Rivera-Lowitt, and Annesha White. Related GAO Products
VA and DOD Health Care: Factors Contributing to Reduced Pharmacy Costs and Continuing Challenges. GAO-02-969T. Washington, D.C.: July 22, 2002.
Medicare Outpatient Drugs: Program Payments Should Better Reflect Market Prices. GAO-02-531T. Washington, D.C.: March 14, 2002.
Prescription Drugs: Prices Available Through Discount Cards and From Other Sources. GAO-02-280R. Washington, D.C.: December 5, 2001.
Medicare: Payments for Covered Outpatient Drugs Exceed Providers' Cost. GAO-01-1118. Washington, D.C.: September 21, 2001.
VA Drug Formulary: Better Oversight Is Required, but Veterans Are Getting Needed Drugs. GAO-01-183. Washington, D.C.: January 29, 2001.
Prescription Drugs: Adapting Private Sector Management Methods for a Medicare Benefit. GAO/T-HEHS-00-112. Washington, D.C.: May 11, 2000.
Prescription Drug Benefits: Applying Private Sector Management Methods to Medicare. GAO/T-HEHS-00-84. Washington, D.C.: March 22, 2000.
Pharmacy Benefit Managers: FEHBP Plans Satisfied With Savings and Services, but Retail Pharmacies Have Concerns. GAO/HEHS-97-47. Washington, D.C.: February 21, 1997.
Rising prescription drug costs have contributed to rising employer health plan premiums in recent years. Most federal employees, retirees, and their dependents participating in the Federal Employees Health Benefits Program (FEHBP), administered by the Office of Personnel Management (OPM), are enrolled in plans that contract with pharmacy benefit managers (PBM) to administer their prescription drug benefits. GAO was asked to examine how pharmacy benefit managers participating in the federal program affect health plans, enrollees, and pharmacies. GAO examined the use of PBMs by three plans representing about 55 percent of the 8.3 million people covered by FEHBP plans. For example, GAO surveyed 36 retail pharmacies on prices that a customer without third-party coverage would pay for 18 high-volume or high-expenditure drugs and compared these prices to prices paid by the plans and PBMs. The PBMs reviewed produced savings for health plans participating in FEHBP by obtaining drug price discounts from retail pharmacies and dispensing drugs at lower costs through mail-order pharmacies, passing on certain manufacturer rebates to the plans, and operating drug utilization control programs. For example, the average price PBMs obtained from retail pharmacies for 14 brand-name drugs was about 18 percent below the average price paid by customers without third-party coverage. Enrollees in the plans reviewed had wide access to retail pharmacies and coverage of most drugs, and they benefited from cost savings generated by the PBMs. Enrollees typically paid lower out-of-pocket costs for prescriptions filled through mail-order pharmacies and benefited from other savings that reduced plans' costs and therefore helped to lessen rising premiums. Most retail pharmacies participate in the FEHBP plans' networks in order to obtain business from the large number of enrollees covered. 
Pharmacy associations report that the PBMs' large market shares leave some retail pharmacies with little leverage in negotiating with PBMs. Retail pharmacies must accept discounted reimbursements from PBMs they contract with and perform additional administrative tasks associated with claims processing. OPM generally concurred with GAO's findings. The plans and PBMs reviewed provided technical comments, and two independent reviewers stated the report was fair and balanced. One pharmacy association expressed strong concerns, including that the report did not more broadly address economic relationships in the PBM industry. GAO examined relationships between the PBMs and manufacturers and pharmacies specific to their FEHBP business. However, relationships between PBMs and other entities for other plans were beyond the report's scope.
Background FHA and RHS operate a variety of loan guarantee programs, organized under three budget accounts, that support the financing of single-family and multifamily housing, as well as healthcare facilities (see fig. 1). The guarantees substantially reduce the financial risk for lenders in the event that borrowers default, thereby allowing lenders to make loans available to more borrowers. FHA and RHS loan guarantees for multifamily properties are often combined with other financing sources, such as low-income housing tax credits and tax-exempt bonds issued by states and localities. FHA is the federal government’s principal provider of mortgage loan guarantees and operates numerous loan guarantee programs. In fiscal year 2004, FHA guaranteed over $107 billion in loans under the MMI/CMHI account, the vast majority of which occurred within the 203(b) program. The 203(b) program provides loan guarantees for the purchase or refinancing of single-family homes. The other program in the MMI/CMHI account is the Section 213 program, which guarantees mortgage loans to facilitate the construction, substantial rehabilitation, and purchase of cooperative housing projects. Because both programs currently have negative subsidy costs, neither requires credit subsidy budget authority. The MMI/CMHI account received $185 billion in commitment authority in fiscal year 2004. FHA’s GI/SRI account, which received $29 billion in commitment authority in fiscal year 2004, supports an array of programs. These include programs that facilitate the development, construction, rehabilitation, purchase, and refinancing of multifamily apartments and healthcare facilities. For example, the 221(d)(4) program—FHA’s largest multifamily program—guarantees loans to for-profit developers of multifamily apartments, and the 221(d)(3) program guarantees loans to nonprofit developers. 
The GI/SRI account also includes several specialized single-family programs, such as the 203(k) (rehabilitation mortgage), Section 234 (condominiums), Title I (property improvement and manufactured housing), and Section 255 (home equity conversion mortgage) programs. In contrast to the MMI/CMHI account, several of the programs in the GI/SRI account have positive subsidy costs, which require credit subsidy budget authority. In fiscal year 2004, four GI/SRI account programs—Section 221(d)(3), Section 241, Multifamily Operating Loss, and Title I Property Improvement—received $15 million in credit subsidy budget authority. RHS’s loan guarantee programs, as a whole, are much smaller than FHA’s, are targeted to rural areas, and have more income restrictions. Under its RHIF account, RHS guarantees loans through two programs—the Section 502 program and the Section 538 program. The Section 502 program serves rural residents with incomes not exceeding 115 percent of the U.S. median income who wish to purchase or refinance a single-family home. In fiscal year 2004, this program received $2.7 billion in commitment authority and about $40 million in credit subsidy budget authority. The Section 538 program guarantees loans to nonprofit or for-profit developers for the construction, acquisition, and rehabilitation of multifamily rental housing in rural areas that serve households with incomes that do not exceed 115 percent of area median income. In fiscal year 2004, this program received $100 million in commitment authority and about $6 million in credit subsidy budget authority. In formulating and executing the budgets for their loan guarantee programs, FHA and RHS must adhere to specific federal budgetary and accounting requirements. The process of preparing their annual budget request requires that FHA and RHS prepare estimates of the dollar amount of loans they anticipate guaranteeing nearly 2 years in advance. 
These estimates influence the amount of commitment authority and credit subsidy budget authority the agencies request and receive. The Federal Credit Reform Act of 1990 requires the President’s Budget to reflect the costs of credit programs and include the planned level of new loan guarantees associated with each appropriation request. Agencies therefore must calculate and estimate the long-term cost, known as the credit subsidy cost, to the federal government of extending or guaranteeing credit and the amount of new loan guarantees they plan on making. The agencies estimate these costs for each program by calculating a credit subsidy rate that takes into account factors such as fees, defaults, and recoveries, and applying this rate to the total dollar amount of loans they anticipate guaranteeing. When an agency decides to guarantee a loan, it uses this rate to determine the credit subsidy cost of doing so. Under programs requiring positive credit subsidies, the agency can issue the new guarantee only if the budget authority to cover this cost is available. In contrast, programs with negative subsidies are constrained only by commitment authority, which limits the amount of financial risk the federal government assumes each year. FHA and RHS receive their commitment authority limit and appropriations of credit subsidy budget authority on a somewhat different basis. Although FHA, as required, estimates its commitment authority and credit subsidy needs for each loan guarantee program under the MMI/CMHI and GI/SRI accounts, it requests and receives these authorities on an account, rather than a program, basis. Congress has generally not specified a level of commitment authority or credit subsidy budget authority for each program. 
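The arithmetic behind these budget figures is straightforward: the credit subsidy cost is the subsidy rate applied to the anticipated loan volume, and, for programs funded on a program basis, the supportable commitment level is the appropriated budget authority divided by that rate. The 1.5 percent rate below is an illustrative round number roughly consistent with the Section 502 figures cited earlier ($40 million in credit subsidy budget authority against $2.7 billion in commitment authority); it is not an official subsidy rate.

```python
# Credit subsidy arithmetic with illustrative, non-official figures.
def credit_subsidy_cost(subsidy_rate, loan_volume):
    """Estimated long-term cost to the government of guaranteeing the loans."""
    return subsidy_rate * loan_volume

def commitment_authority_limit(budget_authority, subsidy_rate):
    """Loan volume supportable by a given credit subsidy appropriation."""
    return budget_authority / subsidy_rate

cost = credit_subsidy_cost(0.015, 2_700_000_000)       # about $40.5 million
limit = commitment_authority_limit(40_000_000, 0.015)  # about $2.67 billion
print(f"${cost / 1e6:.1f} million, ${limit / 1e9:.2f} billion")
```

The two calculations are inverses of one another, which is why a small error in the estimated subsidy rate or in anticipated demand can translate into a large gap between appropriated authority and the volume of guarantees actually sought.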
In addition, FHA routinely requests and receives a commitment authority limit that exceeds the dollar amount of loans it has estimated it will make, a practice that helps prevent exhaustion of commitment authority before the end of a fiscal year. In contrast, RHS receives credit subsidy budget authority on a program basis. That is, the Section 502 program and Section 538 program receive separate appropriations of credit subsidy budget authority. For both of these programs, the commitment authority limit is the amount of credit subsidy budget authority divided by the credit subsidy rate.

Difficulties in Estimating Demand Underlie FHA and RHS’s 10 Suspensions of Loan Guarantee Programs Since 1994

On 10 occasions since 1994, FHA and RHS have suspended the issuance of loan guarantees under certain programs because the programs effectively exhausted their commitment authority or credit subsidy budget authority before the end of a fiscal year. Specifically, FHA suspended programs six times and RHS four times. Several factors contributed to these suspensions, including unforeseeable fluctuations in mortgage interest rates that led to changes in the demand for loan guarantees. Further, the need to make budget estimates nearly 2 years in advance compounds the difficulty of predicting demand. As a result, and because of resource constraints and competing priorities within the federal budget, the resources appropriated for these programs have not always reflected the amounts required to keep them operating for an entire fiscal year.

FHA Suspended Guaranteed Loan Programs Six Times Over the Past Decade

As shown in table 1, FHA has suspended the issuance of loan guarantees under certain programs six times since 1994 after effectively exhausting the commitment authority or credit subsidy budget authority for these programs before the end of the fiscal year.
For example, from fiscal year 1994 through fiscal year 2004, FHA suspended the programs with positive subsidy costs under its GI/SRI account three times—in February 1994, July 2000, and April 2001—after effectively exhausting the credit subsidy budget authority under this account. FHA has also suspended all of the programs under the GI/SRI account three times after effectively exhausting the account’s commitment authority. On September 16, 2003, FHA suspended the issuance of loan guarantees under the GI/SRI account until Congress raised the commitment authority limit in a supplemental appropriations act. The other two suspensions occurred while the agency was operating under a series of continuing resolutions in early fiscal year 2004. The first suspension occurred in early December 2003, when FHA exhausted the $3.8 billion in commitment authority provided in the first of these resolutions. FHA lifted the suspension after receiving an additional $3.9 billion under a subsequent resolution in mid-December 2003 but suspended the programs again after exhausting this amount on January 14, 2004. FHA restarted the programs approximately 2 weeks later, after Congress passed the Consolidated Appropriations Act, 2004.

RHS Suspended Its Section 502 Program Four Times during the Past Decade

RHS has suspended its Section 502 program four times since 1994—in fiscal years 1995, 1996, 2003, and 2004—after effectively exhausting its credit subsidy budget authority. However, in some cases, RHS was able to take actions that delayed or mitigated the impact of the suspensions on borrowers and lenders. For example, in early August 2003, RHS transferred $3.6 million in budget authority from the Section 523 (Mutual and Self-help Technical Assistance) grant program to the Section 502 program in order to delay suspension of the program. This transfer enabled the Section 502 program to guarantee an additional $297 million in loans and delayed suspension of the program until August 27.
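The transfer just described also implies the Section 502 program's approximate subsidy rate at the time. The division below is our back-of-the-envelope derivation from the figures cited above, not an official RHS number:

```python
# August 2003 transfer: $3.6 million in budget authority supported an
# additional $297 million in Section 502 loan guarantees.
budget_authority_transferred = 3_600_000
additional_guarantees = 297_000_000

# Implied credit subsidy rate, roughly 1.2 percent: each dollar of
# budget authority supported about $82 of guarantees that year.
implied_subsidy_rate = budget_authority_transferred / additional_guarantees
guarantees_per_subsidy_dollar = additional_guarantees / budget_authority_transferred
```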
Also, during the 4-week suspension period, RHS continued to accept and approve loan guarantee applications submitted by lenders and committed to issuing the guarantees as soon as it received its next appropriation. Further, in March 2004, RHS anticipated that the Section 502 program would exhaust its credit subsidy budget authority early in the fourth quarter. In June, RHS increased the program’s guarantee fee and transferred a total of $7 million in budget authority from the Section 504 (Natural Disaster), 514 (Farm Labor Housing), 515 (Multifamily Housing), 516 (Rural Housing Assistance), and 538 programs to the Section 502 program. The increase in the guarantee fee enabled RHS to issue an additional $100 million in loan guarantees, and the transfers enabled RHS to issue an additional $531 million in loan guarantees. Although these actions delayed suspension of the program, RHS eventually had to suspend the program on August 24, 2004. During this suspension period, RHS again accepted and approved loan guarantee applications submitted by lenders and committed to issuing the guarantees as soon as it received its next appropriation.

Difficulties in Estimating Program Demand Contributed to the Exhaustion of Commitment and Budget Authority before the End of a Fiscal Year

Due partly to difficulties in estimating the demand for loan guarantee programs, the resources budgeted for these programs have not always reflected the amounts required to keep them operating for a full fiscal year. Estimating demand for budget purposes is difficult for several reasons. A primary reason is that demand for loan guarantees is highly responsive to interest rates, which are volatile and difficult to forecast. For example, due in part to the decline in mortgage interest rates in fiscal year 2003, the number of FHA single-family refinancing loans was 60 percent higher than in fiscal year 2002.
According to FHA officials, they could not have anticipated the interest rate change or reflected it in their fiscal year 2003 budget. As a result, FHA used its commitment authority faster than anticipated and effectively exhausted the authority for the GI/SRI account 2 weeks before the end of the fiscal year. Similarly, according to RHS officials, low interest rates in fiscal year 2003 resulted in significantly higher demand for Section 502 loan guarantees (and a corresponding increase in the use of commitment authority) compared with the previous 3 years (see fig. 2). Because RHS based its fiscal year 2003 budget estimate primarily on actual demand levels from these prior years, the amount the agency requested and was appropriated for the Section 502 program was not adequate to fund the program for the entire fiscal year, resulting in suspension of the program in late August 2003. In addition, FHA and RHS have implemented program and policy changes that were not foreseen or whose specific effects could not be known at the time the agencies developed their budgets. For example, in response to a statutory change that occurred after HUD submitted its fiscal year 2004 budget request, FHA increased its individual loan limits for multifamily housing in high-cost areas during the second and third quarters of fiscal year 2004. FHA officials told us that while they expected that these changes would result in higher utilization of commitment authority, they could not have factored them into the department’s budget request. Additionally, in the beginning of fiscal year 2003—well after federal agencies had developed their budgets for that year—the administration established a goal to increase the number of minority homeowners by at least 5.5 million families by 2010. To help achieve this goal, RHS, among other things, lowered its guarantee fee, conducted outreach with minority lenders, and promoted credit counseling and homeownership education. 
According to RHS, these actions helped increase loan volume under the Section 502 program to an historic high but could not have been taken into account in preparing the agency’s fiscal year 2003 budget. Compounding the difficulty in predicting demand is the federal budget process, which requires that FHA and RHS submit to OMB estimates of the dollar amount they anticipate guaranteeing in a given year nearly 2 years in advance. The agencies’ estimates influence the amount of commitment authority and credit subsidy budget authority the agencies request and receive through the budget process. Because these estimates are prepared so far in advance, they cannot be made with a high level of certainty. Further, the agencies’ appropriations do not always reflect estimates of program demand because of resource constraints and competing priorities within the federal budget.

FHA and RHS Manage Their Programs in a Similar Manner but Estimate and Notify Congress of the Rate at Which They Will Exhaust Commitment and Budget Authority Differently

FHA and RHS basically manage their loan guarantee programs on a first-come, first-served basis, a factor limiting both agencies’ ability to control the rate at which they use commitment authority and obligate credit subsidy budget authority. FHA is required to estimate, at least monthly, the rate at which it will use commitment authority for the remainder of any fiscal year and notify Congress (1) if an estimate indicates that the agency will exhaust its commitment authority before the end of a fiscal year or (2) when 75 percent of the authority has been used. FHA has recently complied with the 75 percent notification requirement, but could not provide us with documentation of notifications prior to fiscal year 2003.
FHA has also prepared the estimates on a daily basis since the beginning of fiscal year 2004 and determined that none of the estimates indicated that it would exhaust its commitment authority before the end of a fiscal year. Our analysis indicates that FHA’s basic approach for making estimates does not always accurately forecast whether the agency will exhaust its commitment authority; however, FHA officials and federal budget experts said that more complex methods would not necessarily produce better estimates. Although not subject to the same requirements as FHA, RHS periodically estimates the rate at which it will obligate credit subsidy budget authority for its Section 502 and Section 538 programs and in recent years has notified Congress when the agency’s estimates indicated that the Section 502 program would deplete its budget authority before the end of the fiscal year.

FHA and RHS Manage Their Loan Guarantee Programs on a First-Come, First-Served Basis

FHA and RHS basically manage their loan guarantee programs on a first-come, first-served basis, a factor that limits control over the rate at which they use commitment authority and obligate credit subsidy budget authority. More specifically, according to FHA and RHS officials, neither agency prioritizes or rejects eligible applications as long as sufficient commitment and budget authority are available because they have determined that, with few exceptions, they lack the authority to do so. The agencies do not, for example, try to reduce their utilization or obligation rates by placing a higher priority on smaller loans than larger loans. FHA officials told us that even if they had this authority, they would not want to be in the position of judging whether loans under one program should be guaranteed before loans under another program or choosing between eligible loans under the same program.
Consequently, all FHA programs (those with positive and negative subsidy costs) under the same account provide loan guarantees until the account’s commitment authority is exhausted, or, for programs with positive subsidy costs, until either the account’s commitment authority or credit subsidy budget authority is exhausted. FHA and RHS implement the first-come, first-served approach somewhat differently. Although FHA makes loan guarantees through its single-family and multifamily housing field offices, it does not allocate commitment authority and credit subsidy budget authority to these offices in advance of using and obligating the authorities. In contrast, under its Section 502 program, RHS first allocates the budget authority to its state offices based on a formula. Each state office then obligates the budget authority on a first-come, first-served basis. RHS also maintains a central reserve that can be used to supplement funding to state offices that run out of budget authority before the end of the fiscal year. In addition, RHS may redistribute budget authority from state offices that have more than necessary to state offices with shortfalls. For its Section 538 program, RHS obligates budget authority on a first-come, first-served basis without first allocating the funds to its state offices. Appendix II provides additional information on FHA’s and RHS’s processes for making loan guarantees.

FHA Has Specific Estimation and Notification Requirements for Utilization of Commitment Authority and Relies Primarily on a Straightforward Estimation Process to Satisfy These Requirements

FHA is required by statute to estimate, on at least a monthly basis, the rate at which it will use commitment authority for the remainder of the fiscal year and to notify Congress (1) when 75 percent of the authority has been used or (2) if estimates indicate that the authority will be exhausted before the end of the year.
These notifications help Congress to determine whether supplemental authority may be needed to prevent a suspension of the programs due to the exhaustion of commitment authority. These requirements do not pertain to FHA’s credit subsidy budget authority. To determine when it has reached the 75 percent level, FHA continuously monitors the amount of commitment authority used under its loan guarantee programs. FHA currently relies on several unintegrated data systems to monitor its authority balances. An FHA official receives end-of-day activity reports from all guaranteed lending programs on commitment authority utilization and credit subsidy budget authority obligations and manually enters the data into a spreadsheet on a daily basis. By the end of calendar year 2006, FHA expects to complete the implementation of a new subsidiary ledger accounting system that, according to FHA officials, will replace the spreadsheet and provide them with real-time utilization and obligation data. Although FHA notified Congress, as required, when it had used 75 percent of its commitment authority in fiscal years 2003 and 2004, it could not provide us with documentation of notifications prior to fiscal year 2003. Specifically: In June 2003, FHA notified Congress that it had used 75 percent of the commitment authority under the MMI/CMHI account and that it anticipated using 75 percent of the commitment authority under the GI/SRI account within a few weeks. In January 2004, FHA notified Congress that while it did not anticipate exhausting the commitment authority provided under a continuing resolution, it had used 75 percent of the commitment authority under the MMI/CMHI account.
In July 2004, FHA notified Congress that the agency estimated it would use 75 percent of the commitment authority under the GI/SRI account within a few weeks and that while the utilization rate was slightly lower than the rate necessary to exhaust the commitment authority before the end of the fiscal year, there was a possibility of a shortfall. FHA has estimated the rates of future use of commitment authority on a daily basis since fiscal year 2004, essentially using a “straight-line” method that applies the utilization rate experienced up until the time of the analysis to the remainder of the fiscal year. To supplement the straight-line estimates, FHA officials indicated that they also use their judgment and experience to factor in market and economic variables, such as interest rates. Although FHA provided us with examples of its straight-line estimates, it did not maintain records of its more comprehensive estimates, which incorporated judgments about these other variables. FHA officials told us that none of these more comprehensive estimates made after the agency had received its fiscal year 2004 appropriation clearly indicated that either the MMI/CMHI or the GI/SRI account would exhaust its commitment authority before the end of the fiscal year. The officials said that they do not make similar estimates of obligation rates for credit subsidy budget authority but indicated that they monitor actual obligations on a daily basis and monitor anticipated obligations by periodically querying the FHA field offices that process loan guarantees. Although a straight-line estimation analysis has its limitations, FHA officials told us they do not believe that a more complex method for making estimates—one that might systematically account for the effects of additional variables—would necessarily result in more accurate estimates because of the inherent unpredictability of the demand for loans. They also said that it would be difficult to develop such a method. 
Officials from OMB, CBO, and housing industry groups agreed that it is difficult to estimate the rate at which commitment authority will be used and that a more complex method may not yield better estimates. While FHA maintains data on its utilization of commitment authority, it could not provide us with complete records of its straight-line estimates. In the absence of these estimates, we analyzed FHA’s data on commitment authority utilization and found that a basic straight-line method cannot always accurately predict whether the agency will exhaust its commitment authority before the end of a fiscal year. As shown in table 2, by the end of March 2003—halfway through the fiscal year—FHA had used less than half of its commitment authority (45.5 percent) under the GI/SRI account. Assuming the same utilization rate for the second half of the year (i.e., 45.5 percent over 6 months), we estimated that FHA would have used 91 percent of its commitment authority by the end of the fiscal year. However, in actuality, FHA used 91 percent of its commitment authority by the end of August—earlier than it might have estimated based on a straight-line analysis—and was forced to suspend the issuance of loan guarantees under this account in the middle of September. Even if FHA had conducted this analysis at the end of June, it would have estimated that it would use less than 100 percent of its commitment authority by the end of the fiscal year. Further, as shown in table 3, straight-line calculations can also overestimate utilization. Specifically, an analysis conducted at the end of March 2004, when FHA had used 52.5 percent of the commitment authority under the GI/SRI account, would have projected that FHA would exhaust the authority before the end of the fiscal year and that almost 105 percent of its commitment authority would be needed in order to prevent a suspension. However, FHA actually used only 95.2 percent of its total commitment authority by the end of the fiscal year. 
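The straight-line method, and both of the outcomes described above, reduce to a one-line calculation. The figures below are the ones cited in the text for the GI/SRI account:

```python
def straight_line_year_end(pct_used_so_far, fraction_of_year_elapsed):
    # FHA's basic method: project year-end utilization by assuming the
    # rate observed so far continues for the rest of the fiscal year.
    return pct_used_so_far / fraction_of_year_elapsed

# Fiscal year 2003: 45.5 percent used by the end of March (the halfway
# point) projects 91 percent for the year, yet FHA actually reached
# 91 percent by the end of August and suspended in mid-September.
fy2003_projection = straight_line_year_end(45.5, 0.5)   # 91.0

# Fiscal year 2004: 52.5 percent at the halfway point projects about
# 105 percent, but actual year-end utilization was only 95.2 percent.
fy2004_projection = straight_line_year_end(52.5, 0.5)   # 105.0
```

The same arithmetic thus underestimated utilization in one year and overestimated it in the next, which is the limitation the analysis above identifies.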
Variations in utilization rates are a fundamental reason why FHA faces difficulty in estimating its use of commitment authority for the entire year. For example, in fiscal year 2003, FHA’s monthly utilization rates ranged from 3.5 percent in November to 14.2 percent in December. In addition, the widely varying size of multifamily projects adds to the difficulty in projecting volume, and a single large project can significantly change a utilization rate.

RHS Does Not Have Estimation and Notification Requirements but Has Relied on a Complex Estimation Process to Notify Congress of the Rate at Which It Obligates Budget Authority

Although not subject to the same requirements as FHA, RHS, as a matter of policy, monitors its obligations of credit subsidy budget authority on a daily basis and has recently notified Congress when it appeared that its Section 502 program would exhaust its credit subsidy budget authority before the end of a fiscal year. In August 2003, RHS notified Congress that credit subsidy budget authority for the Section 502 program would soon be exhausted and that the agency was exercising its authority to transfer budget authority between programs to help cover the expected shortfall. Similarly, in early 2004, RHS officials notified Congress that credit subsidy budget authority for the Section 502 program might be exhausted by July 2004 because of a strong demand for housing that would most likely remain constant or increase. Then, in June 2004, RHS notified Congress that credit subsidy budget authority for the Section 502 program would be exhausted early in the fourth quarter and that in order to continue guaranteeing loans, RHS would (1) increase the guarantee fee—effectively decreasing the subsidy rate and allowing the agency to guarantee more loans—and (2) exercise its authority to transfer budget authority.
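The fee increase works through the subsidy-rate arithmetic: with a fixed pool of credit subsidy budget authority, a lower subsidy rate supports a larger guarantee volume. The rates and dollar amount below are illustrative assumptions, not RHS's actual figures:

```python
def guarantee_capacity(budget_authority, subsidy_rate):
    # Loan volume a fixed pool of credit subsidy budget authority
    # can support at a given positive subsidy rate.
    return budget_authority / subsidy_rate

# Illustrative only: $2 million in remaining budget authority.
before_capacity = guarantee_capacity(2_000_000, 0.0132)  # rate before fee increase
after_capacity = guarantee_capacity(2_000_000, 0.0110)   # lower rate after fee increase
extra_capacity = after_capacity - before_capacity        # roughly $30 million more
```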
In contrast, RHS officials told us that in August 2004 they estimated that the Section 538 program would exhaust its credit subsidy budget authority by September 15, but did not notify Congress of this situation. However, the program was able to operate until the end of the fiscal year. In contrast to FHA, RHS’s estimation process is less formulaic, more reliant on staff judgment, and performed less frequently. To estimate when the Section 502 program may exhaust its budget authority, RHS officials told us they analyze obligation data and external variables at least monthly. RHS officials explained that, depending on current program performance and time elapsed into the fiscal year, they may base the estimate on obligation rates from a specific prior year or an average of several prior years and on differences in obligations from the previous year(s) to the current year. RHS officials emphasized that they also use their experience and judgment to incorporate market and economic information, such as interest rates and data on new housing starts, into formulating the estimates. Because RHS’s estimation process (1) can differ from one estimate to another, (2) relies heavily on program officials’ interpretations of external variables, and (3) does not include documentation of all the data used and assumptions made in reaching the estimates, we could not replicate this process to assess it. However, RHS provided us the results of an estimate from April 2004, which accurately predicted that the Section 502 program would exhaust its budget authority before the end of the fiscal year. Because RHS’s Section 538 program is relatively small—it guaranteed 42 loans in fiscal year 2003—RHS officials told us they are able to estimate whether the credit subsidy budget authority for the program will be sufficient for the entire fiscal year by surveying RHS’s state offices and participating lenders about anticipated demand for loan guarantees. 
Congress, FHA, and RHS Could Exercise Options to Help Prevent Suspensions, but Options Would Have Other Implications

Through discussions with FHA, RHS, OMB, CBO, and housing industry officials, and a review of relevant literature, we identified options—some of which would require statutory changes—that could provide better warning of future suspensions of loan guarantee programs or help prevent them altogether. For example, by requiring FHA to provide more frequent notifications concerning its commitment authority balances and creating notification requirements for FHA and RHS concerning their balances of credit subsidy budget authority, Congress could gain additional and more timely information to consider whether supplemental appropriations would be needed to prevent program suspensions. Congress could also provide FHA higher annual limits on commitment authority to minimize the likelihood that the agency would exhaust this authority before the end of a fiscal year. To help prevent program suspensions due to the exhaustion of credit subsidy budget authority, Congress could (1) combine multifamily programs with negative and positive subsidy costs under the GI/SRI account to eliminate the need for credit subsidy appropriations, (2) authorize FHA to use negative subsidies to cover any shortfalls in credit subsidy budget authority, or (3) make budget authority from the subsequent year’s appropriation available in the current year. Finally, the agencies can continue to use or be given additional administrative tools to help delay or prevent program suspensions due to exhaustion of credit subsidy budget authority. However, each of the options we identified would have legal, budgetary, administrative, or oversight implications, and their specific impacts would depend on how they were structured and implemented.
Expanding FHA Notifications on the Use of Commitment Authority

As noted previously, FHA is currently required to notify its authorizing and appropriations committees when it has used 75 percent of the commitment authority for the MMI/CMHI and GI/SRI accounts. (In contrast to FHA, RHS—which manages its programs based on credit subsidy budget authority—does not have a notification requirement.) Congress could require FHA to provide additional notifications before and after the agency has reached the 75 percent level—for example, when the agency has used specified percentages of commitment authority or at certain points in the fiscal year. More frequent notifications would provide additional and more timely information to Congress on the status of commitment authority balances for FHA’s MMI/CMHI and GI/SRI accounts. For example, in June 2003, FHA, as required, notified Congress that it would soon use 75 percent of the commitment authority in its GI/SRI account. However, this was the only notification Congress received prior to FHA’s suspension of the GI/SRI account programs in mid-September. Had FHA been required to provide an additional notification when it reached, for example, the 90 percent level, Congress would have been notified in August—when there was a strong possibility that the programs would need to be suspended—giving Congress timelier information to consider providing supplemental commitment authority that could have prevented the suspension. FHA could implement this option with little administrative effort because it already maintains the data on its commitment authority balances that would be needed to meet expanded notification requirements.

Expanding Notifications to Include Obligations of Credit Subsidy Budget Authority by FHA and RHS

As discussed previously, the exhaustion of credit subsidy budget authority before the end of a fiscal year has resulted in FHA and RHS suspending the issuance of loan guarantees.
Currently, neither agency is required to notify Congress of the status of its balances of credit subsidy budget authority. Congress could require FHA and RHS to provide such notifications—for example, when they have obligated specified percentages or at certain points in the fiscal year. These notifications would apply only to FHA’s GI/SRI account and RHS’s Section 502 and 538 programs, which require credit subsidy budget authority. Requiring these notifications would provide Congress with more information to use in considering if supplemental appropriations would be needed to prevent program suspensions. FHA and RHS could implement this option with little administrative effort because they already maintain the data on their balances of credit subsidy budget authority that would be needed to meet the notification requirements.

Establishing a Higher Limit on FHA Commitment Authority

The amount of commitment authority for FHA’s loan guarantee programs is set in annual appropriations acts and serves as a limitation on the volume of loans the agency can guarantee. For programs under FHA’s MMI/CMHI account, this limitation exists even though they generate substantial negative subsidies. As noted previously, for the programs with positive subsidy costs under the GI/SRI account, the volume of loans FHA can guarantee is also limited by annual appropriations of credit subsidy budget authority. FHA’s annual budget requests and enacted levels of commitment authority for its MMI/CMHI and GI/SRI accounts reflect commitment authority limits that usually exceed the dollar volume of loans the agency estimates it will actually guarantee. According to FHA officials, the “cushion” between the enacted commitment authority limit and FHA’s estimate of guarantees is intended to minimize the possibility of FHA exhausting its authority before the end of the fiscal year.
The enacted commitment authority limits are increased periodically to reflect growth in the loan guarantee programs over time but do not always reflect changes in FHA’s estimates from year to year. As a result, the difference between the enacted commitment authority limits and FHA’s estimates—what FHA refers to as “standby authority”—has varied considerably. For example, from fiscal years 1999 through 2004, the enacted commitment authority limits exceeded FHA’s estimates by anywhere from 5 to 49 percent for the MMI/CMHI account and 0 to 94 percent for the GI/SRI account. To overcome the inherent difficulties in forecasting program demand and to help ensure that FHA’s commitment authority limit is high enough to prevent program suspensions, Congress could enact total commitment authority limits that exceed FHA’s estimates by at least a minimum level. With a higher commitment authority limit, it is possible that FHA would guarantee a higher volume of loans—thereby assuming a greater insurance risk—than it would otherwise. In that event, loan programs with negative subsidy costs, such as FHA’s 203(b) program, would, all other things being equal, increase the amount of negative subsidies available to offset FHA’s budget but also increase the agency’s exposure to risk. In contrast, loan volume for programs with positive subsidy costs under FHA’s GI/SRI account would continue to be limited by the annual credit subsidy appropriation and so would not be affected by this option. Depending on the level of additional loan guarantee activity resulting from a higher limit, FHA may also require supplemental administrative resources to process, review, and manage additional loan guarantees.

Combining Multifamily Programs under FHA’s GI/SRI Account for Credit Subsidy Purposes

Currently, several multifamily, healthcare, and single-family programs make up FHA’s GI/SRI account, and programs may have a positive or negative credit subsidy rate.
Under the Federal Credit Reform Act of 1990, the President’s Budget must reflect the costs of loan guarantee programs and must include the amount of new loan guarantees planned. Federal agencies must therefore prepare a budget estimate for each loan guarantee program which represents the amount of credit subsidy budget authority the program would require or the amount of negative subsidy the program would generate. For example, for fiscal year 2004, FHA estimated that it would need approximately $8 million in credit subsidy budget authority for three multifamily programs under the GI/SRI account. FHA also estimated that the remaining six multifamily programs under the account would generate approximately $79 million in negative subsidies. As proposed by the Millennial Housing Commission in 2002, HUD could combine all nine of these programs for credit subsidy purposes, which, unless current credit subsidy rates and levels of program activity changed dramatically, would result in a single negative credit subsidy rate and thus eliminate the need for annual appropriations of credit subsidy budget authority. Currently, negative subsidies generated by some of FHA’s multifamily programs are considered as offsetting receipts in the agency’s annual budgets. Using some of the negative subsidies to, in effect, pay for the positive subsidies required for other GI/SRI programs would reduce the offset, all other things being equal. The elimination of credit subsidy appropriations under a combined multifamily program could compensate for the reduced offset. However, because the programs with positive subsidies would no longer be constrained by appropriations of budget authority, they could experience more activity and higher resulting costs than they would otherwise, thus increasing the budget deficit (all other things being equal). 
Because FHA already estimates credit subsidy rates for each multifamily program to comply with Federal Credit Reform Act requirements, limited additional administrative effort would likely be required to merge these rates into a single rate. This option would require congressional action and pose several challenges to Congress and FHA. For example, to the extent that the option may be inconsistent with Federal Credit Reform Act requirements, Congress would have to provide FHA a limited exception to these requirements. Further, congressional oversight would be affected because combining the programs would eliminate the need for credit subsidy budget authority. Therefore, congressional appropriators would only be able to control the size of the programs through limits on commitment authority. Additionally, to maintain its current level of oversight, Congress would need to ensure that HUD continued providing the estimated cost of, and number of guarantees under, individual programs in its annual budget requests. This option would also require FHA to alter its accounting and record keeping systems to accurately track the budget activity for the combined programs. Authorizing Use of Negative Subsidies to Cover Shortfalls in Credit Subsidy Budget Authority for FHA In recent years, negative subsidies generated by the single-family and multifamily programs under FHA’s GI/SRI account have exceeded the account’s positive subsidy requirements (i.e., credit subsidy costs) by over $200 million per year. A bill introduced in April 2001 would authorize FHA to use negative credit subsidies from its GI/SRI account programs to cover the credit subsidy costs of making loan guarantees if FHA exhausted the original appropriation of credit subsidy budget authority before the end of a fiscal year. 
If this option were implemented, it would be unlikely—given the current credit subsidy rates and level of activity for each program—that FHA would have to suspend the issuance of loan guarantees for GI/SRI account programs due to the exhaustion of credit subsidy budget authority. The proposal would require Congress to amend section 519 of the National Housing Act (codified at 12 U.S.C. § 1735c) to allow the use of negative subsidies as budget authority for programs with positive subsidy costs, which could result in these programs experiencing more activity and higher resulting costs than they would otherwise, thus increasing the budget deficit (all other things being equal). From a budgeting perspective, this option would prevent these subsidies from being used as offsetting receipts in HUD’s overall budget. As a result, additional appropriations or cuts in HUD’s other discretionary spending might be required to compensate for the elimination of the offset. Further, the amount of negative subsidies that CBO estimated FHA would need to cover shortfalls in credit subsidy budget authority would be charged against FHA’s overall budget authority in the current fiscal year. Appropriating Advanced Funding for Credit Subsidy Costs at FHA and RHS To help ensure that FHA and RHS programs with positive subsidy costs would not be suspended due to exhaustion of credit subsidy budget authority, Congress could also provide “advance funding” for FHA and RHS program credit subsidy costs. Advance funding authorizes agencies, if necessary, to charge obligations in excess of the specific amount appropriated for that year to the next fiscal year’s appropriation. Congress could stipulate in the agencies’ annual appropriations acts that an additional amount of budget authority would automatically be made available to cover additional credit subsidy costs in the current fiscal year if the original appropriation of credit subsidy budget authority were exhausted. 
For example, Congress could specify this amount as a fixed sum or a percentage of the original appropriation. If FHA or RHS were to obligate any of these additional amounts, the amounts would be charged to the agencies’ appropriations of credit subsidy budget authority for the subsequent fiscal year. All other things being equal, this would reduce the amount of budget authority available in the subsequent year. Continuing or Expanding Currently Permitted Practices at FHA and RHS, Such As Increasing Fees or Transferring Budget Authority FHA and RHS have existing tools that they can and have used to help delay or prevent program suspensions. For example, FHA and RHS establish application or guarantee fees for their loan guarantee programs and have the discretion to change them during the fiscal year. All other things being equal, raising fees lowers the credit subsidy rate for the affected program and allows the agencies to cover the credit subsidy costs for more loan guarantees. For example, in June 2004, RHS increased its loan guarantee fee by 25 basis points (0.25 percent) on all Section 502 guaranteed loans. RHS indicated that the fee increase allowed it to reduce its credit subsidy rate and thereby cover the credit subsidy costs for more than 1,000 additional loan guarantees. Additionally, and as discussed previously, RHS has limited authority to transfer budget authority to cover resource shortfalls. RHS used this authority in fiscal years 2003 and 2004, when it transferred funds from various loan and grant programs to cover the credit subsidy costs for the Section 502 program. FHA does not have, but could be given, similar authority by Congress. The agencies cannot transfer budget authority or change fees without significant administrative effort. According to FHA officials, changing application fees requires them to promulgate regulations, while increasing guarantee fees requires them to develop and place a notice in the Federal Register. 
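The fee-and-subsidy arithmetic described above, where raising fees lowers a program's credit subsidy rate and lets a fixed appropriation cover more guarantees, can be sketched as follows. The figures are illustrative only, not actual RHS or FHA rates:

```python
def guaranteeable_volume(budget_authority, subsidy_rate):
    """Loan volume whose credit subsidy costs a fixed appropriation can cover.

    subsidy_rate is the estimated long-term cost to the government per
    dollar of loans guaranteed (e.g., 0.010 means 1.0 percent).
    """
    return budget_authority / subsidy_rate

# Illustrative only: a $30 million credit subsidy appropriation at a
# 1.0 percent subsidy rate covers about $3.0 billion in guarantees.
# If a fee increase lowers the rate to 0.9 percent, the same
# appropriation covers roughly $3.33 billion, i.e., more guarantees.
before = guaranteeable_volume(30_000_000, 0.010)
after = guaranteeable_volume(30_000_000, 0.009)
print(after > before)  # True: a lower subsidy rate stretches the appropriation
```

This is why a mid-year fee increase can delay a suspension: the appropriation itself is unchanged, but each dollar of it supports more loan volume.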
Furthermore, increasing fees makes loan guarantees less affordable for borrowers. Finally, administrative transfers of budget authority cannot be made without budget authority being available elsewhere in an agency's budget, and they require concurrence by OMB. Agency Comments We provided a draft of this report to HUD and USDA for their review and comment. HUD provided comments in a letter from the Deputy Assistant Secretary for Finance and Budget (see app. IV). HUD agreed with our findings but said it saw difficulties with each of the options we presented for helping to prevent program suspensions. HUD cited specific difficulties with some of the options. For example, HUD questioned the option to expand FHA notifications on the use of commitment authority, saying we presumed that Congress did not act to prevent the suspension of the GI/SRI account programs in fiscal year 2003 because it did not receive timely notifications. Our draft report did not make this presumption. Nevertheless, we clarified the final report to emphasize that had FHA been required to provide an additional notification once there was a strong possibility that the programs would need to be suspended, Congress would have had timelier information to consider providing additional commitment authority. HUD also commented that the option to combine multifamily programs under FHA's GI/SRI account for credit subsidy purposes is inconsistent with the Federal Credit Reform Act, which requires that credit subsidy rates be determined for each program. Our draft report indicated that this option would require congressional action. We added language to our final report to recognize that this could involve giving FHA a limited exception to Federal Credit Reform Act requirements to the extent that the option may be inconsistent with these requirements. 
Also, as our draft report stated, to maintain its current level of oversight, Congress would need to ensure that HUD continued providing the estimated cost of, and number of guarantees under, individual programs in its annual budget requests. HUD said that the option for appropriating advance funding for credit subsidy costs was a one-time-only solution because program activity in the year from which funding was advanced would be at risk for suspension due to inadequate credit subsidy budget authority. We disagree that the option would only be a one-time solution, because any year from which funding was advanced could likewise receive an advance from the subsequent fiscal year to avoid program suspensions, if necessary. USDA agreed with our findings and provided technical comments, which we incorporated into this report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of the letter. At that time, we will send copies to other interested Members of Congress and congressional committees and to the Secretaries of HUD and USDA. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions or comments on matters discussed in this report, please contact me at (202) 512-8678 or woodd@gao.gov or Steve Westley at (202) 512-6221 or westleys@gao.gov. Major contributors to this report are listed in appendix V. Scope and Methodology To determine how often and why FHA and RHS have suspended their loan guarantee programs due to the exhaustion of commitment authority or credit subsidy budget authority before the end of a fiscal year, we reviewed relevant agency and housing industry notices, budget data, and correspondence relating to program suspensions since fiscal year 1994. 
We also interviewed cognizant agency and housing industry officials. To determine how FHA and RHS manage, and notify Congress of, their use and obligation of these authorities, we reviewed laws, regulations, and guidance governing the agencies’ approval, monitoring, and estimation processes and the agencies’ procedures for informing Congress of the status of their loan guarantee programs. We also interviewed agency officials responsible for these tasks and obtained information on the information systems they use to administer their loan guarantee programs. Finally, to assess FHA’s approach for estimating utilization of commitment authority, we analyzed FHA monthly budget and accounting data for fiscal years 2003 and 2004. We conducted a straight-line analysis for each month within that time frame that assumed that the agency would use commitment authority for the remainder of the fiscal year at the same rate experienced previously in the year. To identify options that Congress, FHA, and RHS could exercise to help prevent the agencies from suspending their loan guarantee programs before the end of a fiscal year and the likely implications of these options, we interviewed budget, legal, and housing finance specialists from OMB and CBO; housing industry officials from the National Association of Home Builders, the Mortgage Bankers Association, and the National Association of Realtors; and we conducted a literature review to identify relevant studies and legislation. To determine and illustrate the potential implications of these options, we obtained these officials’ views on the effects of various alternatives and analyzed agency budget and accounting data. We assessed the reliability of the data used in our analyses by (1) reviewing existing information about the systems and the data, (2) interviewing agency officials knowledgeable about the data, and (3) examining the data elements (fields) used in our work by comparing known and/or anticipated values. 
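The straight-line method described above is simple arithmetic: apply the year-to-date monthly utilization rate to the remaining months of the fiscal year. A minimal sketch, using hypothetical figures rather than FHA data, might look like:

```python
def straight_line_projection(used_to_date, months_elapsed, months_in_year=12):
    """Project full-year usage by assuming the year-to-date monthly
    rate continues for the remainder of the fiscal year."""
    monthly_rate = used_to_date / months_elapsed
    return monthly_rate * months_in_year

def will_exhaust(used_to_date, months_elapsed, authority_limit):
    """Flag whether the straight-line projection exceeds the enacted limit."""
    return straight_line_projection(used_to_date, months_elapsed) > authority_limit

# Hypothetical example: $90 billion of commitment authority used through
# 6 months of the fiscal year, against a $165 billion annual limit.
projected = straight_line_projection(90.0, 6)  # 180.0 (billions)
print(will_exhaust(90.0, 6, 165.0))            # True: projection exceeds the limit
```

As the report notes, a projection of this kind is only as good as its assumption of a constant utilization rate, which is why it does not always accurately forecast exhaustion when demand shifts mid-year.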
When inconsistencies were found, we discussed our findings with agency officials to understand why the inconsistencies existed. We determined that the data were sufficiently reliable for the purposes of this report. We conducted our work in Washington, D.C., between January 2004 and January 2005 in accordance with generally accepted government auditing standards. FHA and RHS Loan Guarantee Processes FHA’s loan guarantee processes are different for its single-family and multifamily programs. As shown in figure 3, for FHA’s single-family programs, an FHA-approved lender determines a borrower’s (homebuyer’s) eligibility for an FHA loan guarantee. If the lender determines that the homebuyer and the property being financed are eligible, the loan case file is sent to an FHA field office for review. If the field office approves and issues the loan guarantee, FHA then records the amount of commitment authority used and, when appropriate, obligates credit subsidy budget authority. For FHA’s multifamily programs, the process begins when a borrower (developer) applies for a loan from an FHA-approved lender, who in turn submits a loan guarantee application to an FHA field office for review (see fig. 4). If the field office determines that the borrower and the property being financed are eligible, then the lender underwrites the loan and submits an application for commitment—the formal agreement by the government to guarantee the loan once the lender fulfills certain conditions—to the field office. If the field office approves the application, FHA then records the amount of commitment authority used and, when appropriate, obligates credit subsidy budget authority upon headquarters authorization, after which the field office issues the commitment. RHS also has separate loan guarantee processes for its Section 502 and Section 538 programs. For the Section 502 program, as shown in figure 5, a borrower (homebuyer) applies for a guaranteed loan through an RHS-approved lender. 
RHS is notified and reserves the required amount of credit subsidy budget authority. The RHS field office then reviews the loan documentation and, if the documentation meets RHS’s requirements, obligates credit subsidy budget authority and issues a conditional commitment. The lender then closes the loan and submits a loan closing package to the field office, which issues the loan guarantee. As shown in figure 6, for loans guaranteed under the Section 538 program, a borrower (developer) applies for a guaranteed loan through an RHS-approved lender. RHS selects proposals based on eligibility requirements and has a field office review the underwriting. The field office then forwards a request for credit subsidy budget authority to headquarters, which obligates the authority. Applicability of Options to Past Program Suspensions The usefulness of options for delaying or preventing suspensions of FHA’s and RHS’s guaranteed loan programs can be considered in light of whether they would have been applicable to past suspensions. (See table 4.) As previously noted, the expanded notification options would have provided additional information on the status of resources for FHA and RHS guaranteed lending programs and would thus have been applicable to most of the suspensions since fiscal year 2000. Providing a higher limit on commitment authority would have increased the amount of commitment authority available to FHA and, as a result, would have been applicable to the suspension of programs under FHA’s GI/SRI account in fiscal years 2003 and 2004 due to the exhaustion of commitment authority. 
The option that would combine the multifamily programs under FHA’s GI/SRI account for credit subsidy purposes would likely eliminate the need for appropriations of credit subsidy budget authority and therefore would have been applicable to the suspension of GI/SRI account programs due to the exhaustion of budget authority in fiscal years 2000 and 2001. The option that would permit the use of negative subsidies to cover shortfalls in credit subsidy budget authority would have been applicable to the same suspensions. In addition, the option that would appropriate advance funding for credit subsidy costs would have been applicable to the suspension of programs under FHA’s GI/SRI account in fiscal years 2000 and 2001 and the suspension of RHS’s Section 502 program in fiscal years 2003 and 2004—all of which were due to the exhaustion of credit subsidy budget authority. Further, the option to continue or expand currently permitted practices, such as increasing fees or transferring budget authority, would have been applicable to or was actually used to delay the same four suspensions. For example, RHS used its authority to increase fees to delay suspension of the Section 502 program in fiscal years 2003 and 2004. FHA could have taken similar steps to help avoid or delay the suspension of programs under its GI/SRI account in fiscal years 2000 and 2001. Finally, RHS used its authority to transfer budget authority to delay the suspension of its Section 502 program in fiscal years 2003 and 2004. If FHA had the authority to transfer budget authority, this option would have been applicable to its fiscal year 2000 and 2001 suspensions. 
Comments from the Department of Housing and Urban Development GAO Contacts and Staff Acknowledgments Staff members who made key contributions to this report include Eric Diamant, Ginger Tierney, Bill Sparling, Patty Hsieh, Barbara Roesmann, Carlos Diz, Linda Rego, Marc Molino, Jerry Sandau, Dan Blair, Christine Bonham, Marcia Carlsen, Rachel DeMarcus, and Alison Martin.
In fiscal year 2004, the Department of Housing and Urban Development's Federal Housing Administration (FHA) and the Department of Agriculture's Rural Housing Service (RHS) guaranteed approximately $136 billion in mortgages for single-family homes, multifamily rental housing, and healthcare facilities under a variety of programs. In past years, both agencies have occasionally had to suspend the issuance of guarantees under some programs when they exhausted the dollar amounts of their commitment authority (which serves as a limit on the volume of new loans that an agency can guarantee) or credit subsidy budget authority (the authority to cover the long-term costs--known as credit subsidy costs--of extending these guarantees) before the end of a fiscal year. These suspensions can be disruptive to homebuyers, developers, and lenders. GAO was asked to determine (1) how often and why FHA and RHS have suspended their loan guarantee programs over the last decade, (2) how these agencies manage and notify Congress of the rate at which the authorities for these programs will be exhausted, and (3) options Congress and the agencies could exercise to help prevent future suspensions and the potential implications of these options. On 10 occasions since 1994, FHA and RHS have suspended the issuance of loan guarantees after exhausting the commitment authority or credit subsidy budget authority for certain programs before the end of a fiscal year. Specifically, FHA suspended several programs six times and RHS suspended one program four times. The resources budgeted for these programs have not always been adequate to keep them operating for a full fiscal year due partly to difficulties in estimating demand for loan guarantees--a difficulty compounded by the process of preparing the budget request to Congress, which requires that the agencies forecast demand nearly 2 years in advance. 
FHA and RHS both manage their programs on a first-come, first-served basis, a factor limiting their ability to control the rate at which they use commitment authority and obligate budget authority. However, the agencies have different requirements and approaches for estimating the rate at which they will exhaust these authorities and notifying Congress. For example, unlike RHS, FHA is statutorily required to notify Congress when it has used 75 percent of its commitment authority and when it estimates that it will exhaust this authority before the end of a fiscal year. GAO's analysis indicates that FHA's basic approach for making estimates--applying utilization rates experienced up until the time of the analysis to the remainder of the fiscal year--does not always accurately forecast whether the agency will exhaust its commitment authority. However, FHA officials and federal budget experts said that more complex methods would not necessarily produce better estimates. Through discussions with federal agency and mortgage industry officials, GAO identified several options that Congress, FHA, and RHS could exercise to help prevent future suspensions; however, the options would also have budgetary impacts (such as increasing the budget deficit), make oversight of the programs more difficult, or impose additional administrative burdens on the agencies. For example, Congress could require FHA to provide more frequent notifications about the percentage of commitment authority the agency has used and expand this requirement to include obligations of credit subsidy budget authority. This option, which could also be applied to RHS, could give Congress additional and more timely information to consider whether to provide supplemental appropriations before the end of a fiscal year. 
Other options for Congress include (1) authorizing FHA to use revenues generated by some of its loan guarantee programs to cover any shortfalls in budget authority for others and (2) providing "advance funding"--budget authority made available in an appropriation act for the current fiscal year that comes from a subsequent year's appropriation--for FHA and RHS program credit subsidy costs. Further, FHA and RHS can continue to use or be given additional administrative tools--such as transferring budget authority--to help delay or prevent program suspensions.
Background Haiti is the poorest country in the Western Hemisphere, with more than 75 percent of the population living on less than $2 per day and the unemployment rate estimated at 60 to 70 percent. These conditions were exacerbated when the largest earthquake in Haiti’s recorded history devastated parts of the country, including the capital, on January 12, 2010. Since then, Haiti has suffered from a cholera epidemic that has affected over 450,000 persons and caused over 6,000 deaths. In addition, Haiti has experienced political uncertainty following the earthquake. Due to the inconclusive presidential election of November 2010, the new President was not inaugurated until May 2011. On May 13, 2011, the U.S. and Haitian governments signed the Haiti Reconstruction Grant Agreement. Later, the Haitian Parliament rejected two candidates nominated by the new President for the post of Prime Minister, who serves as head of government. In early October 2011, the Parliament approved the President’s third nominee. In response to the earthquake, in the Act, Congress provided more than $1.14 billion in reconstruction funds for Haiti, available through the end of fiscal year 2012. Of this amount, about $918 million was directed to USAID and State, of which $770 million was provided through the USAID-administered Economic Support Fund (ESF) account and about $148 million through the State-administered International Narcotics Control and Law Enforcement account (see table 1). Of the almost $918 million in supplemental funds, as of September 30, 2011, $356.9 million was allocated for the construction and rehabilitation of infrastructure. In addition to these supplemental funds, USAID and State allocated funding for reconstruction activities from other budgetary sources, such as annual appropriations for fiscal years 2010 and 2011, of which $54.8 million was allocated for construction and rehabilitation of infrastructure. 
In total, the agencies allocated almost $412 million from both supplemental and other budgetary sources for construction and rehabilitation of infrastructure. In September 2010, State and USAID, as required by the Act, issued a joint spending plan that showed allocations of supplemental reconstruction funds (see fig. 1 for a timeline of key actions and events by the government of Haiti, international donors, and the U.S. government after the earthquake). In January 2011, the U.S. government issued the 5-year Post-Earthquake USG Haiti Strategy: Toward Renewal and Economic Opportunity. Consistent with the government of Haiti’s development priorities, the U.S. strategy seeks to, among other things, encourage reconstruction and long-term economic development in several regions of the country—known as “development corridors”—including a region on the northern coast of Haiti that is not close to the earthquake epicenter, but where some people displaced by the earthquake moved. The strategy notes that 65 percent of Haiti’s economic activity was located in greater Port-au-Prince and states the U.S. government’s intent to support new economic opportunities in the Saint-Marc and Cap-Haïtien development corridors, in addition to assisting with reconstruction in the Port-au-Prince corridor, which suffered the most damage from the earthquake (see fig. 2). The U.S. strategy identifies planned U.S. reconstruction assistance in eight sectors. Six of these sectors—energy, ports, shelter, health, food security, and governance and rule of law—involve infrastructure construction and rehabilitation, while two sectors—economic security and education—do not. The strategy also encompasses some of the objectives set forth in the Haitian government’s 10-year Action Plan for National Recovery and Development in Haiti, issued in March 2010. 
The Action Plan identified and prioritized short- and long-term reconstruction needs in four areas: (1) territorial rebuilding in Port-au-Prince and three targeted regions of Cap-Haïtien, Saint-Marc, and Les Cayes; (2) economic rebuilding in sectors such as construction, agriculture, and tourism; (3) social rebuilding in the health, education, food security, and other sectors; and (4) institutional rebuilding focused on developing government capacity, justice, and a legal and regulatory framework. To advise and support U.S. earthquake reconstruction efforts in Haiti, USAID and State created special offices in Washington, D.C. In February 2010, USAID created the Haiti Task Team (HTT) to help coordinate U.S. emergency relief and reconstruction work and provide technical and staff support. At its peak, the HTT had approximately 50 staff positions, primarily U.S. full- or part-time staff who were temporarily reassigned from USAID’s Bureau for Latin America and the Caribbean and other bureaus, as well as contracted staff. In September 2010, State created the Office of the Haiti Special Coordinator to oversee the planning and implementation of the U.S. strategy in Haiti in coordination with USAID and other U.S. departments and agencies. The Haiti Special Coordinator has approximately 20 staff positions, primarily staff temporarily reassigned full- or part-time from other bureaus and offices within State, and other U.S. agencies as well as contracted staff. USAID and State Have Obligated and Expended Small Amounts of the Almost $412 Million Allocated for Construction Activities As of September 30, 2011, USAID and State allocated $411.6 million for bilateral post-earthquake infrastructure construction activities in Haiti using $356.9 million in fiscal year (FY) 2010 supplemental funds and $54.8 million from regular fiscal year appropriations. 
In addition, USAID and State had obligated $48.4 million (11.8 percent) and expended $3.1 million (0.8 percent) of the total allocated, as shown in table 2. Combined, USAID and State allocated $411.6 million for post-earthquake infrastructure construction activities as of September 30, 2011 (see table 2). USAID is responsible for implementing most of the infrastructure construction activities. USAID activities in five sectors—shelter, energy, ports, health, and food security—account for $365.7 million (88.9 percent) of the $411.6 million in total funds allocated for infrastructure construction. State has activities in the governance and rule of law sector, which account for $45.9 million (11.1 percent) of the total funds allocated for infrastructure construction. See appendix II for additional detailed information on each of the six sectors. Table 3 shows USAID and State funding allocations by sector. Examples of post-earthquake infrastructure activities are as follows: The energy sector includes rehabilitating substations in the Port-au- Prince area and in the Cap-Haïtien development corridor, providing power to the North Industrial Park—an activity that is key to Haiti’s economic development with donor and private-sector commitments of about $400 million and a goal of creating about 60,000 direct and indirect jobs, according to USAID and State officials. The ports sector includes the construction of a new port along the northern coast of Haiti in the Cap-Haïtien development corridor. Upon completion, the port could provide improved shipping access for firms operating in the North Industrial Park and other locations in the area. The shelter sector includes the development of approximately 15,000 service plots, which include housing and access to essential services, in new residential settlements in the Port-au-Prince and Cap-Haïtien development corridors. 
The health sector includes the reconstruction of the State University Hospital in the Port-au-Prince development corridor. The food security sector includes the construction of irrigation canals and farm-to-market roads in the Cap-Haïtien, Port-au-Prince, and Saint-Marc development corridors. The governance and rule of law sector includes construction activities funded through State’s Bureau of International Narcotics and Law Enforcement (State/INL); the activities include the rehabilitation of facilities at the Haitian National Police Academy. USAID and State Have Obligated Almost 12 Percent and Expended 0.8 Percent of Funds Allocated for Infrastructure Construction As of September 30, 2011, USAID had obligated $42.6 million for activities in the energy, ports, shelter, health, and food security sectors. The majority (67 percent) of USAID’s obligations have been for two energy activities—$15.8 million for construction of power generation at the North Industrial Park and $12.9 million for rehabilitation of 5 power substations in the Port-au-Prince area. State had obligated $5.8 million, as of September 30, 2011, for its activities in the governance and rule of law sector. State officials said that the design and planning of construction activities generally involve a lengthier process than other activities, and they expect obligations to grow at a faster pace later this year and during 2012 as USAID and State infrastructure construction activities reach the implementation stage. See table 4 for obligation and expenditure data. USAID Staffing Difficulties Were a Factor in Delaying USAID Infrastructure Construction Activities in Haiti USAID Has Had Difficulty Replacing Staff Since the Earthquake Within a month after the earthquake, 10 of the mission’s 17 U.S. direct-hire staff had departed Haiti, leaving the mission with 7 staff in country to manage a program heavily involved in massive relief operations and anticipating an increase in reconstruction activities. 
According to mission officials, U.S. direct-hire staff were permitted to leave for several reasons, primarily because approximately 40 percent of U.S. embassy housing was damaged or destroyed, the school for mission staff children was damaged and not functional, and staff were experiencing emotional challenges after the earthquake. To fill the U.S. direct-hire vacancies, USAID posted 10 routine agencywide job announcements in March 2010, but no U.S. direct-hire staff applied. According to USAID officials, potential applicants did not apply due to, among other things, the damaged school, uncertainty about the quality of life in Haiti, and the lack of financial or other incentives in the job announcements. In May 2010, USAID again posted the 10 job announcements and, this time, attracted a number of applicants because the postings included financial incentives and waived the requirement that successful applicants bid on positions in four USAID-designated critical priority countries—Afghanistan, Iraq, Pakistan, and Sudan—upon completion of their tours in Haiti. Having received a sufficient number of applicants for the May 2010 posting, the mission soon selected the staff. However, U.S. direct-hire staff did not begin to arrive in Haiti until early 2011 because, among other things, households and families had to be moved and some staff required up to 6 months in language training. In addition to filling existing positions, USAID received approval from the U.S. Ambassador to Haiti in February 2011 for 15 additional U.S. direct-hire staff to manage the surge in earthquake-related funding. These positions were announced, and some candidates selected, when the approval was granted in February 2011. Eleven had arrived as of September 2011 and, according to mission officials, all are expected to arrive in Haiti by February 2012. However, the mission will be implementing infrastructure construction activities until at least 2015, according to USAID planning documentation. 
During the next 4 years, U.S. direct-hire staff will have opportunities to bid for other positions at other posts. As U.S. direct-hire staff leave Haiti, the mission will need to replace them in order to continue the progress of infrastructure construction activities. Lack of Staff Created Planning Delays Since the earthquake, the mission has been operating with a reduced number of U.S. direct-hire staff. During that time, the mission experienced delays in planning and implementing some construction activities. According to USAID officials, U.S. direct-hire staff are essential in planning and establishing continuity for long-term reconstruction activities, including infrastructure activities such as the design of power generation facilities, which require specific skills. Delays in staffing key positions, such as engineers, contracting officers, and sector specialists, created particular difficulties. For example: Engineering: Although USAID allocated more than $300 million in supplemental funds for infrastructure activities that would require engineering skills, the mission’s planning process was delayed due, in part, to the lack of U.S. direct-hire engineers, according to USAID officials. USAID was unable to reassign U.S. direct-hire engineers from other missions around the world because USAID has few engineers worldwide, according to mission officials. At the time of the earthquake, the mission had two Haitian engineers on its staff, but these individuals had experience in constructing and repairing roads and bridges and rehabilitating small infrastructure—with no experience in energy and ports construction. In anticipation of its pending involvement in infrastructure activities, the mission created a new Office of Infrastructure, Energy, and Engineering, but staff with specialized engineering skills did not begin to arrive at the mission until early 2011. 
For example, the mission’s chief engineer, who is responsible for overseeing construction activities in the energy sector, arrived in April 2011, and an engineer overseeing the construction of health facilities arrived in May 2011. Mission officials in the health sector stated that initial delays in staffing an engineer in the Office of Infrastructure, Energy, and Engineering presented a planning challenge because in-country engineering expertise is essential to rehabilitate a major hospital and numerous smaller health clinics. According to mission officials, the lack of staff and expertise made planning difficult for several reasons. For example, some staff who were involved in developing certain cost and planning estimates and assessing potential constraints had limited expertise, thus requiring more review by others. Further, some construction activities experienced delays as a result of staff shortages. For example, on September 22, 2011, about 3 months later than initially planned, the mission awarded a feasibility study related to its plans to invest in port development in the Cap-Haïtien development corridor. According to USAID officials, the delay occurred because mission staff were focused on energy-sector issues related to the North Industrial Park. The feasibility study is needed before the mission can determine its specific plans for port construction and set an award date for the construction contract that is expected to follow. Contracting: The mission also experienced delays in awarding some infrastructure construction contracts as originally scheduled because it had only one U.S. direct-hire contracting officer in country with the authority to approve contracts of more than $3 million, according to USAID officials. The mission plans to award several complex contracts for more than $3 million each from the more than $300 million in supplemental funding it has allocated for infrastructure activities. 
The mission currently has three other contracting officers with lower contracting authority approval limits, and as of September 2011, the mission had requested additional contracting officers it anticipates will be needed to keep pace with the time frames for its planned infrastructure activities. Programming: The mission also experienced delays because, for more than 1 year following the earthquake, it did not have several sector directors, including a program office chief and office chiefs for the economic growth and democracy and governance offices, among other programming positions. According to mission officials, key decisions during the design and planning of construction activities were delayed, in part, because the mission had not been involved in such activities before the earthquake and there was limited expertise among existing staff. In addition, mission staff have experienced increased workloads, and some have had to take on additional responsibilities outside their normal areas of expertise. For example, the deputy mission director assumed the additional role of leading energy sector planning for about 15 months until the April 2011 arrival of the chief engineer. In addition to other staffing issues, three of USAID’s Haitian employees died in the earthquake. Further, mission officials stated that numerous surviving Haitian employees have been unable to contribute quickly to post-earthquake planning efforts because some had family members who died or were injured in the earthquake and many had residences that were damaged or destroyed. USAID Haiti Relied Largely on Temporary Staff Following the Earthquake Based on our review of USAID policies and guidance, including its Automated Directives System, and statements of mission officials, USAID does not have an expedited process for placing U.S. direct-hire staff to work on reconstruction efforts in an urgent post-disaster or post-crisis situation such as Haiti. 
Absent such a process, the mission used USAID’s routine but lengthy staffing process, which involves, among other things, posting job opportunities agencywide, receiving applications, selecting staff, and waiting until they are released from their current assignments. To meet the increased need for mission staff to manage the program, the agency temporarily hired or reassigned staff, including staff from its Haiti Task Team in Washington, D.C., to complete more than 400 temporary duty assignments for periods ranging from one week to several months. For example, USAID used personal services contracts to hire staff to provide financial management expertise; assigned headquarters-based staff from its Latin America and Caribbean Bureau to manage and oversee rubble removal and other efforts; and provided fiscal year 2010 supplemental funding to an implementing organization to manage people who repaired roads, cleaned drainage canals, and performed other rehabilitation activities. According to mission officials, planning and implementation of reconstruction activities were delayed because the few staff remaining in Haiti were heavily involved in recruiting, placing, and training temporary staff in Haiti. Senior mission staff stated that, for many temporary staff positions, the mission had to develop detailed scopes of work for the positions and then brief and train newly arrived staff on substantive issues. In addition, mission staff noted that the continuity of efforts was sometimes problematic as multiple staff, who turned over frequently, managed the efforts. USAID Planning Is Still Under Way; Various Factors Contributed to Delays in Infrastructure Construction USAID Has Finalized Six of Eight Sector-Specific Planning Documents USAID planning for its earthquake reconstruction activities in Haiti is still under way. The USAID mission in Haiti, in coordination with other U.S. 
government agencies and USAID in Washington, D.C., has drafted eight Activity Approval Documents (AADs)—detailed planning documents for each sector. As of October 2011, six AADs had been approved: education, energy, food security, governance and rule of law, health, and shelter (see table 5). These six AADs account for 87 percent of USAID’s available supplemental and program funds, according to USAID. The two remaining AADs require additional work before they can be approved, according to USAID officials. Specifically, the AAD for economic security—which does not include any planned infrastructure construction—requires further interagency discussion with the U.S. Department of the Treasury (Treasury), which is the co-implementer of economic activities in Haiti, and the AAD for ports has not been approved because, among other things, USAID is waiting for the Haitian government to make key decisions regarding port regulations, use of territorial waters, and the Haitian customs code. According to USAID officials, the AADs for Haiti are more comprehensive and have involved a more extensive review process than is typical. For example, USAID officials stated that the AADs for USAID activities in Haiti have included more analytical metrics, such as the results USAID plans to achieve in return for its investment. These AADs incorporate a discussion of how each sector will meet USAID’s agencywide procurement reform known as USAID Forward, which has, among other objectives, the goal of broadening the base of USAID’s implementing partners. According to USAID officials, incorporating USAID Forward has changed the procurement plan structure in the AADs, requiring more scrutiny by agency officials. For example, USAID indicated for activities described in the AAD procurement plans whether those activities will be targeted at local firms or organizations or use traditional partners. 
In addition, several factors led to a more extensive review process for the AADs for activities in Haiti. According to USAID officials, AADs are usually drafted at the mission and approved by the mission director. However, due to U.S. foreign policy interests in Haiti, in addition to the review and approval of the Haiti mission director, these AADs have undergone high-level review by USAID and State’s Office of the Haiti Special Coordinator. Because the U.S. strategy is a whole-of-government effort, a number of other U.S. agencies were involved in reviewing and approving the AADs. These agencies included the U.S. Department of Agriculture for food security, Treasury for economic security, and the Centers for Disease Control and Prevention for health. According to USAID and State officials, the inclusion of other agencies in the process added levels of review that extended AAD development and approval. Prior to AADs being approved in Washington, D.C., the Haiti mission continued with other planning efforts. The mission’s Office of Acquisition and Assistance developed a procurement plan to manage contract actions—such as the award of contracts for planning, design, and construction services—that enable implementation of the activities. In the shelter sector, for example, the mission has awarded contracts for the design of the proposed residential sites; the design includes site layout plans, environmental impact assessments, and housing-unit design specifications. In instances where an AAD had not been approved but where the mission deemed that sufficient planning had been completed, the mission used activity-specific approval memoranda to initiate procurement actions and, in some instances where design work was completed, award construction contracts. 
For example, a contract for the construction of a 10-megawatt power plant for the North Industrial Park in the Cap-Haïtien development corridor, with a value of approximately $15 million, was awarded while the energy AAD was in draft. A Few Infrastructure Construction Activities Are Under Way; Various Factors Contributed to Delays As of September 15, 2011, the mission had awarded its first two infrastructure construction contracts—rehabilitation of five electrical substations in Port-au-Prince and construction of a power plant in the North Industrial Park—at a combined cost of $28.8 million, approximately 7 percent of USAID’s total allocation for infrastructure construction activities. These awards were slightly delayed, as described below: Rehabilitation of five electrical substations in Port-au-Prince: On July 28, 2011, about 1 month later than planned, USAID Haiti awarded a $12.7 million construction contract for work to improve electrical service in Port-au-Prince. USAID extended the time for contractors to submit construction proposals because USAID issued new information in response to contractors’ questions, such as those concerning the length of the contract performance period—considered by contractors to be too short—and requests for clarification of technical documents referenced in the original proposal solicitation. Construction of a power plant in the North Industrial Park: On September 15, 2011, about 6 weeks later than planned, the mission awarded a $15 million contract for construction of a power-generating facility to meet the start-up electrical needs of the North Industrial Park. Some of the delay was to allow contractors time to respond to USAID’s clarifications and new contract requirements that were the result of contractor questions, changes to documents in the original solicitation, and the addition of environmental mitigation requirements. 
Some of the delay was also due to time spent by USAID mission staff focusing on a bid protest of the Port-au-Prince substation rehabilitation contract. Some key USAID activities experienced delays in their original schedules due to technical issues, and other activities do not yet have a planned award date. These delays are related, in part, to issues such as the length of time it takes contractors to submit construction proposals, difficulties obtaining building sites, and incomplete environmental assessment documents. In addition, some activities do not yet have dates set for award of construction contracts because the scope and location of the infrastructure have yet to be determined. North Industrial Park long-term power generation: The mission delayed its planned award of a construction contract to increase the power plant’s generation capacity from November 2011 to April 2013. According to a mission official, the dates were revised because the number of future tenants in the industrial park and their electrical power requirements are not yet known. South Industrial Park power generation: As of October 2011, a site for the South Industrial Park had not been established. Therefore, the mission has not yet determined the contract award date for this activity, which will provide power to a new South Industrial Park. State University Hospital in Port-au-Prince (HUEH): The mission delayed by about 5 months its plans to enter into an agreement with the governments of France and Haiti to reconstruct, repair, and modernize earthquake-damaged buildings on Haiti’s State University Hospital campus. The delay is due, in part, to a USAID-funded environmental assessment of the HUEH, which is taking longer than planned. Mission officials expect that the assessment will be completed near the middle of November 2011 and estimate that the mission will be able to execute the HUEH agreement by December 2011. 
Shelter demonstration project: The mission planned to break ground on a “demonstration” settlement of 200 houses and other residential infrastructure on January 12, 2011. According to USAID officials, the mission has faced challenges in obtaining land titles for property on which to build because the government of Haiti lacks a comprehensive, functional system for recording land ownership—resulting in only 40 percent of landowners possessing documentation of title to their land and in land records that may be inaccurate—as well as overarching shelter legislation to guide housing, land-use, and urban planning. Development and construction of new residential settlements: The mission’s plan to award contracts to prepare 15,000 housing plots and build 4,000 housing units has been delayed about 7 to 10 months. Originally planned for August 2011, the awards of most of these contracts are now expected to occur between March and June of 2012. It has taken longer than planned to identify the sites, negotiate agreements with the Haitian government on the selection of beneficiaries, and negotiate agreements with the nongovernmental organizations building some of the housing. Haitian Government Capacity Is a Key Challenge to Sustainability of U.S. Government Infrastructure Reconstruction Activities The sustainability of USAID-funded infrastructure activities depends, in part, on improvements to Haiti’s long-standing economic and institutional weaknesses, the Haitian government’s political will to implement change, and the success of planned capacity-building activities. Based on data from the UN Office of the Special Envoy for Haiti, aid from bilateral and multilateral donors represented approximately 57 percent of the government of Haiti’s total revenue in 2009. Following the earthquake, the percentage increased substantially, to an estimated 80 percent of total government revenue in 2010. 
Additionally, more than 16,000 civil servants died in the earthquake and, including those who left the country, the Haitian government workforce is now reduced by 33 percent, according to the United Nations Development Program. Further, according to the U.S. government strategy and USAID documents and officials, electricity laws, the structure of the health care system, the customs code, and housing and urban development policy, among other areas, all need reform. As part of its required planning processes, USAID considered various sustainability issues for its infrastructure activities. Some activities face sustainability challenges. In planning and assessing the sustainability of its activities in Haiti, USAID used a variety of resources. These included relevant provisions in the Foreign Assistance Act, guidance from the agency’s Automated Directives System (ADS), the agency’s project design guidance, and studies and assessments by USAID and other donors, such as assessments of Haitian government ministries by the Inter-American Development Bank. In addition, for infrastructure activities, the mission relied on general principles for estimating costs, including projecting operations and maintenance costs and estimating basic returns on investments. For four activities, as of September 2011, USAID has also completed the certification required by the Foreign Assistance Act indicating that Haiti is capable of effectively maintaining and utilizing the projects. In addition, each AAD contains a section broadly describing the sustainability issues of the activities planned in each sector. The following are examples of some of the challenges identified by USAID and its plans to address those issues. Energy sector: The power utility, Electricité d’Haiti (EDH), faces considerable technical and managerial challenges that have resulted in financial losses. 
The utility receives an annual subsidy from the Haitian government of approximately $100 million, which, according to USAID’s energy AAD, represents approximately 12 percent of the government of Haiti’s national budget. To assist EDH, in April 2011 the mission awarded a 2-year contract to improve EDH management while long-term modernization options are developed and evaluated. In addition, USAID plans for the North Industrial Park power generation facility to be managed independently of EDH through September 2016. This facility is expected to generate enough revenue to cover operations and maintenance costs, including the replacement of major equipment. However, modernization of the energy sector is based, among other things, on the assumption that the Haitian government will decide on an appropriate modernization model and take the necessary but politically challenging steps to implement that model, according to the energy AAD. Health sector: The Haitian government has made efforts to decentralize the health care system since the 1990s, but only 12 of 63 local health networks are currently operational. To support Haitian government efforts to decentralize the health care system, USAID is planning to provide resources, such as repair or construction of health facilities, to 9 to 12 local health networks in the U.S. development corridors. USAID is also requiring business plans in this sector so that USAID will no longer need to contribute to operations and maintenance of new or renovated health facilities within 3 to 5 years of completion. USAID acknowledges that strengthening the Haitian Ministry of Health and decentralizing the health care system—both of which would likely enhance sustainability—are complex undertakings that will require long-term efforts. 
According to USAID, there are also shortages of community health workers, low retention of doctors and nurses who can earn more money in private practice or working for nongovernmental organizations, and a low skill and knowledge base at all levels. Ports sector: According to USAID’s draft ports AAD, the Haitian government does not have a well-defined legal and regulatory framework for ports, and the three Haitian government entities involved in port operations have unclear roles and responsibilities. Furthermore, the sector is characterized by excessively high port charges, unreliable and slow port clearance processes, inadequate human and institutional capacities, and poor integration between international and regional ports. To assist with port construction in the Cap-Haïtien development corridor, according to USAID’s draft ports AAD, (1) a public-private partnership will be created and (2) USAID will provide technical assistance to Haitian officials to help them lower port charges in the northern port and introduce processes to expedite port clearance and tax collection. However, the ports sector AAD notes that port construction is a “high-risk” program whose success depends on, among other things, the political will of the Haitian government to make improvements in a sector controlled by a few individuals, tens of millions of dollars in additional funding from other donors, and sufficient demand for port use. Shelter sector: Haiti lacks a single agency for housing and urban development policy, resulting in a sector characterized by fragmented authorities and unenforced policies, according to the shelter sector AAD. 
Among other things, Haiti does not have an effective national land register—known as a cadastre—and system to record land ownership, nor a policy framework that establishes and streamlines building standards and other controls to promote disaster-resistant housing; it also has poor capacity to support the community organizations needed for shelter infrastructure and service delivery. USAID plans to provide assistance in areas such as building standards and enforcement mechanisms; land tenure, land title, and property tax records; and housing finance and mortgage markets. The shelter AAD notes that sustainability depends on the ability of the Haitian government to manage and maintain infrastructure and services over the long term. Policy change requires the political will and dedication of the Haitian government, as well as support from other members of the donor community. Sustainability also depends on economic growth, governance reform, and the ability of Haitians to pay taxes. If economic opportunities at the planned new shelter sites in locations outside of the capital are insufficient, people may return to Port-au-Prince. The success of USAID’s plans will depend on new economic opportunities that would enable municipalities to pay for road maintenance at the new sites and for housing beneficiaries to pay fees for water services and basic maintenance, according to U.S. officials. Our past work on U.S. assistance in developing countries shows that the U.S. government has previously encountered challenges in ensuring the sustainability of infrastructure activities. For example, following the 2001 earthquakes in El Salvador, we reported on sustainability challenges to USAID-funded shelter and health clinic construction, including residents who could not afford the cost of connecting to the potable water and electricity systems in the new houses. 
In addition, in July 2011, we reported that the governments of Cape Verde and Honduras faced challenges sustaining several activities funded through the Millennium Challenge Corporation (MCC). For example, in Honduras, uncertainty about government funding for maintenance and other construction activities may jeopardize the sustainability of some major roads built with MCC funds. Conclusion The United States is providing over one billion dollars to Haiti for reconstruction but faces an enormous task in helping the country begin to recover from the 2010 earthquake. A number of factors have delayed USAID’s planning and implementation of projects to rebuild infrastructure in the energy, ports, and shelter sectors—areas where the agency has not previously worked and that require specific skills or expertise. Haiti has always been a challenging operating environment, and the earthquake created additional difficulties. Further, USAID had difficulties securing staff—particularly technical staff such as contracting officers and engineers—who were willing to live and work in the country after the earthquake and who could bring the expertise necessary to plan and execute large, complex infrastructure projects. Such difficulties have contributed to delays in U.S. efforts, with only a few contracts awarded to date. Additional issues, such as difficulties in obtaining land title and the time needed to conduct environmental assessments, have led to delays in planning construction activities, and some activities do not yet have start dates. Further, the Haitian government’s long-standing economic and institutional weaknesses are complicating U.S. efforts and could affect the future of this assistance, and the political will of the recently formed Haitian government to implement reforms and undertake sustainability efforts is unknown. 
Even though USAID is taking some steps to address the long-term future of U.S.-funded infrastructure, it is unclear at this point whether those projects will ultimately be sustainable. U.S. activities in Haiti will continue for many years, and USAID staff can be expected to turn over numerous times. It is therefore critical that USAID do as much as possible to match the number of staff with the demands placed on the mission to ensure that the substantial amount of U.S. funding provided for reconstruction is used efficiently and within time frames that address the pressing needs in the country. Recommendation for Executive Action To facilitate USAID’s progress in planning and implementing its many post-earthquake infrastructure construction activities in Haiti over the next several years, particularly those requiring key technical staff such as contracting officers, engineers, and program specialists, we recommend that the USAID Administrator ensure that U.S. direct-hire staff are placed at the mission within time frames that avoid future staffing gaps or delays. Agency Comments and Our Evaluation USAID provided written comments on a draft of this report, which are reprinted in appendix III. State did not provide written comments. USAID agreed with our finding that staffing difficulties were a factor in delaying USAID infrastructure construction activities in Haiti. In addition, USAID described certain actions it is currently taking, such as providing accelerated one-on-one language training and special bidding incentives for Haiti, that, if continued, could address our recommendation. USAID also emphasized staffing actions that the agency took following the earthquake to quickly deploy staff temporarily to Haiti. Our report recognizes these actions and the contributions they made in helping with the relief and recovery efforts in Haiti. However, we maintain that such efforts did not include bringing on U.S. 
direct-hire staff to fill permanent positions and provide the continuity needed to manage longer-term reconstruction projects. In particular, USAID noted that its Civilian Response Corps (CRC), part of an interagency initiative, was used to provide a surge capability and quickly deploy staff with specialized knowledge and skills to Haiti. Our report notes that CRC, whose staff are deployed for an average of 90 days, provides the U.S. government with a first-responder capability and supports the implementation of longer-term reconstruction activities. State and USAID both provided technical comments. Those comments, along with information contained in USAID's written response, were incorporated into the report where appropriate. We are sending copies of this report to interested congressional committees, the Secretary of State, and the USAID Administrator. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or any of your staffs have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology We reviewed the infrastructure-related post-earthquake reconstruction efforts of the U.S. Agency for International Development (USAID) and the Department of State (State) in Haiti. This report examines, for infrastructure construction activities only, (1) the amounts of USAID and State obligations and expenditures; (2) USAID’s staffing; (3) USAID’s planning; and (4) potential sustainability challenges USAID faces. In response to a congressional mandate in the Supplemental Appropriations Act of 2010 (the Act) that made funds available for GAO to review U.S. 
efforts in Haiti, we focused our review on the $918 million in bilateral reconstruction funding provided to USAID and State in the Act, as well as other funding provided to USAID and State in annual fiscal year appropriations that support reconstruction activities, including infrastructure construction and rehabilitation, in Haiti. Of the $918 million, $770 million was provided through the USAID-administered Economic Support Fund (ESF) account and $148 million through the State-administered International Narcotics Control and Law Enforcement (INCLE) account. USAID and State have planned to allocate $356.9 million for infrastructure construction activities from the funding appropriated in the Act. In addition, USAID and State supported infrastructure construction activities with almost $54.8 million from regular fiscal year appropriations, such as fiscal year 2010 funding for the Global Health and Child Survival account. Combined, USAID and State have allocated almost $412 million from both supplemental and regular fiscal year appropriations for infrastructure construction activities. In the Act, Congress provided more than $1.14 billion in reconstruction funds for Haiti available through the end of fiscal year 2012, including about $918 million for USAID and State and about $220 million for the Department of the Treasury for debt relief, a Treasury attaché office in Port-au-Prince, and technical assistance. We did not review the funding directed to Treasury. Additionally, we did not review $1.64 billion in funding provided primarily to reimburse U.S. agencies for their emergency and humanitarian efforts in Haiti immediately following the January 2010 earthquake, nor did we examine the approximately $144 million in U.S. embassy-related funding included in the law. To obtain information on the appropriations, allocations, and planned uses of U.S. 
reconstruction funding for Haiti, we reviewed the Supplemental Appropriations Act of 2010, enacted by Congress in July 2010; the State and USAID FY 2010 Supplemental Appropriations Spending Plan, issued by State in September 2010; the interagency Post-Earthquake USG Haiti Strategy: Toward Renewal and Economic Opportunity, issued by State in January 2011; and Congressional Research Service reports on Haiti. We also reviewed the Action Plan for National Recovery and Development of Haiti, issued by the government of Haiti in March 2010. In addition, we reviewed the Haiti Reconstruction Grant Agreement, signed by the U.S. and Haitian governments in May 2011. We met in Washington, D.C., and in Port-au-Prince, Haiti, with officials from USAID and State and other organizations implementing U.S. government-funded activities. USAID defines allocation as the identification and setting aside of resources for a specific program action. To determine the amounts of funding obligated from USAID’s supplemental ESF funding and State’s supplemental INCLE funding, as well as funding from other sources for infrastructure construction activities, we analyzed data reported by USAID as of June 30, 2011, and September 30, 2011, and by State as of September 30, 2011. The amounts reported to us by USAID and State include both expenditures and unexpended obligated balances. These data include information on obligations of supplemental appropriation funding overall, as well as amounts provided for particular activities. To assess the reliability of the data on planned allocations, obligations, and expenditures, we conducted follow-up correspondence and interviews with cognizant officials from USAID and State, both in Washington, D.C., and in Haiti. 
We asked them standard data reliability questions—including questions about the purposes for which funding data were collected, the use of the data, how the data were collected and generated, and how the agencies ensured that the data were complete and accurate—and determined the data to be sufficiently reliable for the purposes of this report. To assess USAID’s staffing challenges in Haiti, we reviewed bidding opportunity notices and staffing pattern analyses produced by the mission’s Executive Office and interviewed relevant officials at the mission in Haiti and at USAID’s headquarters in Washington, D.C. To determine the status of USAID’s planning activities, we reviewed the agency’s policies for planning as outlined in the Automated Directives System. We also reviewed eight sector-specific Activity Approval Documents (AADs), six of which had been finalized as of October 2011, as well as more detailed planning documents for specific activities with infrastructure components, which we obtained during our visits to the USAID mission in Haiti. To determine challenges USAID faces in implementing activities as planned, we reviewed USAID’s and State’s requests for proposals, amendments, and award documents posted at the U.S. government’s Web site www.fedbizopps.gov. We also reviewed USAID’s AADs, award documents provided by the mission, and USAID’s activity procurement plan. In addition, we interviewed relevant officials at the mission’s Office of Acquisition and Assistance and the mission’s Infrastructure, Energy, and Engineering Office to discuss the procurement plan, bid solicitation process, and activity delays. Due to the small amount of INCLE funding allocated for infrastructure activities, we did not review the planning process of State’s Bureau of International Narcotics and Law Enforcement. To describe issues related to the capacity of the Haitian government, we reviewed U.S. 
government Haiti Strategy and the AADs, and reports from the United Nations Office of the Special Envoy for Haiti and the United Nations Development Program. To discuss USAID’s planning process for sustainability and potential sustainability-related challenges, we interviewed relevant officials at the mission in Haiti and at USAID’s headquarters in Washington, D.C., about sustainability and reviewed agency guidance, documents, and legislation. We also reviewed prior GAO reports with sections on sustainability challenges experienced by the U.S. government in other countries. We traveled to Haiti in April 2011 and August 2011 and met with U.S. officials from USAID and State, as well as representatives of some of USAID’s international and local implementing partners for humanitarian, development, and reconstruction programs—including the Cooperative Housing Foundation and the American Institutes for Research. We also met with officials from the Interim Haiti Recovery Commission. We visited sites damaged by the earthquake where USAID and State have ongoing or planned reconstruction efforts, such as the State University Hospital, the Haitian National Police Academy, and sites in Port-au-Prince where rubble removal was ongoing. In addition, we visited sites in the Cap-Haïtien development corridor that USAID has targeted for reconstruction assistance. While there, we observed and discussed with USAID officials the preliminary results of U.S.-funded development programs, including a planned residential settlement site, a planned port, and various planned agricultural and watershed management activities. We conducted this performance audit from November 2010 to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Sector Fact Sheets Appendix II provides information on the funding, status of activities, select challenges, and sustainability plans for the six sectors that include infrastructure construction activities: energy, ports, shelter, health, food security, and governance and rule of law. Introduction Prior to the January 2010 earthquake, less than 30 percent of Haitians had access to electricity, and many of those had access for fewer than 10 hours per day. This situation negatively affected economic growth because a functioning energy sector and a dependable supply of electricity are key to creating jobs and supporting a healthy economy in Haiti. The earthquake compounded the lack of access to electricity by damaging substations, distribution networks, and other components of the sector. Currently, the country’s electrical infrastructure cannot meet the needs of industries, businesses, or residents. Funding for Infrastructure USAID allocated $98.3 million for infrastructure activities in the energy sector, including an estimated $85.3 million from fiscal year 2010 Economic Support Fund (ESF) supplemental appropriations and $13.0 million from other sources. Obligated funds include both unexpended obligated balances and expenditures. Haiti’s post-earthquake Action Plan for National Recovery and Development of Haiti (Action Plan) states that its priorities are to restore damaged power facilities, repair the electricity transmission and distribution network, and expand access to electricity in regions outside Port-au-Prince. The U.S. government’s Post-Earthquake USG Haiti Strategy: Toward Renewal and Economic Opportunity (Haiti Strategy) includes a goal to improve and modernize Haiti’s electrical sector and bring affordable, reliable power to more households and businesses. 
USAID plans to target projects that will help make the supply of electricity more efficient and the power network more viable. Status USAID plans to rehabilitate five electrical substations in the Port-au-Prince area, provide power generation for the North Industrial Park, and construct other power generation, transmission, and distribution facilities for residential customers. Most of these activities are still in the planning phase. For activities where an award has not been made, USAID planning documents include a "procurement sensitive" cost estimate, which is not shown but is included in the table total. Selected Challenges • Haitian government capacity: The electricity sector in Haiti, which is largely government owned, is characterized by dramatic power shortages and very low coverage of electricity—with only about 12.5 percent of the population having legal, regular access to electricity. According to State, the government of Haiti’s public electricity utility suffers from inefficiencies, is poorly maintained due to lack of resources, and relies heavily on subsidies and donor funding. • Completing energy activities as initially scheduled: USAID awarded a contract to construct a power plant for the new North Industrial Park on September 15, 2011, about 6 weeks later than planned; however, according to a USAID official, the delay will not affect the opening of the park because site work being completed by others has already delayed the opening. USAID awarded a contract to rehabilitate five electrical substations in July 2011. However, the award was protested and a “stop work” order was issued. As of November 15, 2011, work had not started. Sustainability Plans To enhance overall sustainability, USAID awarded a $10.9 million, 2-year contract in April 2011 to help the Haitian electrical utility manage electricity loss reduction. 
USAID plans to award a 3-year management contract in early 2012 to support the operation of the North Industrial Park and other utility operations in northern Haiti. However, USAID planning documents state that the sustainability of the electrical system as a whole, particularly after the loss reduction management contract ends, depends on, among other things, the achievement of important legal and regulatory reforms to improve the commercial viability of the system and provide necessary resources for proper operations and maintenance of the electrical infrastructure. USAID also acknowledged that the government of Haiti must take necessary but politically challenging steps to do so. USAID has delayed its initial award of a contract for the expansion of the North Industrial Park power generation facility from November 2011 until April 2013 to better determine the demand for power at the location and the most efficient means to provide electricity when it is needed. Introduction Because Haiti is an island nation with depleted natural resources, ports are crucial to its economic viability and growth. However, for many years prior to the January 2010 earthquake, Haiti’s major international port in Port-au-Prince had deteriorated and lacked the capability to meet the country’s cargo and shipping needs. The earthquake caused significant damage to the Port-au-Prince port, further reducing its capabilities. Funding for Infrastructure USAID allocated $77.7 million in fiscal year 2010 ESF supplemental appropriations to infrastructure activities in the ports sector. Funding of Port Activities, as of September 30, 2011 (Dollars in millions) Obligated funds include both unexpended obligated balances and expenditures. Haiti’s post-earthquake Action Plan states that, as part of its overall effort to deconcentrate economic activities away from Port-au-Prince, new deep water ports, including a new large-capacity port and harbor facility, need to be constructed in Haiti’s northern region. 
The U.S. government’s Haiti Strategy states that the United States is considering investing in the design and development of a major port on the northern coast. Status USAID is conducting a feasibility study and, depending on the results of the study, plans to construct a new port in the Cap-Haïtien development corridor. In addition, USAID plans smaller port rehabilitation and construction in other areas. During our review, we visited an existing port facility in Cap-Haïtien and a potential site in Fort-Liberté. The new port would be less than 20 miles from the site of the North Industrial Park—a post-earthquake public-private project scheduled to begin operation in April 2012. Selected Challenges • Lack of staff with relevant technical expertise: USAID’s program in Haiti has not previously involved port construction activities. With no staff who have port construction expertise or experience, USAID will rely on (1) a private firm to conduct a feasibility study and make recommendations regarding, among other things, port design, economic feasibility, and financial viability; and (2) a public-private partnership to construct a new port. For activities where an award has not been made, USAID planning documents include a "procurement sensitive" cost estimate, which is not shown but is included in the table total. • Potential funding shortfall: USAID’s current estimate of the cost of constructing the northern port is $105 million. Since USAID has allocated only about $68 million, the remaining $37 million must be obtained from other sources. Sustainability Plans According to USAID’s current plans, the new port is to be constructed by a public-private partnership and operated by private firms when completed. These plans assume that the government of Haiti, particularly Haitian port officials, will efficiently collect import duties and that sufficient port revenues will be generated to fund recurring costs for port maintenance. 
According to USAID officials, the northern port feasibility study includes an analysis of financing options under a public-private partnership to provide the additional required funding. When completed, the economic viability of the port will depend on the amount of income generated. According to USAID officials, income is projected to be derived from two broad sources: growth in cargo generated by business activity related to the planned North Industrial Park, and increased cargo diverted from other ports as a result of the new port being modern and well managed. In addition, officials stated that only one manufacturing firm is currently committed to operating in the North Industrial Park; however, discussions are under way with other possible tenants. The economic viability of the port is one of the issues being addressed in the feasibility study. Introduction Prior to the January 2010 earthquake, a majority of Port-au-Prince’s more than 2 million inhabitants lived in crowded and poorly constructed housing, with inadequate basic infrastructure and unclear land tenure status. The earthquake destroyed approximately 115,000 homes and severely damaged more than 208,000, displacing over 2 million people. The International Organization for Migration estimated that, as of July 2011, over 590,000 Haitians displaced by the earthquake continued to live at sites for internally displaced persons. Funding for Infrastructure USAID allocated $55.1 million in fiscal year 2010 ESF supplemental appropriations and other funding for the development and construction of new residential settlements. Other funding includes regular fiscal year appropriations. Haiti’s post-earthquake Action Plan notes that housing is the sector most affected by the earthquake. The Action Plan proposes to establish new, permanent neighborhoods on sites identified by the Haitian government. Displaced persons will first move into provisional shelters that will be replaced with permanent housing, sustainable infrastructure, and basic services. The Action Plan also states that the government intends to set up a security fund to support reconstruction activities in neighborhoods and communities. Similarly, the U.S. government’s Haiti Strategy prioritizes moving displaced persons to provisional shelters, providing permanent housing and services, and strengthening institutional capacity and access to financing. Selected Challenges • Land tenure: Haiti lacks a comprehensive, functional system for recording land ownership as well as an effective national cadastre. Prior to the earthquake, customary arrangements and knowledge characterized land tenure, with only 40 percent of landowners possessing documentation such as a legal title or transaction receipt. Furthermore, a large proportion of displaced persons were renting property prior to the earthquake—with the plots of land on which they lived often informally settled and haphazardly delineated—and many will have difficulty proving clear, legal tenancy due to document discrepancies or lack of documents. • Haitian government capacity: The government of Haiti’s limited legislative, regulatory, and institutional capacity to manage urban development severely constrains the establishment of a viable shelter sector. According to USAID, the absence of a single national entity responsible for overseeing the shelter provision process is the most important limitation. Currently, the entities involved in managing different aspects of shelter include the Ministries of Economy and Finance, Public Works, Planning, Interior and Social Affairs, as well as the Central Bank and the inter-ministerial Committee for Land Use. Status USAID is planning to develop new residential settlements in the Port-au-Prince and Cap-Haïtien development corridors that will provide approximately 15,000 households with plots of land, permanent housing, and access to essential services. 
USAID will manage construction of 4,000 houses. Nongovernmental organizations will manage construction of 11,000 houses using a combination of USAID and other funding. Status of shelter activities, as of September 30, 2011 (dollars in millions): • Up to 12 new settlements in the Port-au-Prince development corridor: award date May 26; estimated end date September 15, 2013; $0.3 obligated. • Development of 10,000 plots for 50,000-60,000 people and construction of 1,500 core houses by USAID in the Croix-des-Bouquets and Cabaret municipalities: $10.1 obligated (award date is for engineering and design; additional contracts will be awarded for remaining site development). Sustainability Plans According to USAID’s plans, site locations are being selected in areas where residents have access to jobs and/or transportation. USAID is planning to fund technical assistance programs to strengthen the national government’s capacity for urban management, and will seek possible solutions to policy challenges such as the absence of legislation to guide housing, land use, and urban planning; the lack of an independent regulatory body for the construction sector; and the lack of institutional capacity in urban planning. At the local level, USAID plans to provide technical assistance to municipalities in areas such as land use planning, building standards, and enforcement mechanisms, and to establish community management committees to guide community development. USAID recognizes that the sustainability of the new settlements depends on the ability of Haitians to pay taxes and the ability of the Haitian government to manage and maintain infrastructure and services over the long term. This, in turn, depends on whether the Haitian economy can provide adequate job opportunities and whether the Haitian government can effectively govern. Introduction Prior to the January 2010 earthquake, a significant portion of the Haitian population had no access to basic health services. Currently, only 12 of the 63 health networks created by the Haitian government in the 1990s are operational. 
The earthquake destroyed infrastructure such as hospitals, clinics, medical schools, and Ministry of Health buildings. For example, the State University Hospital (HUEH) was damaged during the earthquake, and services such as surgery and pre- and post-operative care are still being performed in temporary buildings. The earthquake also destroyed 90 percent of the State Faculty of Medicine. Funding for Infrastructure USAID allocated $90.5 million for construction in the health sector, of which $50.5 million is allocated from the fiscal year 2010 ESF supplemental appropriations. An additional $40.0 million in funding from other accounts is allocated for work on the HUEH and the National Campus for Health Sciences; this other funding is from the fiscal year 2010 base Global Health and Child Survival—State appropriation. Funding of Health Activities, as of September 30, 2011 (Dollars in millions) Haiti’s post-earthquake Action Plan emphasizes rebuilding hospitals and training facilities in the earthquake-affected regions and construction of new hospitals in the department capitals. The U.S. government’s Haiti Strategy includes a 5-year goal to rebuild and reform management of public health infrastructure, including earthquake-damaged structures and clinics, dispensaries, and hospitals in 9 to 12 health networks. The Haiti Strategy also specifies that the State University Hospital will be rebuilt and fully operational in 5 years and will have clearly defined responsibility for maintenance and operational costs. Status USAID is planning five construction activities in the health sector. Although these activities are still in the planning stages, USAID began needs assessments of health facilities in the Port-au-Prince, Cap-Haïtien, and Saint-Marc corridors and an environmental assessment of the HUEH campus. The HUEH reconstruction is a joint activity among the U.S., French, and Haitian governments, which signed a memorandum of understanding in September 2010. 
Status of USAID Health Activities, as of September 30, 2011 (Dollars in millions) Selected Challenges • Haitian government capacity: According to USAID officials, all health planning needs to be done in coordination with the Haitian Ministry of Health, which was consumed for significant parts of 2010 and 2011 by the cholera epidemic and by uncertainty about the presidential election. USAID officials are working with the Ministry of Health to identify and prioritize health facilities for repair and construction. • USAID Haiti mission’s lack of staff: USAID’s AAD indicates that several months after the earthquake, the health office was extremely short-staffed, with only 11 of 21 positions filled. USAID officials also said that delays in staffing the Haiti mission, including the Office of Infrastructure, Energy, and Engineering, have been challenging and have resulted in staff responding to demands outside of their usual areas of responsibility. For activities where an award has not been made, USAID planning documents include a "procurement sensitive" cost estimate, which is not shown but is included in the table total. Sustainability Plans USAID plans to make infrastructure investments only where there is a business plan, agreed to with the Haitian Ministry of Health, in which the USAID portion of operations and maintenance costs is reduced to zero at the end of 3 to 5 years. USAID’s plans state that U.S. investments in infrastructure will likely include some support for post-construction operation and maintenance costs. Introduction Before the earthquake, Haiti already had one of the heaviest burdens of hunger and malnutrition in the Western Hemisphere: 40 percent of households were undernourished (3.8 million people) and 30 percent of children suffered from chronic malnutrition, according to the U.S. government’s Haiti Strategy. To address this need, USAID had several food security activities under way in Haiti. 
Agriculture generates nearly 25 percent of Haiti’s gross domestic product, employs approximately 65 percent of the population, and serves as the primary source of income in rural areas. However, the earthquake exacerbated the already significant challenges in the agricultural sector by damaging distribution centers, food processing facilities, warehouses, irrigation canals, and the headquarters of the Ministry of Agriculture, Natural Resources, and Rural Development. Funding for Infrastructure USAID allocated $42.6 million for activities with a construction component in the food security sector. Other funding is regular fiscal year appropriations. USAID plans to provide $55.5 million during fiscal years 2011 through 2014 for these food security activities; however, we did not include this amount because USAID did not provide the funding data by fiscal year. Haiti’s post-earthquake Action Plan includes five agriculture-related programs, including building irrigation networks and rural roads to open up agricultural areas. Aligned with the Haitian government’s priorities, the U.S. government’s Haiti Strategy includes a goal of agriculture sector growth in the U.S. development corridors through improvements in (1) core infrastructure and management, such as rebuilding canals; (2) on-farm productivity, such as use of commercially produced seeds; and (3) post-harvest and market access support, such as farm-to-market roads. Status USAID plans two new activities with construction components. USAID has additional activities with construction projects in the food and economic security sectors that began implementation before the earthquake. However, one of these activities, the Watershed Initiative for Natural Environmental Resources (WINNER), received fiscal year 2010 supplemental funding. 
Status of USAID Food Security Activities, as of September 30, 2011: • Watershed Initiative for Natural Environmental Resources (WINNER): erosion prevention, irrigation restoration, and rebuilding and repair of transportation infrastructure. • Production Plus: erosion prevention, irrigation construction and repair, and farm-to-market road construction and maintenance. Amounts are being obligated incrementally. Selected Challenges • Land tenure: According to USAID officials, many large tracts of land are owned by landlords who rent land tracts to farmers. However, many landlords cannot produce proof of ownership. Uncertainty over land ownership may reduce agricultural investment in Haiti due to the risk of losing that investment. Additionally, the renting of land provides little incentive for people to invest in their farms. • Haitian government capacity: According to USAID’s food security AAD, the role and mission of the Haitian Ministry of Agriculture, Natural Resources, and Rural Development has not been clearly defined, causing confusion over the ministry’s policy goals. Critical policy questions, such as whether the ministry should focus on strengthening food security, increasing farm income, building and maintaining infrastructure, or another priority, have not been answered. Sustainability Plans USAID’s plans state that the U.S. government and other donors regularly meet with the Haitian government and private sector to discuss progress and the reforms needed to accelerate agriculture investments in Haiti. In addition, the Haitian government has demonstrated commitment and leadership in agriculture, food security, and economic security program planning and coordination, according to USAID’s planning document. 
Photo: Soil Conservation Project in Bassin Zim (Central Plateau). • Local technical capacity: Poor infrastructure, few technically skilled companies and people, lack of a maintenance culture, and health-related issues all pose challenges to successful reconstruction in Haiti, according to USAID officials. Introduction Prior to the January 2010 earthquake, much of Haiti’s justice infrastructure, including police stations, police training facilities, and prisons, was in need of repair. Many facilities were too small to meet the needs of the Haitian law enforcement and corrections systems. The earthquake caused additional destruction and damage, increasing the need to rebuild many facilities. Funding for Infrastructure State allocated $45.7 million in fiscal year 2010 International Narcotics Control and Law Enforcement (INCLE) supplemental funds for governance and rule of law construction activities. Haiti’s post-earthquake Action Plan states the need to rebuild correctional facilities, build and expand police stations, and complete construction of the National Police Academy. State received fiscal year 2010 supplemental funding in the INCLE account to meet these reconstruction needs in Haiti. In the U.S. government’s Haiti Strategy, State presents plans to use some INCLE funds for post-earthquake infrastructure rehabilitation and construction activities in the democracy and governance sector, to be managed by State’s Bureau of International Narcotics and Law Enforcement (State/INL). Status State is planning several construction activities in the governance and rule of law sector. Some activities are under way and others are still being planned. Selected Challenges • Lack of staff with relevant technical expertise: State/INL has been involved in counternarcotics programs in Haiti for decades. However, during that period, it has not focused on infrastructure activities and has had no U.S. 
direct-hire engineers or other personnel with construction expertise on its staff. State/INL’s programs have focused on institutional capacity building, such as training Haiti’s law enforcement and corrections personnel. For its post-earthquake infrastructure activities, State/INL plans to hire (1) private firms to develop infrastructure plans, designs, and estimates for costs, schedules, and scopes of work; (2) contractors to perform construction; and (3) personal services contractors to monitor and oversee its activities. Sustainability Plans In the January 2011 5-year Haiti Strategy, State includes plans to provide training programs, introduce needed procurement and maintenance systems, and establish an independent Haitian Directorate of Prisons in an effort to achieve sustainability. However, State acknowledges that investments in construction and rehabilitation of infrastructure will require some degree of U.S. commitment to support operations and maintenance costs “in the out years.” Appendix III: Comments from the U.S. Agency for International Development Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Leslie Holen (Assistant Director), Michael Armes (Assistant Director), Sada Aksartova, Lynn Cothern, Rachel Girshick, Leslie Locke, and George Taylor made key contributions to this report. Ashley Alley, Douglas Cole, Martin De Alteriis, Cheron Green, Courtney Lafountain, Jeremy Sebest, and Jena Sinkfield provided technical assistance.
On January 12, 2010, a powerful earthquake struck Haiti, resulting in an estimated 230,000 deaths, including more than 16,000 Haitian government personnel, and the destruction of many ministry buildings. In addition to immediate relief efforts, in July 2010, Congress appropriated $1.14 billion in supplemental funds for reconstruction, most of which was provided to the U.S. Agency for International Development (USAID) and the Department of State (State). USAID and State are administering about $412 million in supplemental and regular fiscal year appropriations for infrastructure construction activities. In May 2011, in response to a congressional mandate, GAO reported on overall U.S. plans for assistance to Haiti. This report addresses infrastructure construction activities, including (1) USAID and State obligations and expenditures; (2) USAID staffing; (3) USAID planning; and (4) potential sustainability challenges USAID faces. GAO reviewed documents and interviewed U.S. officials in Washington, D.C., and Haiti, and visited ongoing and planned construction sites in Haiti. USAID and State have obligated and expended a small amount of funds for infrastructure construction activities in six sectors: energy, ports, shelter, health, food security, and governance and rule of law. As of September 30, 2011, USAID and State had allocated almost $412 million for infrastructure construction activities, obligated approximately $48.4 million (11.8 percent), and expended approximately $3.1 million (0.8 percent). Of the almost $412 million, about 87 percent was allocated from the 2010 Supplemental Appropriations Act and 13 percent from regular fiscal year appropriations. USAID accounts for about 89 percent of the $412 million, including funds for construction in the energy, ports, shelter, health, and food security sectors. State activities in the governance and rule of law sector account for the remaining 11 percent. 
USAID had difficulty staffing the Haiti mission after the earthquake, a factor that has contributed to delays in infrastructure construction activities. Soon after the earthquake, 10 of the 17 U.S. citizen Foreign Service Officers in Haiti, known as U.S. direct-hire staff, left. USAID, lacking a process for expediting the movement of staff to post-disaster situations, had difficulty replacing them and recruiting additional staff. These staff included key technical personnel such as engineers and contracting officers needed to plan and implement infrastructure activities in sectors such as energy and ports, where the mission had not previously worked. With limited U.S. direct-hire staff on board, the mission relied heavily on temporary staff, and remaining staff assumed duties outside their normal areas of expertise. The mission plans to have all U.S. direct-hire staff on board by February 2012. Since infrastructure activities will continue until at least 2015, the mission will need to maintain sufficient staff for several years to manage the activities supported by the increase in Haiti reconstruction funds. USAID and State are planning activities in Haiti, but various challenges have contributed to some of USAID's delays. As of October 2011, USAID had drafted eight Activity Approval Documents (AADs) that include planned activities, costs, risks, and assumptions. AADs for the education, energy, food security, governance and rule of law, health, and shelter sectors have been approved. The AAD process has been more comprehensive and involved than is typical for such efforts, according to USAID officials. Although USAID made progress in planning, construction of some activities was delayed for various reasons, and some activities do not yet have planned start dates. For example, the mission was delayed in awarding contracts in the shelter sector due to issues such as identifying sites for shelter and obtaining land title. 
The sustainability of USAID-funded infrastructure depends, in part, on improvements to the Haitian government's long-standing economic and institutional weaknesses. USAID has considered various sustainability issues and is planning institutional strengthening activities, such as management reform of the power utility, but USAID planning documents acknowledge that these reforms will be challenging and that infrastructure activities face risks. These challenges are consistent with prior GAO reports that address sustainability of U.S. infrastructure projects in other countries.
Background

VA Disability Compensation Benefits

VA provides monthly disability compensation to veterans with disabling conditions caused or aggravated by their military service. Since 1925, VA has used the Veterans Affairs Schedule for Rating Disabilities (VASRD) to assign disability ratings to veterans based upon the existence and severity of service-connected disabilities. The severity of a disability is based on an average reduction in earning capacity across a group of veterans with similar physical or mental impairments brought on by their service. This degree of severity is expressed as a percentage and is often referred to as a “schedular rating.” For veterans with multiple service-connected disabilities, VA calculates the rating using a table that applies a formula for combining multiple ratings into a single rating. The rating dictates the amount of monthly compensation—set by law—a veteran receives, as shown in table 1. Veterans receiving a 100 percent rating are deemed to have a total disability.

History of the TDIU Benefit and the Eligibility Decision-Making Process

In 1934, VA revised its disability compensation program to establish TDIU compensation. VA has testified that the TDIU supplemental benefit was created to allow veterans to be deemed totally disabled even if they do not meet the criteria for a 100 percent rating. VA provided the rationale that while the rating schedule is intended to reflect the reduced earning capacity of the average veteran, it does not always adequately compensate individual veterans based on their particular circumstances. The TDIU benefit was established during the Depression, when Social Security retirement benefits were passed into law. In 1945, VA established that age was not to be considered a factor in evaluating entitlement to TDIU. 
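As a rough illustration of the combined-ratings table mentioned above, the following sketch implements the "whole person" method on which the table is based: each additional rating applies only to the capacity not already counted as disabled, and the result is rounded to the nearest 10 percent. The function name and the exact intermediate rounding are simplifications for illustration, not VA's implementation.

```python
import math

def combined_rating(ratings):
    """Sketch of the 'whole person' method behind VA's combined
    ratings table. Each successive rating applies only to the
    remaining (not-yet-disabled) capacity, so the combined value
    is always less than a simple sum of the individual ratings."""
    combined = 0
    for r in sorted(ratings, reverse=True):  # apply highest rating first
        # round each intermediate combined value to a whole percent
        combined = math.floor(combined + r * (100 - combined) / 100 + 0.5)
    # the final combined value is rounded to the nearest 10 percent
    return int(math.floor(combined / 10 + 0.5) * 10)

# e.g. ratings of 60 and 40 combine to 76, which rounds to 80 -- not 100
```

This is why a veteran with several substantial disabilities can still fall short of a 100 percent schedular rating, the situation the TDIU benefit was created to address.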
Today, TDIU benefits are generally a way VA can increase an eligible veteran’s schedular disability rating to 100 percent based on the veteran’s inability to earn income above the amount set by federal poverty guidelines because of their service-connected disabilities. To be eligible for TDIU compensation, a veteran must have a single service-connected disability rated at least 60 percent or multiple disabilities with a combined rating of at least 70 percent (with at least one disability rated at 40 percent or higher). In addition, the veteran must be unable to obtain or maintain “substantially gainful employment” as a result of these service-connected disabilities. VA generally considers “substantially gainful employment” to be employment above the federal poverty guidelines—$11,490 for an individual with no dependents in 2013. VA refers to the inability to maintain gainful employment as “unemployability.” Because a TDIU award yields a higher rating percentage, it leads to compensation above the base schedular amount. For example, the rating for a veteran with no dependents could be increased from 60 percent ($1,026 per month) to 100 percent ($2,816 per month), an increase of $1,790 per month—or $21,480 a year. Disability compensation claims processing, including TDIU claims, is performed at 57 VBA regional offices. See figure 1 for a description of how TDIU-related claims processing is performed. Veterans are assigned a VASRD rating through VBA’s review of their service-connected disabilities. When a claim for TDIU benefits is raised, a rating specialist determines if the veteran meets the schedular rating requirements and is unemployable. The rating specialist considers additional evidence—beyond that related to the veteran’s military service and medical diagnosis required to decide a disability compensation claim—to decide whether the veteran’s service-connected disabilities render the veteran unemployable. 
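The schedular thresholds and payment arithmetic described above can be sketched as follows. The dollar figures are the fiscal year 2013 rates for a veteran with no dependents quoted in this report; the function name is illustrative, and the unemployability determination (which is separate and judgment-based) is deliberately out of scope.

```python
# Fiscal year 2013 monthly rates for a veteran with no dependents,
# as cited in this report
MONTHLY_AT_60_PERCENT = 1026
MONTHLY_AT_100_PERCENT = 2816

def meets_tdiu_schedular_criteria(ratings, combined):
    """Check TDIU's schedular requirements as described above: a single
    service-connected disability rated at least 60 percent, or a
    combined rating of at least 70 percent with at least one individual
    disability rated 40 percent or higher. `combined` is the rating
    produced by VA's combined ratings table, supplied by the caller.
    Unemployability must still be established separately."""
    if len(ratings) == 1:
        return ratings[0] >= 60
    return combined >= 70 and max(ratings) >= 40

# TDIU raises compensation from the schedular amount to the 100 percent
# rate; for the report's 60-percent example that is an extra $1,790 per
# month, or $21,480 per year
monthly_increase = MONTHLY_AT_100_PERCENT - MONTHLY_AT_60_PERCENT
annual_increase = monthly_increase * 12
```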
In determining unemployability, the rating specialist reviews the veteran’s employment history as well as the reason(s), if any, for termination of employment. Unlike other benefit programs, such as Social Security Disability Insurance (SSDI), VA does not consider reaching retirement age as a cause for ineligibility. Once veterans begin receiving TDIU benefits, VBA reviews TDIU beneficiaries’ employment and income annually to determine whether they continue to meet the eligibility requirements. VA terminates the supplemental TDIU benefits for those who do not provide the required information or are determined no longer eligible. VA requires beneficiaries to annually self-certify their employment and income. Specifically, in an Employment Questionnaire required for the continuation of TDIU benefits, beneficiaries report their employment status during the previous 12 months, including the type of work, hours worked, time lost for illness, and highest gross earnings per month. A beneficiary’s income can exceed the income threshold for twelve consecutive months before VA discontinues the TDIU benefit.

VBA’s Steps to Measure Accuracy of Disability Compensation Claims Decisions

VBA measures the accuracy of disability compensation claim decisions, including TDIU claims, mainly through its Systematic Technical Accuracy Review (STAR). Specifically, for each of the 57 regional offices, completed claims are randomly sampled each month and the data are used to produce estimates of the accuracy of all completed claims. VA reports national estimates of accuracy from STAR reviews to Congress and the public through its annual performance and accountability report and annual budget submission. VBA also produces regional office accuracy estimates, which it uses to manage the compensation benefits program. 
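The continuation rule described above, under which benefits are discontinued only after income exceeds the threshold for twelve consecutive months, can be sketched as follows. The month-by-month representation and the function name are illustrative assumptions, not VBA's actual process.

```python
def should_discontinue(months_over_threshold):
    """Sketch of the twelve-consecutive-month continuation rule.

    `months_over_threshold` is a chronological sequence of booleans,
    one per month, indicating whether the beneficiary's income
    exceeded the eligibility threshold that month. The benefit is
    discontinued only once twelve consecutive months exceed it."""
    consecutive = 0
    for over in months_over_threshold:
        consecutive = consecutive + 1 if over else 0
        if consecutive >= 12:
            return True
    return False
```

For example, eleven months over the threshold followed by one month under it would reset the count, so the benefit would continue.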
Beginning in October 2012, VBA began using data from STAR reviews to also produce issue-based estimates of accuracy that measure the accuracy of decisions on the individual medical conditions within each claim. VBA also performs local quality reviews conducted by regional office quality review teams (QRT) formed to assess and monitor quality of staff performance and decisions. Specifically, QRTs review completed claims to assess individual rating specialists’ performance. In addition, QRTs review in-process claims, which are claims that the specialists have not yet finalized, to identify common errors and help prevent inaccurate decisions. See appendix II for additional information on these and other VBA quality assurance measures. In November 2014, we issued a report on VBA’s quality assurance efforts. We found, among other matters, that VBA had not always followed generally accepted statistical practices when calculating accuracy rates through STAR reviews, resulting in imprecise performance information. We also identified shortcomings in QRT practices and implementation that could reduce their effectiveness. We made a number of recommendations to VA to improve its measurement and reporting of accuracy, review the multiple sources of policy guidance available to claims processors, enhance local data systems, and evaluate the effectiveness of quality assurance activities. VA concurred with our recommendations. See appendix II for additional information on STAR and QRT reviews.

The TDIU Beneficiary Population Is Growing, Especially among Older Veterans

The Number of TDIU Beneficiaries and Benefit Costs Increased Over 5 Years

In fiscal year 2013, 332,934 veterans received TDIU benefits, an increase of 22 percent since fiscal year 2009, as shown in table 2. In 2013, there were 31,159 veterans who began receiving TDIU for the first time; that is, new beneficiaries. 
Moreover, the number of new beneficiaries increased in each of the 4 year-to-year comparisons we made and represented about 9 to 10 percent of the overall TDIU population in each of the 5 years we examined. Similar to the number of new beneficiaries, the number of beneficiaries whose benefits were discontinued also increased in each of the 4 years we compared; these discontinued beneficiaries comprised from 4 to 6 percent of the TDIU population. Overall, of the 74,224 beneficiaries whose benefits were discontinued from fiscal year 2009 through fiscal year 2013, 69 percent were discontinued due to the death of the beneficiary. Benefits were discontinued for the remaining 31 percent because beneficiaries generally either (1) earned enough income to exceed the income threshold, (2) failed to submit the required annual Employment Questionnaire for the continuation of benefits, or (3) received a change to their schedular rating. Overall, TDIU beneficiaries make up a substantial portion of the group of veterans who receive benefit payments at the 100 percent disability compensation rate. In fiscal year 2013, 3.7 million veterans received disability benefits; of these, 712,000 received benefit payments at the 100 percent disability compensation rate, as shown in figure 2. Within the population of veterans whose benefits were paid at the 100-percent rate, TDIU beneficiaries made up 45 percent. According to data provided by VA, TDIU beneficiaries received disability compensation payments totaling approximately $11 billion in fiscal year 2013, as shown in figure 3, which represented a 30 percent increase—or approximately $2.5 billion—since fiscal year 2009. In fiscal year 2013, over two-thirds of TDIU beneficiaries had dependent family members, which increased their benefits payments, while 31 percent were single with no dependents. 
These TDIU beneficiaries received higher benefit payments depending on (1) whether the beneficiary had a spouse, dependent parent, and/or child and (2) the number of such dependents. For example, when comparing the payments beneficiaries received in fiscal year 2013, a TDIU beneficiary with no dependents received $2,816 per month, a beneficiary with a spouse and no other dependents received $2,973, while a beneficiary with a spouse and one child received $3,088. We estimate that, in fiscal year 2013, the TDIU benefit was a $5.2 billion supplemental payment above what beneficiaries would have received at their regular schedular rating in the absence of TDIU benefits. VA does not track the overall costs of TDIU benefits, so we used disability compensation payment rate information, data on the TDIU beneficiary population, and data on the population of all new beneficiaries to calculate this estimate. For more information on how we calculated this estimate, see appendix I.

The Number of Older TDIU Beneficiaries Has Increased

The number of older beneficiaries (age 65 and older) increased for each of the 5 years we examined and by fiscal year 2013, they represented the majority (54 percent) of the TDIU population, as shown in figure 4. By 2013, 180,043 beneficiaries fell within this age group, representing a 73 percent increase from fiscal year 2009. Of these older beneficiaries, 56,578 were 75 years of age and older in fiscal year 2013 while 10,567 were 90 years of age and older. The number of younger beneficiaries (under 40 years of age) increased by 56 percent from fiscal year 2009 through 2013, although they made up a small proportion (5 percent) of the overall TDIU population in fiscal year 2013. In contrast to the growth in the number of older and younger beneficiaries, the number of middle-aged beneficiaries (aged 40 to 64) dropped by 14 percent to about 136,000 beneficiaries in fiscal year 2013. 
The increase in older beneficiaries, as described above, was largely attributable to older beneficiaries who began receiving TDIU benefits for the first time. Fifty-three percent of the increase in older beneficiaries, from fiscal year 2009 through fiscal year 2013, was attributed to the new older beneficiaries. The rest of the increase in older beneficiaries was attributed to aging of the existing TDIU population, with middle-aged beneficiaries (aged 40 to 64) aging into the age 65 and over population. A year-by-year breakdown is shown in figure 5. In comparing the new beneficiary population in fiscal years 2009 and 2013, the number of new older beneficiaries more than doubled to reach 13,259 beneficiaries. Of these new older beneficiaries, 2,801 were aged 75 and over while 408 were aged 90 and over. See appendix III for the breakdown of new older beneficiaries by age groups.

VBA’s Guidance, Quality Assurance Approach, and Income Verification Procedures Do Not Ensure That TDIU Decisions Are Well Supported

VBA Has Provided Incomplete Guidance on How to Determine a Veteran’s Unemployability

VBA provides guidance to rating specialists to help them determine if veterans meet the eligibility requirements for TDIU benefits. This guidance tasks rating specialists, based upon the evidence at hand, to determine veterans’ unemployability; it also recognizes that the process is subjective and involves professional interpretation. The guidance briefly lists factors that rating specialists should consider when deciding if a veteran is unemployable. For example, rating specialists should, as appropriate, consider medical opinions, treatment records, notes from vocational rehabilitation efforts, and receipt of Social Security disability benefits. 
The guidance also briefly lists factors that rating specialists are to treat as “extraneous” and therefore exclude from their analysis, such as a veteran’s age, the availability of work in the community, and the effects of non-service-connected disabilities on the ability to work. However, the guidance provided by VBA on which factors to consider when determining if a veteran is “unemployable” is incomplete in three ways, creating potential variation in TDIU claim decisions. First, rating specialists in some (5 of 11) of the discussion groups we held at five regional offices disagreed on whether they are permitted to consider additional factors that are not specifically mentioned in VBA’s guidance. Rating specialists held varying opinions on whether factors such as enrollment in school, education level, or prior work history should be used to decide the benefit claim. The following examples illustrate how this incomplete guidance can result in variable decisions:

A rating specialist recently reviewed a claim for TDIU that was submitted by a veteran suffering from traumatic brain injury. The rating specialist found that the veteran was enrolled in school part time and earning A’s in engineering classes, which the specialist felt clearly demonstrated employability. However, another rating specialist within the group stated that the veteran’s enrollment in classes would not be part of her decision-making.

A rating specialist granted benefits to a veteran with an 8th grade education because the specialist felt the veteran was unqualified for work other than the lumberjacking he had performed since leaving the military, despite the fact that the examiner found that the veteran could work in a job with fewer physical demands. A fellow rating specialist agreed that the veteran was qualified for few jobs, but would not have granted the benefit because the veteran’s physical restrictions did not disqualify the veteran from certain other jobs. 
Another rating specialist denied benefits for a veteran who was a retired dentist. The veteran’s medical examiner submitted a written opinion that the veteran could not perform dental work due to his inability to stand; however, the rating specialist decided the veteran’s prior work in such a high-skilled career was an indication that he could engage in a different line of work. Yet another rating specialist stated that he would have instead relied solely on the opinion of the medical examiner and consequently granted the TDIU benefit.

Second, rating specialists noted that for those factors that rating specialists can consider in their decision-making process, the guidance is silent on which factors, if any, should be given greater priority or weight. We confirmed that this information was not in the manual or guidance provided by VBA. As a result, during 5 of our 11 discussion groups with rating specialists, we heard differences in opinion about the primacy of factors rating specialists applied when making a decision on unemployability. Rating specialists in the majority of discussion groups (7 of 11) specifically noted that they could come to an opposite decision when reviewing the same evidence if the evidence were weighed differently. For example, during a few of these discussion groups, rating specialists told us they relied heavily on medical opinions while others considered Social Security Disability Insurance (SSDI) payments as the strongest marker of a veteran’s inability to work. In another instance, a rating specialist told us that a medical opinion was always weighted more heavily than all other evidence in the veteran’s file, while another specialist expressed hesitancy to rely too much on the examiner’s opinion. Third, the guidance does not provide instruction on how to separate extraneous factors from allowable ones. Some of the discussion groups (6 of 11) told us that not having this guidance was a significant challenge for them. 
Findings from our case file reviews also illustrate this issue: one file described a 77-year-old veteran claiming TDIU benefits for blindness that was caused by (1) a service-connected disability, (2) glaucoma, and (3) macular degeneration. However, because all three conditions related to the veteran’s quality of vision, the rating specialist noted in the file her difficulty separating the effects of the service-connected disability from those of the non-service-connected glaucoma and macular degeneration, which were related to the veteran’s age. Rating specialists also told us that despite guidance to the contrary, they still consider age as a factor. At one end of the age spectrum, specialists in the majority (7 of 11) of the discussion groups told us that they have difficulty rationalizing granting benefits to veterans beyond 65 years of age. In each of these groups, at least one rating specialist provided an example of when they may consider age as a factor in the TDIU benefit decision. For example, one rating specialist shared a case of an older veteran who retired from police service more than 10 years before applying for TDIU benefits. He specifically had a concern with program rules which did not allow him to consider the veteran’s age and retirement status. At the other end of the age spectrum, rating specialists in four of the discussion groups described difficulties in granting TDIU benefits for younger veterans because they do not want these veterans, in the future, to be discouraged from attempting a return to work for fear of losing the benefit.

Format and Delivery of TDIU Guidance Does Not Support Efficient Claims Decision-Making

Rating specialists in the majority (7 of 11) of our discussion groups at five regional offices reported that VBA’s guidance for reviewing TDIU claims is formatted and delivered in ways that make it difficult for them to complete their decision-making responsibilities in an efficient manner. 
Federal internal control standards highlight the need for collecting, consolidating, and distributing pertinent information in a form that allows employees to perform their duties efficiently. For several reasons, VBA’s guidance falls short of this standard. First, TDIU guidance is delivered using multiple formats, including manuals, policy and procedure letters, summaries of relevant legal decisions, frequently-asked-question responses, monthly bulletins, feedback from quality assurance reviews, e-mails, and internal webpages. However, the information provided in these various forms can also vary, making it challenging for rating specialists to have a definitive source for TDIU benefit decision guidance. Moreover, some of the guidance, for example responses to frequently asked questions, is sent only to the regional office that submitted the question, according to rating specialists in a couple of the 11 discussion groups. Second, VBA rating specialists in 8 of the 11 discussion groups told us they have difficulty finding the most current guidance. While VBA has a manual for TDIU benefit decisions, officials at all six regional offices told us that the manual is outdated. VBA officials acknowledged this condition and stated they issue interim guidance in many forms between manual updates because such updates are time-consuming and difficult to do on a regular basis. Third, rating specialists in a couple of the discussion groups told us that the guidance they receive typically lacks search features that would allow them to readily find the most current TDIU guidance. For example, the rating specialists described having to read through numerous bulletins to find guidance on TDIU and then look for the specific guidance they need, as opposed to being able to use a key word search to capture all related information. Some VBA central and regional office efforts address the disparate nature of the guidance. 
To locate guidance more readily, two of the six regional offices we visited had developed “cheat sheets,” which they said captured all of the guidance into a single searchable document. VBA officials also told us they are taking steps to develop an electronic manual that is intended to consolidate and replace many other forms of guidance, including manuals and memoranda of policy changes, for processing all claim types and will include a search feature. VBA is initially creating a web portal to house all existing guidance and subsequently will consolidate the guidance into one processing manual. VBA has completed two of the four stages for the web portal and is in the process of rewriting the manual. Officials told us they plan to complete the consolidation by the end of fiscal year 2015.

VBA’s Quality Assurance Approach Does Not Provide a Comprehensive Assessment of TDIU Decisions

VBA’s quality assurance approach—accomplished mainly through its Systematic Technical Accuracy Review (STAR)—may not be providing a comprehensive assessment of TDIU claim decisions. The agency’s current approach does not allow it to identify variations in these decisions or ascertain the root causes of variation which may exist. Federal internal control standards state that agencies should assess performance using control activities, such as quality assurance checks, and that performance information should provide agency officials with information on the extent to which claims decisions are complete, accurate, and consistent. However, VBA’s quality assurance standards indicate that a quality assurance officer reviewing TDIU decisions for errors cannot substitute his or her professional opinion for that of the rating specialist who made the original decision. This applies, for example, to interpretations of the medical and vocational evidence, as well as interpretations of the underlying regulations governing the benefit. 
For the quality assurance officer to decide that the rating specialist made an error, the error must be clear and undebatable. Because of this high standard, a STAR review of a sample of claims finalized during the first three quarters of fiscal year 2014 determined that nearly 95 percent of TDIU claims (872 of 920) were error-free. Of the 48 claims found to contain an error, all the errors were found to be “procedural,” such as an incorrect date for the onset of unemployability. No “decisional” errors were found, which are errors in the decision to grant or deny the benefit. According to VBA officials, it is unlikely that they will find many decisional errors because there is so much individual judgment allowed in TDIU claim decisions, and VBA’s quality assurance standards do not allow for the reevaluation of the professional opinion of the original rating specialist. Beyond STAR, the regional offices also conduct quality review team (QRT) reviews for disability compensation claim decisions in general, but these reviews may not be providing much insight into the completeness, accuracy, and consistency of TDIU claim decisions. The regional offices use QRT reviews to identify trends in error and review individual rating specialists’ performance (for past claim decisions as well as claims still in review). However, VBA officials in almost all (5 of 6) of the regional offices told us that these reviews generally include very few TDIU claims. Moreover, QRT reviews apply a similar approach to calculating errors as STAR reviews. For example, QRT reviews give deference to professional opinion, and officials we spoke with noted that questionable decisions made by rating specialists are typically coded as a “training comment” rather than as an “error.” In such instances, QRT officials discuss the claim with the rating specialist. Only one of the regional offices we visited was systematically tracking the type of TDIU errors in its QRT review process. 
Quality assurance approaches that VBA has used with non-TDIU disability compensation claims suggest that other options may be available to more comprehensively assess the completeness, accuracy, and consistency of TDIU claim decisions, and to identify possible enhancements. For example, as we reported in 2014, VBA conducted a targeted review of military sexual trauma claims using a consistency questionnaire to test rating specialists’ understanding and interpretation of policies in response to concerns that post-traumatic stress disorder claims related to military sexual trauma were not being accurately decided. Performing a similar review to gauge the degree of consistency in TDIU claim decisions could help VBA identify differences—perhaps beyond reasonable limits—in how rating specialists interpret the guidance and apply their professional judgment. Further, measuring consistency of decisions for specialists working in different regional offices could provide additional insight. VBA has used other quality assurance approaches to assess non-TDIU disability compensation claim decision-making, which are discussed in greater detail in appendix II. While we recognize that TDIU benefit decisions have an inherently subjective component, the current approach that VBA uses cannot ensure that the decisions are comprehensively assessed for completeness, accuracy, and consistency.

VBA Does Not Verify Self-Reported Income Eligibility Information

While VBA requires TDIU claimants and beneficiaries to provide information on their employment earnings, VBA places the benefits at risk of being awarded to ineligible veterans by not using third-party data sources to independently verify self-reported earnings. To begin receiving and remain eligible for TDIU benefits, veterans must meet the income eligibility requirements. 
VBA first determines a claimant’s income by requesting information on the last 5 years of employment on the claim form and subsequently requires beneficiaries to annually attest to any income changes. Rating specialists use the information provided by claimants to request additional information from employers and, when possible, verify the claimant’s reported income, especially for the year prior to applying for the benefit. To verify this income, VBA sends a form to the employers identified on the veteran’s benefit claim and asks them to provide the income earned by the veteran. However, VBA officials indicated that employers provide the requested information only about 50 percent of the time. This estimate was consistent with what we heard from our discussion groups of rating specialists and the number of completed forms found in the files we reviewed. Rating specialists in 5 of the 11 discussion groups told us the low response rate is often due to businesses having closed, lack of incentives to return the form, and staff at human resources offices being unfamiliar with the veteran. If VBA does not receive verification from a veteran’s employer after multiple attempts, it accepts the veteran’s claimed earnings. None of the rating specialists in the 11 discussion groups we held in five regional offices discussed other ways of verifying the self-attested income that beneficiaries report on the initial TDIU claim or the annual form that must be submitted to continue to receive benefits. However, one rating specialist explained that in the rare instances when beneficiaries indicate earning above the income threshold, the specialists review additional documents, such as tax documents, submitted by the veteran. 
VBA previously conducted audits of beneficiaries’ reported income by obtaining income verification matches from Internal Revenue Service (IRS) earnings data through an agreement with the Social Security Administration (SSA), but is no longer doing so. Under that process, VBA annually obtained earnings data and compared them with TDIU beneficiaries’ self-reported income. VBA officials told us that the agency is not performing income verification matches for TDIU claims despite having standing agreements with the IRS and SSA to do so. In 2012, VBA suspended income verification matches to allow for the development of a new system that would allow for more frequent, electronic information sharing. VBA officials told us that they plan to roll out a new electronic data system that would allow for compatibility with SSA data sources in fiscal year 2015. They noted that they plan to use this system to conduct more frequent and focused income verifications to ensure beneficiaries’ continued entitlement. VBA officials also anticipate being able to use the system to conduct income verifications for initial TDIU applicants beginning in fiscal year 2016. However, they did not provide us with a plan or timeline for implementing this verification system. In addition, VA has not fully taken advantage of previous opportunities to use the National Directory of New Hires (NDNH) to verify self-reported income information for the TDIU program. The NDNH, which is maintained by the Department of Health and Human Services (HHS), was established in part to help states enforce child support orders against noncustodial parents and contains more timely state wage information. The NDNH also includes data from state directories of new hires, state records of unemployment insurance benefits paid, and federal agency payroll data, all of which can be used to help establish a picture of a claimant’s work history and earnings. 
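To illustrate the kind of third-party income match described above (comparing self-reported income with external earnings records such as quarterly wage data), here is a minimal sketch. The record layouts, field names, and the assumption of quarterly wage amounts are illustrative, not VBA's or HHS's actual data formats.

```python
def flag_income_discrepancies(self_reported, wage_records, threshold=11490):
    """Flag beneficiaries whose externally verified earnings exceed the
    eligibility threshold even though their self-reported income does not.

    `self_reported` maps beneficiary IDs to self-certified annual income;
    `wage_records` maps IDs to a list of quarterly wage amounts from an
    external source (e.g., state wage data). Both structures are
    illustrative assumptions. The default threshold is the 2013 federal
    poverty guideline for an individual cited in this report."""
    flags = []
    for vet_id, reported in self_reported.items():
        verified = sum(wage_records.get(vet_id, []))
        # discrepancy: verified earnings over the threshold while the
        # self-reported figure is at or under it
        if verified > threshold >= reported:
            flags.append(vet_id)
    return flags
```

Matches of this kind are what the suspended IRS/SSA process performed annually and what more frequent NDNH-style quarterly data could support.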
NDNH data are updated at least quarterly, providing a more recent snapshot of earnings than the IRS wage information that SSA obtains. However, access to the NDNH is limited by statute. In our 2006 report on VA’s TDIU benefits, we recommended that VA use this directory to enforce earnings limits in its programs. VA was subsequently granted temporary statutory authority, twice, to access the NDNH for use in employment and income verification. Specifically, VA had such access from December 26, 2007, through November 18, 2011, and again for a period of 180 days beginning on September 30, 2013. VA, however, never reached an agreement with HHS to use the data during these time periods due to its limited financial and workforce resources at the time, according to officials. VA no longer has statutory authority to access the NDNH data. SSA, on the other hand, does have statutory access to NDNH data, and SSA’s Office of the Inspector General (OIG) recently reviewed the accuracy and effectiveness of NDNH data in identifying overpayments for SSA benefit programs including SSDI. The OIG concluded that NDNH’s quarterly data specifically aided SSA in identifying $141 million in improper payments in fiscal year 2009.

Options for Revising TDIU Eligibility Requirements

We Identified Seven Options That Have Been Proposed by Others to Revise TDIU Eligibility Requirements and the Benefit Structure

Based on a review of literature, we identified a number of options proposed by others for revising the TDIU benefit: six focused on revising eligibility requirements and one that would change the benefit structure. More specifically, the six eligibility-related options involve changing existing requirements in various ways, for example, setting age limits, lowering the disability rating requirement, or increasing the income thresholds. 
The seventh option we identified would affect the benefit structure by lowering, but not immediately eliminating, TDIU benefit payments as beneficiaries earned income beyond the eligibility limit. Based on interviews with selected experts and representatives of veterans service organizations (VSO), we identified a range of potential strengths and challenges associated with each option. When discussing the potential strengths and challenges of each option, the experts and VSO representatives commonly mentioned the equity of the proposed change, an increase or decrease in VA’s management and administration efforts and cost, and the effect on veterans. The experts and VSO representatives expressed a range of opinions across the options’ strengths and challenges. Moreover, some stakeholders expressed opposing views, which could be attributable to differences in the interpretation of the material as well as differing policy stances. We did not independently assess the individual merits or accuracy of the views expressed by these experts and VSO representatives; however, we recognize that weighing and possibly implementing any of these revisions would be a complex process. For example, some of the options present possible opportunities for VA to better target TDIU benefits to veterans who are unemployable, but implementing these options could pose challenges in ensuring that all veterans are treated equitably. Each of the seven options, along with the potential strengths and challenges identified by the stakeholders we interviewed, is summarized below.

Discontinue beyond retirement age: Discontinue the TDIU payment when the veteran reaches Social Security’s full retirement age (65 to 67, depending on birth year). This option was proposed by the Congressional Budget Office (CBO) in 2013.
CBO based this option on the rationale that most veterans who are older than Social Security’s full retirement age would not be in the labor force because of their age, and that a lack of earnings among them would probably not be attributable to service-connected disabilities. CBO also noted that veterans over age 65 who currently receive TDIU benefits—most of whom began receiving them in their 50s—would likely have income from other sources, including the regular VA disability compensation benefit and Social Security retirement benefits. CBO also provided an argument for retaining the current policy, noting that the benefit should be based solely on the ability to work and that using age as a factor would be unfair. CBO stated that some disabled veterans would not be able to replace the TDIU supplement because their Social Security retirement benefits and personal savings are low. If the option to discontinue TDIU payments when the veteran reaches Social Security’s full retirement age were implemented, the veteran would continue to receive disability compensation payments at a dollar amount appropriate to the underlying VA disability rating. Veterans beyond Social Security’s full retirement age at the time of their initial application would be ineligible for TDIU. CBO did not address how the discontinuance of TDIU benefits might interact with Social Security retirement benefits. The age restriction proposed in this option has been implemented in the federal Social Security Disability Insurance (SSDI) program. In SSDI, once program beneficiaries reach full retirement age, their benefit converts to a Social Security retirement benefit, although the amount of their benefit payment remains the same.

Potential Strengths Identified by Stakeholders
- Could better target the intended population—older veterans might not be likely to work past retirement age.
- Benefit costs might be reduced due to the reduction in payments to older veterans.
Potential Challenges Identified by Stakeholders
- Some veterans might not have income replacement available—especially those who had been on TDIU in advance of reaching retirement age.
- Could be unfair to veterans—older individuals might have the option of working past the retirement age, but older veterans whose service-connected disabilities stop them from working cannot.

Consider vocational assessment: Consider the results of a mandatory vocational assessment before granting TDIU benefits. The Institute of Medicine of the National Academies (IOM) has reported that there are vocational counselors with the appropriate education and training to assess employability, but not all veterans who claim TDIU receive such an evaluation. The vocational assessment would address whether the veteran could be rehabilitated in order to maintain employment. In addition, rating specialists working on TDIU claims would receive training in how to interpret the findings from the vocational assessment. Rating specialists would then be able to use this assessment, along with the results of medical reports and other information, to help determine the veterans’ ability to engage in work activities.

Potential Strengths Identified by Stakeholders
- Could help provide a more complete appraisal of the veteran’s ability to work.

Potential Challenges Identified by Stakeholders
- Could require VA to expand its vocational rehabilitation program to address the increase in required assessments.
- Could cause delays in benefit decisions.
- Rating specialists and vocational rehabilitation counselors might need additional training on how to assess the vocational rehabilitation findings.
- Could increase the burden on veterans, as they would likely need to submit to an additional assessment.
- By adding a new factor to consider, could increase the subjectivity of claim decision-making, thereby possibly creating more variation in decisions.
Gradually reduce payment: Implement a gradual reduction in the TDIU payment as the veteran, in returning to work, exceeds the maximum income that determines eligibility for TDIU, which was $11,888 per year for an individual in fiscal year 2013. The existing TDIU regulations call for discontinuation of the TDIU benefit once a veteran has income above the maximum after having worked for more than a year. IOM found that, as a consequence, the beneficiary’s total monthly disability payment could be reduced by anywhere from 40 percent to 64 percent. This reduction represents the drop in disability pay from the 100 percent disability rating, which the veteran received with TDIU, to the pay at the veteran’s regular schedular disability rating. IOM noted that this drop in disability payments might deter some veterans from trying to return to work. The option to implement a gradual reduction in the TDIU payment is similar to the design of a program SSA is testing for SSDI beneficiaries, which reduces payments by $1 for every $2 of income earned beyond the maximum allowed to maintain eligibility. Participants in the program have also received benefits counseling with the goal of helping them return to work.

Potential Strengths Identified by Stakeholders
- Beneficiaries could have an incentive to return to work, since they would continue to receive benefits even while earning income beyond the maximum allowed, instead of losing the benefits entirely.
- Could reduce the total amount of benefits paid out by VA, since some beneficiaries would receive reduced benefit payments as they earn income beyond the maximum.
- Could provide positive mental health improvements as beneficiaries return to work.

Potential Challenges Identified by Stakeholders
- Beneficiaries could have a disincentive to work if earned income above a certain threshold begins to reduce the benefit payment amount.
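The $1-for-$2 offset in SSA’s test program can be sketched in a few lines. This is only an illustration of the arithmetic: the function name and the use of flat annual dollar amounts are assumptions, and the $11,888 earnings limit is the fiscal year 2013 figure cited above.

```python
def offset_benefit(annual_earnings, annual_benefit, earnings_limit=11888):
    """Illustrative $1-for-$2 offset: reduce the benefit by $1 for every
    $2 of earnings beyond the eligibility limit, never below zero."""
    excess = max(0, annual_earnings - earnings_limit)
    return max(0.0, annual_benefit - excess / 2)

# A veteran earning $4,000 over the limit keeps the benefit minus $2,000,
# rather than losing the full TDIU supplement outright.
print(offset_benefit(15888, 20000))   # 18000.0
print(offset_benefit(10000, 20000))   # 20000.0 (under the limit: no offset)
```

Under the current rule described above, the same veteran’s payment would instead drop from the 100 percent rate to the regular schedular rate once earnings exceeded the limit for more than a year.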
Increase earnings limit: Increase the maximum earnings limit for TDIU eligibility to match that used in SSDI, which was $12,480 per year (for a non-blind individual) in fiscal year 2013. Economic Systems, Inc. (Econosys) noted in its study on VA compensation payments that very poor veterans with disabilities would also be eligible for SSDI and, therefore, the maximum earnings limits for TDIU and SSDI should be aligned.

Potential Strengths Identified by Stakeholders
- Could reduce confusion for veterans and VA regarding the maximum income allowed to qualify for SSDI and TDIU.
- Beneficiaries could have more earnings and still qualify for TDIU.
- The increase could be easy to implement.

Potential Challenges Identified by Stakeholders
- Would likely increase VA benefit costs and workload as more veterans qualify for TDIU under the higher earnings limit.

Lower disability rating criteria: Lower the TDIU eligibility criteria for veterans with multiple disabilities to a combined schedular disability rating of 60 percent. The existing TDIU regulation states that a veteran with multiple disabilities is eligible for TDIU if the combined rating is at least 70 percent, so long as one of the multiple disabilities is rated at least 40 percent. The change in the multiple-disability rating threshold would also eliminate the requirement that one of the disabilities have a minimum rating of 40 percent.

Potential Strengths Identified by Stakeholders
- Lowering the criteria could make it easier for veterans to qualify for TDIU if they did not have any single disability rated above 40 percent but were still considered unemployable.
- Could make the eligibility criteria more consistent: instead of requiring a 70 percent combined rating for veterans with multiple disabilities but a 60 percent rating for veterans with a single disability, the minimum required rating of 60 percent would be the same regardless of whether a veteran had a single disability or multiple disabilities.

Potential Challenges Identified by Stakeholders
- Could increase benefit costs and the workload for VA as more veterans would qualify.
- May overemphasize the effects of a disability rating comprised of multiple disabilities, which may not be as severe as the effects of a single disability with the same rating.

Add new unemployability criteria: Amend the criteria for assessing “unemployability” to include the veteran’s education, work history, and the medical effects of an individual’s age on his or her potential employability. For example, an older veteran with a college education could have the appropriate education and training needed to make it easier to transition to a new type of employment. As noted by IOM, shifts in the labor market have reduced the physical demands of labor, potentially making inclusion of these additional criteria in TDIU decisions appropriate. However, an older veteran with less education and whose experience is limited to a more physically demanding trade might not be able to find alternative employment. The option to amend the criteria for assessing “unemployability” is similar to the criteria used to assess certain SSDI applicants, which consider an individual’s education and work history, along with age, to assess the applicant’s capacity to work in other jobs.

Potential Strengths Identified by Stakeholders
- The new criteria would add factors that could be relevant for VA to consider when determining whether a veteran is employable.
Potential Challenges Identified by Stakeholders
- Could be unfair to veterans—veterans who are otherwise similar might not be treated equally when deciding eligibility.
- By adding multiple new factors to consider, could increase the subjectivity of claim decision-making, thereby possibly creating more variation in decisions.
- Rating specialists might need additional training and guidance to ensure consistency in their TDIU benefit decisions.
- VA could incur additional administrative costs, as claims could require additional documentation before a decision is made.

Use patient-centered work disability measure: Adopt a “patient-centered work disability measure” to evaluate TDIU eligibility. In addition to assessing the veteran’s work history, as currently performed, VA would consider other factors, including motivation and interests. To be sensitive to the veteran’s unique circumstances and areas of concern, VA staff would measure multiple factors—impairments, functional limitations, and disability—relevant to health-related work disability. Particular care would be taken to include measures of physical, psychological, and cognitive function.

Potential Strengths Identified by Stakeholders
- Could provide a more complete appraisal of the veteran’s employability.
- By applying the assessment consistently to all veterans using the prescribed measures, the results could be standardized—potentially improving the consistency of the decisions.

Potential Challenges Identified by Stakeholders
- VA could incur additional administrative cost, as the measure would require collecting additional information.
- Could delay benefit decisions while rating specialists collect the additional information required for the measure.
- Rating specialists could need additional training and guidance on how to apply the measure.
- This option could require VA to change how the agency measures disability, such as through the inclusion of veterans’ motivations and interests.
Advisory Committee Recommended Revisions to TDIU, but VA Has Not Taken Action

In 2012, the Advisory Committee on Disability Compensation made recommendations to VA regarding potential revisions to the TDIU benefit, and while VA concurred with those recommendations, it has yet to act on them. The advisory committee is composed of experts with experience in the provision of VA disability compensation or who are leading medical or scientific experts in relevant fields. The committee, when consulted by the Secretary of Veterans Affairs, is required to provide, among other things, an ongoing assessment of the effectiveness of the schedule for rating disabilities and advice on the most appropriate means of responding to the needs of veterans with respect to disability compensation. Taking the committee’s advice into consideration could better position the agency to meet federal internal control standards. For example, in its 2012 report, the committee noted the increase in the number of veterans receiving TDIU benefits as well as concerns regarding potential internal inconsistencies in TDIU decisions. As a consequence, the committee recommended that the agency (1) study whether age should be considered when deciding if a veteran is unemployable and (2) require a vocational assessment for all TDIU applicants. VA concurred with the recommendation to study whether age should be considered. The agency also concurred with the recommendation that called for it to require a vocational assessment, though VA noted that before it could proceed, it needed to complete a study on whether it was possible to disallow TDIU benefits for veterans whose vocational assessment indicated they would be employable after rehabilitation. To date, VA officials have told us, without explanation, that no such studies or analyses, in either area, have been planned or initiated.
Conclusions

The benefits veterans are entitled to, as well as VA’s decisions on what constitutes a work disability, need constant refinement to keep pace with changes in medicine, technology, and the modern work environment. Within this broad context, VA can position itself to better manage the TDIU benefit and look for opportunities to strengthen the assessments of its eligibility decisions. Unfortunately, the integrity of the decision-making process for the TDIU benefit is at risk due to incomplete guidance that leaves much open to interpretation. As a consequence, rating specialists may be using and interpreting evidence differently to determine unemployability, which could mean that benefits are granted to one veteran but denied to another veteran with similar circumstances and impairments. Within this environment, VA’s quality assurance approach may not be fully identifying the extent of errors made by rating specialists or providing adequate assurance of the overall soundness of the TDIU benefit decision-making process. Specifically, little if anything is known about the consistency of TDIU decision-making across individuals in the same regional office as well as across regional offices. We heard about the subjectivity of and challenges in the claims decision-making process throughout our visits to VBA regional offices, which elevates the need for a thorough process to ensure that rating specialists bring reasonably consistent judgments to benefit decisions—decisions that, for some individual veterans, could result in 50 or more years of benefits. In addition, VA does not use available third-party earnings data to verify veterans’ self-attested employment history and income information. Without such verification, VA cannot adequately ensure that the eligibility standards are being met, which places these benefits at risk of being awarded to ineligible veterans.
These deficiencies also have the potential to increase the cost of TDIU benefits because some ineligible veterans may be receiving benefits while other deserving veterans could be denied them. Having a strong framework for program integrity is important for any federal program, and in light of the multi-billion dollar—and growing—TDIU benefit, taking steps to ensure payments are properly awarded to veterans is essential. VA is at a junction where it is revising its complex and multifaceted disability compensation benefits program. As VA works on adjusting the eligibility criteria and management of the compensation benefits program, the agency must balance such things as improvements in the assistance available to those with disabilities, the increasing number of veterans, and fiscal constraints. Concurrent with this effort, VA has the opportunity to benefit from the attention the TDIU benefit has received from various experts. Notably, VA’s own advisory committee has pointed out the need for VA to study, for example, age as a possible decision-making factor. The committee members’ expertise and familiarity with VA compensation benefits and veteran needs are meant to aid VA in identifying what additional information it needs to effectively and efficiently review the TDIU benefit. By concurring with the committee’s recommendations but not taking action, VA has delayed the timely analysis of the benefit. The options and the potential strengths and challenges identified by experts and VSO representatives may warrant consideration in any broader benefit refinement discussions and efforts to improve the TDIU benefit design and eligibility criteria going forward.
Recommendations for Executive Action

To help ensure that TDIU decisions are well supported and TDIU benefits are provided only to veterans whose service-connected disabilities prevent them from obtaining or retaining substantially gainful employment, we recommend the Secretary of Veterans Affairs direct the Under Secretary for Benefits to:

1. Update the TDIU guidance to clarify how rating specialists should determine unemployability when making TDIU benefit decisions. This updated guidance could clarify whether factors such as enrollment in school, education level, and prior work history should be used and, if so, how to consider them, and whether or not to assign more weight to certain factors than others. Updating the guidance would also give VBA the opportunity to re-examine the applicability, if at all, of other factors it has identified as extraneous.

2. Identify other quality assurance approaches that will allow the agency to conduct a comprehensive assessment of TDIU benefit claim decisions. The approach should allow VBA to assess whether decisions are complete, accurate, and consistent, and to ascertain the root causes of any significant variation so that VBA can take corrective actions as appropriate. This effort could be informed by the approaches VBA uses to assess non-TDIU claims.

3. Verify the self-reported income provided by veterans (a) applying for TDIU benefits and (b) undergoing the annual eligibility review process by comparing such information against IRS earnings data, which VBA currently has access to for this purpose. VA could also explore options to obtain more timely earnings data from other sources to ensure that claimants are working within allowable eligibility limits.

4.
In light of VA’s agreement with the recommendations made by the Advisory Committee on Disability Compensation, develop a plan to study the complex TDIU policy questions of (1) whether age should be considered when deciding if veterans are unemployable and (2) whether it is possible to disallow TDIU benefits for veterans whose vocational assessment indicated they would be employable after rehabilitation.

Agency Comments and Our Evaluation

We provided a draft of this report to VA for review and comment. In its written comments, reproduced in appendix IV, VA generally agreed with our conclusions and concurred with all of our recommendations. The agency outlined how it plans to address our recommendations as follows:

Regarding our recommendation to update the TDIU guidance to clarify how rating specialists should determine unemployability when making TDIU benefit decisions, VA stated that VBA will review current TDIU policies and procedures to identify necessary improvements, including developing new policies and procedures that provide clear guidance on deciding these claims. The updated guidance will address the extent to which age, education, work history, and enrollment in training programs are factors that claims processors must address. VA anticipates that Compensation Service will complete this review and provide options to VBA for a decision by the end of January 2016.

Regarding our recommendation to identify other quality assurance approaches that will allow the agency to conduct a comprehensive assessment of TDIU benefit claim decisions, VA stated that the Compensation Service Quality Assurance Staff will add TDIU-specific questions to the In-Process Review checklist at the regional offices between July and September 2015. Based on the results of the reviews, VBA will determine the most effective approach for assessing the accuracy and consistency of TDIU decisions by October 31, 2015.
Regarding our recommendation to verify the self-reported income provided by veterans (a) applying for TDIU benefits and (b) undergoing the annual eligibility review process by comparing such information against IRS earnings data, VA stated that VBA is developing an upfront verification process. This involves expanding the data sharing agreement with SSA, which enables VBA to receive federal tax information via an encrypted electronic transmission through a secure portal. VBA expects to implement this upfront verification process for TDIU claimants by January 31, 2016.

Regarding our recommendation to develop a plan to study (1) whether age should be considered when deciding if veterans are unemployable and (2) whether it is possible to disallow TDIU benefits for veterans whose vocational assessment indicated they would be employable after rehabilitation, VA stated that Compensation Service initiated a review of TDIU policies and procedures in April 2015. Compensation Service is considering both the use of age and vocational assessments in TDIU benefit claim decisions and will develop a plan to initiate any studies, legislative proposals, or proposed regulations deemed necessary. VBA expects to complete an action plan by July 31, 2015.

VA also provided technical comments, which we incorporated as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and the Under Secretary for Benefits. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

This report (1) examines age-related trends in the population of Total Disability Individual Unemployability (TDIU) beneficiaries and benefit payments, (2) assesses how the Department of Veterans Affairs’ (VA) procedures position the agency to ensure that TDIU benefit decisions are supported, and (3) describes suggested options for redesigning TDIU benefits and eligibility requirements. To examine these objectives, we reviewed prior GAO, disability commission, and expert reports; relevant federal laws, regulations, and procedures for reviewing new and continuing claims; and program documentation, including procedure manuals, training materials, and supporting documents. We also conducted interviews with VA and Veterans Benefits Administration (VBA) officials in their central and regional offices; disability experts; and representatives of veterans groups. We collected and analyzed data from VA on TDIU benefit claims and beneficiaries covering fiscal years 2009 through 2013. The data included information such as the beneficiaries’ age and disability ratings as well as total amounts paid in disability benefits. To obtain information and views from VBA regional office officials involved in claim reviews, we also visited six regional offices, where we held discussion groups with rating specialists and interviewed quality review team (QRT) coaches, QRT members, and regional office management. In addition, we conducted a non-generalizable file review at each of the selected regional offices. We identified options proposed for revising the TDIU benefit and obtained experts’ views on the opportunities and challenges the options posed.

Analysis of VA Data

To examine the age-related trends in the population of TDIU beneficiaries and benefit payments, we analyzed data provided by VA for fiscal years 2009 through 2013.
VA provided data from its Veterans Service Network, which is the agency’s data entry platform for benefits tracking as well as its Beneficiary Identification Records Locator Subsystem, a system used to verify that an applicant is a veteran. Prior to 2009, VA also used an older system, Compensation and Pension Master Record, for tracking benefits. Agency officials told us that, while this system collected similar information to the newer databases, there was some variation. As a consequence, we limited the years we examined to those covered by the newer databases in order to ensure the consistency of the data. The data included information such as the beneficiaries’ age, schedular disability ratings, benefit discontinuations, and total amounts paid in disability benefits. To assess the reliability of the data, we conducted multiple interviews with knowledgeable agency officials. During these interviews we obtained detailed information on the methods used to generate the data requested, including limitations and assumptions made by VBA officials. In addition, we performed electronic logic testing of the program used to extract the data provided by VA. Based on these efforts, we found the data to be sufficiently reliable for our purpose. As part of our data analysis, we estimated the cost of the TDIU benefit payments in fiscal year 2013. We defined the cost of the TDIU benefit payments as the difference between the disability payments VA made at the 100 percent disability rate—which beneficiaries would have received due to the TDIU designation—in comparison to the amount beneficiaries would have been paid based on their regular disability rating. We estimated the TDIU benefit cost because we did not have complete information on the disability compensation payments made to individual beneficiaries. 
For example, the data provided by VA did not include information such as when a new beneficiary began receiving TDIU benefits or data on how much each beneficiary received in monthly disability payments for the full fiscal year—both of which would affect how much VA paid in benefits for the fiscal year. Due to these limitations, we made a number of assumptions for the estimate. We assumed that the population of new beneficiaries had only been receiving payments at the 100 percent disability rate or at their regular disability rate for 6 months and that the rest of the beneficiary population had been receiving payments at the 100 percent rate or at their regular disability rate for the full fiscal year. We also assumed that the beneficiaries’ dependent status and, when applicable, their regular schedular disability rate remained constant and that beneficiaries had no more than three children and that all children were under age 18. As part of the TDIU cost estimate, we removed the estimated benefit payments that would have been made to beneficiaries whose benefits were discontinued. We assumed that the benefit payments had been discontinued for this population for 6 months, on average. In addition, we calculated a range for the benefits paid to the discontinued beneficiaries at their regular disability rating. One estimate in the range assumed the discontinued beneficiaries were all rated as 60 percent disabled, the lowest disability rating at which a veteran would be eligible for TDIU. The second estimate in the range assumed the discontinued beneficiaries were all 90 percent disabled, which was the highest rating a veteran could receive before experiencing an increase in payments due to TDIU. 
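The core of the per-beneficiary estimate described above can be sketched briefly. This is a minimal illustration of the arithmetic under the stated assumptions; the function name and the monthly dollar amounts are placeholders, not actual VA compensation rates.

```python
def estimated_tdiu_cost(monthly_at_100, monthly_at_schedular, months_on_rolls):
    """Cost attributed to the TDIU designation: payment at the 100 percent
    rate minus payment at the regular schedular rating, over the months
    the beneficiary is assumed to have been on the rolls."""
    return (monthly_at_100 - monthly_at_schedular) * months_on_rolls

# Assumptions from the text: new beneficiaries receive 6 months of
# payments; all other beneficiaries receive the full fiscal year.
new_beneficiary = estimated_tdiu_cost(2800, 1000, months_on_rolls=6)    # 10800
continuing = estimated_tdiu_cost(2800, 1000, months_on_rolls=12)        # 21600
```

Summing this difference across new, continuing, and (subtracting) discontinued beneficiaries yields the fiscal year estimate, with the 60 percent and 90 percent schedular ratings providing the lower and upper bounds described above.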
We also assigned dependent categories to the discontinued beneficiaries, assuming that the distribution of this population was proportional to the population of (1) TDIU beneficiaries as a whole, (2) beneficiaries who were rated as 60 percent disabled, or (3) beneficiaries who were rated as 90 percent disabled.

Selection of VA Regional Offices for Interviews and Review of TDIU Claims

We selected six regional offices to visit to gather additional information on the TDIU claims review and quality assurance processes, including the challenges associated with such reviews. These six regional offices were located in Boston, Massachusetts; Denver, Colorado; Manchester, New Hampshire; New Orleans, Louisiana; Portland, Oregon; and Seattle, Washington. We selected these regional offices based on the following criteria:

TDIU caseload size: We used data provided by VA on the caseloads of the 57 regional offices. These data included total caseloads, completed and pending, by regional office for fiscal year 2013. We selected regional offices to represent varying caseloads by selecting from regional offices we categorized as having high, medium, and low caseloads.

Percentage of TDIU claims resulting in granted benefits: We used data provided by VA on the percentage of TDIU benefits granted and denied at each regional office in fiscal year 2013. We selected regional offices to represent variation in the percentages of claims resulting in granted benefits. We sorted the data by the percentage of claims resulting in granted benefits and then divided the sorted list into two main groups representing approval rates of 21 to 40 percent and 41 to 60 percent. The majority of regional offices fell evenly within these two groups. A third group contained three outlying offices that granted benefits in over 60 percent of TDIU claims reviewed. We selected regional offices from each of these three groups.
Through our review of the data and interviews with VA, we found these data to be sufficiently reliable for comparing approval rates at the different regional offices.

Accuracy of eligibility reviews: We reviewed quality assurance data, provided by VA officials, on the accuracy levels of TDIU eligibility decisions made at each regional office. We used quality assurance data for claims reviewed from October 1, 2013, through January 31, 2014. Most accuracy ratings fell between 95 and 100 percent. As a result, we selected regional offices with as much relative variation in accuracy ratings as possible, given the other selection criteria, by selecting a regional office with ratings outside the 95 to 100 percent range. We interviewed VBA and found these data to be sufficiently reliable for comparing accuracy rates at the different regional offices.

Geography: We selected regional offices in as many different VA regions as possible, given the other selection criteria, to capture variation in geographic location. We included regional offices in three of the four VA regional areas (Western, Central, Eastern, and Southern) in our review. Our selection did not include a regional office in the Southern region. See table 3 for details about the characteristics of each of the regional offices we visited.

Interviews and Discussion Groups with Regional Office Officials

To better understand how TDIU claims decisions are made and reviewed for accuracy, as well as the challenges associated with these duties, we interviewed officials in the office of Compensation Service, including quality assurance officials, regional office managers, quality review team (QRT) coaches, and QRT members. In addition, we conducted a total of 11 in-person discussion groups with 2 to 3 rating specialists each across five of the regional offices, after conducting initial interviews in the Denver regional office.
The 31 rating specialists who participated were selected by VBA and were intended to represent different experience levels with TDIU claim decisions and lengths of tenure at VA. During each discussion group, we followed an interview protocol to collect information on TDIU claim decisions. Specifically, we used the following steps to ask rating specialists about the top challenges in making TDIU claim decisions, the impact of those challenges, and possible solutions: (1) We asked the rating specialists to brainstorm about challenges they had experienced while making TDIU claim decisions. We documented all challenges mentioned and discussed any contradictory points. (2) We asked the rating specialists to independently identify, in any order, the five challenges from the brainstormed list they felt were the most significant. (3) As the rating specialists shared their independent lists, we tallied their responses and developed a list of the top five challenges, ensuring agreement of all the members of the discussion group. (4) Using this list of top challenges, we asked the rating specialists to brainstorm and share what they felt were the impacts of these challenges on their ability to make TDIU claim decisions and (5) to brainstorm possible solutions that would address the challenges. (6) We asked rating specialists to describe one or two claims decisions that they felt illustrated the challenges discussed.

Review of VA Files with TDIU Claims

Across the six regional offices we visited, we reviewed a total of 34 case files that contained TDIU claim determinations, resulting in both granted and denied benefits, made between April 2012 and April 2014. We reviewed at least five files from each regional office. We reviewed the claims files using a standardized checklist we developed from the procedural guidance and forms for TDIU claims.
We used the checklist to determine whether the required documents were included in the file and whether the rating specialist followed the guidance. We also collected information about the veteran and the procedures for reviewing TDIU claims. For example, we reviewed initial application forms and information; supporting documentation, including medical opinions, work histories from employers, disability ratings, and vocational rehabilitation services or Social Security Disability Insurance benefits received; the rationale for the claim determination; and continuation of benefit forms and reviews, when applicable. While we reviewed files in the Denver regional office on site, we reviewed the files from the other regional offices from a single location (VBA’s regional office in Providence, Rhode Island) using VA’s Veterans Benefits Management System database. To ensure an unbiased selection, we selected files with TDIU claims from a randomly selected sample of files and followed a methodology so that files from the different field offices were selected in the same way. For the files with claims determined at five of the regional offices, we selected files from a list, provided by VBA, of 200 randomly selected files composed of 40 TDIU claims from each regional office, 20 of which were granted and 20 denied. All 200 claims had been decided within the last 2 years. For our review of files in Denver, we selected files from a separate list of 100 files with claims from only the Denver regional office. We selected files by sorting the list by field office and then by approved and denied claims. We chose every other approved claim and every other denied claim for each regional office. If there was a problem with a file, we noted the problem and moved down the list, following the above procedure for the duration of the review. In total, we selected and reviewed 34 files, 17 with granted TDIU claims and 17 with denied claims, across the regional offices using this process.
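The alternating selection procedure described above can be sketched in code. This is a minimal illustration under stated assumptions, not GAO's actual tooling; the field names and sample data are hypothetical.

```python
# Hypothetical sketch of the file-selection procedure described above:
# sort claims into (office, decision) groups, then take every other
# claim in each group. Field names and sample data are illustrative.

def select_files(claims):
    """Pick every other approved and every other denied claim per office."""
    groups = {}
    for claim in claims:
        key = (claim["office"], claim["decision"])
        groups.setdefault(key, []).append(claim)
    selected = []
    for group in groups.values():
        selected.extend(group[::2])  # indices 0, 2, 4, ...
    return selected

# Example: 4 granted and 4 denied claims from one (hypothetical) office
sample = (
    [{"office": "Boston", "decision": "granted", "id": i} for i in range(4)]
    + [{"office": "Boston", "decision": "denied", "id": i} for i in range(4)]
)
picked = select_files(sample)  # 2 granted and 2 denied claims selected
```

Applied to the 200-file list (40 per office, 20 granted and 20 denied), this every-other rule would yield up to 10 granted and 10 denied candidates per office, with reviewers moving down the list when a file had a problem.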
The results of the file reviews are not generalizable to all 57 regional offices or all TDIU claims.

Review of Options to Revise TDIU Eligibility Requirements and Benefit Structure

In order to describe options that had been presented to revise the TDIU benefit and eligibility requirements, we conducted a literature search that included identifying relevant reports by disability compensation committees and research organizations. Our search covered reports from 2004 through 2014, using databases such as ProQuest, CQ.com, and the Defense Technical Information Center. We identified any options in the reports that either revised the TDIU benefit or the benefit’s eligibility requirements. We selected and summarized the options for revising TDIU eligibility requirements or benefit structure from the six reports, along with options presented in a Federal Register notice from 2001. Where similar options were found in more than one report, we summarized the common themes across the reports while also including details describing the proposed option that might be specific to one report’s presentation of the option. The six reports and the Federal Register notice where the options were presented are as follows:

Center for Naval Analyses, Final Report for the Veterans’ Disability Benefits Commission: Compensation, Survey Results, and Selected Topics (August 2007).

Congressional Budget Office, Options for Reducing the Deficit: 2014 to 2023 (November 2013).

Department of Veterans Affairs Advisory Committee on Disability Compensation, 2012 Report to the Secretary of Veterans Affairs (October 31, 2012).

Department of Veterans Affairs, Total Disability Ratings Based on Inability of the Individual to Engage in Substantially Gainful Employment, 66 Fed. Reg. 49,886 (Oct. 1, 2001), subsequently withdrawn by 70 Fed. Reg. 76,221 (Dec. 23, 2005).

Economic Systems, Inc. (Econsys), A Study of Compensation Payments for Service Connected Disabilities (September 2008).
Institute of Medicine of the National Academies, A 21st Century System for Evaluating Veterans for Disability Benefits (2007).

Veterans’ Disability Benefits Commission, Honoring the Call to Duty: Veterans’ Disability Benefits in the 21st Century (October 2007).

To identify the potential strengths and challenges related to implementing each option, we conducted semi-structured interviews with experts and representatives from veterans service organizations (VSO). The experts and VSO representatives were provided with a written description of each option, as well as definitions of potential strengths and challenges, in advance of the interviews. An option was considered to have potential strengths if it provided fair and equitable benefits for veterans or, for VA, if it allowed for administrative improvement or clarity in eligibility criteria. An option was considered to have potential challenges if implementation faced concerns (for example, among the veteran community) or impediments (for example, to VA). During the interviews, we obtained experts’ and VSO representatives’ views on the potential strengths and challenges of each option. We then categorized the responses according to their similarity. In order to provide an independent assessment, we did not obtain VA officials’ views on the challenges of implementing the proposed options. We identified the experts based on their contributions to the reports containing the options or their association with the issuing organizations. In addition, we identified the VSOs through our prior knowledge of their work on related topics and the organizations’ participation in congressional testimonies related to TDIU. Table 4 identifies the five experts and two VSO representatives, including their respective titles and professional affiliations.
Appendix II: Veterans Benefits Administration’s Approaches to Assess Disability Compensation Claims Decisions

Additional information is provided below on the two approaches Veterans Benefits Administration (VBA) uses to review Total Disability Individual Unemployability (TDIU) claims—the Systematic Technical Accuracy Review (STAR) and quality review team (QRT) reviews. In addition, information is provided on other approaches VBA has used to assess non-TDIU claims. Since fiscal year 1999, VBA has used its STAR review to measure the accuracy of disability compensation claims decisions. Through the STAR process, VBA reviews a stratified random sample of completed claims, and certified reviewers use a checklist to assess specific aspects of each claim. Specifically, for each of the 57 regional offices, completed claims are randomly sampled each month and the data are used to produce estimates of the accuracy of all completed claims. VA reports national estimates of accuracy from its STAR reviews to Congress and the public through its annual performance and accountability report and annual budget submission. VBA also produces regional office accuracy estimates, which it uses to manage the program. Regional office and national accuracy rates are reported in a publicly available performance database, the Aspire dashboard. Prior to October 2012, VBA’s estimates of accuracy were claim-based; that is, claims free of errors that affect veterans’ benefits were considered accurate and, conversely, claims with one or more errors that affect benefits were considered inaccurate. Beginning in October 2012, VBA also began using STAR data to produce issue-based estimates of accuracy that measure the accuracy of decisions on the individual medical conditions within each claim. For example, a veteran could submit one claim seeking disability compensation for five disabling medical conditions.
If VBA made an incorrect decision on one of those conditions, the claim would be counted as 80 percent accurate under the new issue-based measure. By comparison, under the existing claim-based measure, the claim would be counted as 0 percent accurate unless the error did not affect benefits when considered in the context of the whole claim. VBA uses these STAR review results to guide other quality assurance efforts. According to VBA officials, the agency has used STAR data to identify error trends associated with specific medical issues, which in turn were used to target efforts to assess consistency of decision-making related to those issues. In March 2012, VBA established QRTs, with one at each regional office. Although regional offices were previously responsible for assessing individual performance, QRTs represent a departure from the past because QRT personnel are dedicated primarily to performing these and other local quality reviews. QRTs review individual rating specialist performance across the parts of claims that an individual processed (individual quality reviews). QRTs also review claims the regional office is still processing (in-process reviews) to help prevent inaccurate decisions. In-process reviews aim to identify specific types of common errors, serve as learning experiences for staff members, and are not used to assess individual performance. Quality reviewers are also responsible for providing feedback to claims processors on the results of their quality reviews, typically as reviews are completed, including formal feedback from the results of individual quality reviews and more informal feedback from the results of in-process reviews. In addition, according to VBA, the focus of in-process reviews performed by QRTs has been guided by STAR review error trend data.
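The difference between the two measures can be shown with a small worked example. This is an illustrative sketch based on the definitions described above, not VBA's actual scoring code.

```python
# Illustrative comparison of the claim-based and issue-based accuracy
# measures for a single claim, per the definitions described above.

def issue_based_accuracy(issue_correct):
    """Share of individual medical-issue decisions that were correct."""
    return sum(issue_correct) / len(issue_correct)

def claim_based_accuracy(issue_correct, errors_affect_benefits=True):
    """A whole claim counts as accurate only if it is error-free, or if
    its errors do not affect benefits in the context of the whole claim."""
    if all(issue_correct) or not errors_affect_benefits:
        return 1.0
    return 0.0

# A claim with five conditions, one decided incorrectly:
decisions = [True, True, True, True, False]
print(issue_based_accuracy(decisions))  # 0.8 -> 80 percent accurate
print(claim_based_accuracy(decisions))  # 0.0 -> claim counted as inaccurate
```

The issue-based measure credits the four correct decisions; the claim-based measure treats the single benefit-affecting error as making the whole claim inaccurate.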
Originally, VBA established these reviews to help the QRTs identify and prevent claim development errors related to medical examinations and opinions, which it described as the most common error type. More recently, VBA has added two more common error types—incorrect rating percentages and incorrect effective benefit dates—to its in-process review efforts. VBA officials stated that they may add other common error types based on future STAR error analyses.

Other VBA Approaches

VBA has used other approaches to assess the consistency and accuracy of other types of non-TDIU compensation benefit claims. These approaches include:

Special accuracy reviews: VBA periodically conducts special reviews of claims decisions to assess accuracy in processing specific types of claims. For example, in our 2014 report on military sexual trauma, we reported on a special accuracy review VBA conducted in response to concerns that adjudicators were not making accurate decisions on post-traumatic stress disorder claims related to military sexual trauma. The VBA review found errors in 98 of 385 randomly selected claims that had been denied (about 25 percent). In particular, the review identified 61 cases where the adjudicators should have identified markers and ordered medical exams rather than denying the benefit. The VBA reviewers made several recommendations for improving the adjudication process, such as: (1) clarifying VA policies on markers, (2) building expertise in adjudicating such claims, and (3) developing training.

Inter-rater reliability (IRR) studies: Since fiscal year 2008, VBA has used IRR studies to assess the extent to which a cross-section of rating specialists across all regional offices agree on an eligibility determination when reviewing the entire body of evidence from the same claim. These studies are, however, time-intensive and review only one claim. The process was administered by proctors in the regional offices, and the results were hand-graded by national VBA staff.
Given the resources involved, IRR studies have typically been limited to 300 to 500 claims processors (about 25 to 30 percent), randomly selected from the regional offices.

Consistency questionnaires: As of 2013, VBA has used questionnaires as its primary means for assessing consistency of decision-making across individual rating specialists. A questionnaire includes a brief scenario on a specific medical condition for which a rating specialist must correctly answer several multiple-choice questions. The questionnaires are intended to test for understanding and interpretation of policy, and test takers receive feedback. The questionnaires are administered electronically through the VA Talent Management System, removing the need to proctor or hand-grade the tests, which has allowed VBA to significantly increase employee participation. For example, a recent consistency questionnaire was taken by about 3,000 claims processing employees, representing all employees responsible for rating claims. Further, VBA now administers these studies more frequently, increasing from about 3 per year to 24 per year. According to VBA officials, they plan to further expand the use of consistency studies from two questionnaires per month to six to eight per month, pending the availability of additional funding. Regional offices receive national results; regional office-specific results; and, since February 2014, individual staff results.

Appendix III: Number of All Total Disability Individual Unemployability (TDIU) Beneficiaries and New TDIU Beneficiaries by Age, Fiscal Years 2009-2013

Appendix IV: Comments from the Department of Veterans Affairs

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Brett Fallavollita (Assistant Director), Melissa Jaynes (Analyst-in-Charge), and David Reed made contributions to this report. Sheranda Campbell, David Chrisinger, A.
Nicole Clowers, Beryl Davis, Alexander Galuten, Kirsten Lauber, Sheila McCoy, Philip McIntyre, Lorin Obler, and Greg Whitney also contributed to this report.
VA generally provides Individual Unemployability benefits to disabled veterans of any age who are unable to maintain employment with earnings above the federal poverty guidelines due to service-connected disabilities. Because the population of veterans who receive these supplemental benefits has been growing, GAO was asked to review VA's management of these benefits. This report (1) examines age-related trends in the population of Individual Unemployability beneficiaries and benefit payments; (2) assesses the procedures used for benefit decision-making; and (3) describes suggested options for revising the benefit. GAO analyzed fiscal year 2009 through 2013 data provided by VA—the most recent years available; reviewed applicable federal laws, regulations, and program policies; visited six regional offices selected for their differing accuracy rates, workload, and geography; reviewed a non-generalizable sample of claims; and spoke with rating specialists, experts, and VSO representatives. The number of older veterans receiving Individual Unemployability benefits, a disability supplement, has been increasing, as has the total amount of benefit payments. In fiscal year 2013, 330,000 veterans received this benefit, which the Department of Veterans Affairs (VA) provides to disabled veterans of any age who are unemployable because of service-connected disabilities. From fiscal years 2009 through 2013, the most recent data available, there was a 22 percent increase in the number of veterans receiving these benefits, and a 73 percent increase in the subgroup of beneficiaries aged 65 and older. Moreover, among new beneficiaries in 2013, about 2,800 veterans were 75 and older, of whom more than 400 were 90 and older. These trends have given rise to questions about what constitutes “unemployability.” Only a small proportion, 4 to 6 percent, of beneficiaries had benefits discontinued during these years—about 70 percent of which were due to the death of the beneficiary.
During the 5-year study period, disability payments to those receiving Individual Unemployability—the base payment plus the supplement—increased by 30 percent (to $11 billion in fiscal year 2013). For that year, GAO estimated that the supplement alone accounted for $5.2 billion. VA's procedures do not ensure that Individual Unemployability benefit decisions are well supported. For example, contrary to federal internal control standards, the guidance on determining unemployability is incomplete for ensuring consistency. In discussion groups with GAO, VA's rating specialists said they disagreed on the factors they need to consider when determining unemployability, weighed the same factors differently, and had difficulty separating allowable from non-allowable factors. Some specialists said these challenges create the risk that two raters could examine the same evidence and reach opposite decisions on whether to approve or deny a claim. Also, VA's quality assurance approach primarily checks the procedural accuracy of decisions and does not ensure a comprehensive assessment of whether decisions are complete, accurate, and consistent. In addition, VA does not independently verify self-reported earnings information supplied by applicants and beneficiaries, although the agency has access to Internal Revenue Service data for this purpose. VA officials said they are waiting for a data system, expected in 2016, to conduct verifications. However, by postponing verification of self-reported earnings, VA risks awarding the benefit to ineligible veterans. Based on a review of the literature, GAO identified various options for revising eligibility requirements and the structure of the Individual Unemployability benefit. Six options focus on eligibility requirements, such as considering additional criteria when determining unemployability and applying an age cap of 65.
The seventh option would change the benefit structure by reducing payments as beneficiaries earn income in excess of the poverty threshold. Experts and representatives of veterans service organizations (VSO) that GAO interviewed identified the potential strengths of each option (such as improved decision accuracy) and potential challenges (such as increased need for fiscal and administrative resources). In addition, VA's advisory committee recommended in 2012 that the agency study age and require vocational assessments when weighing veterans' unemployability; VA agreed to study both, but has not yet taken action.
Background

In 1974, Congress passed ERISA to protect the rights and interests of participants and beneficiaries of private sector employee benefit plans. It outlines the responsibilities of employers and administrators who sponsor and manage these plans. ERISA also defines fiduciaries as persons who (1) exercise discretionary authority or control over the management of a private sector employee benefit plan or the plan’s assets, (2) render investment advice for a fee or other compensation with respect to plan assets, or (3) have any discretionary authority or responsibility to administer the plan. Under ERISA, fiduciaries are required to act prudently and exclusively in the interest of plan participants and beneficiaries. ERISA also describes the types of pension plans that private sector employers may sponsor, which include defined benefit and defined contribution plans. In 1980, defined benefit plans covered approximately 38 million participants, while some 20 million individuals participated in defined contribution plans. By 2002, the numbers had changed, with roughly 42 million participants covered by defined benefit plans and approximately 65 million participants in defined contribution plans. Figure 1 shows the shift in participation from defined benefit to defined contribution plans since 1980. According to experts, the fact that more workers are now covered by defined contribution plans rather than defined benefit plans is significant because the risk associated with providing retirement income is shifting toward workers and away from employers. Under defined benefit plans, the employer is typically responsible for funding the plan to cover promised benefits—accounting for any shortfalls due to market fluctuations, poor investment decisions, or changing interest rates.
In contrast, under a defined contribution plan, participants are generally responsible for ensuring that they have sufficiently saved for retirement and generally make their own investment decisions. As a result, much of the risk has moved from the employer to the plan participants. Today, with about one-fifth of Americans’ retirement wealth invested in mutual funds, pension and retirement savings plans have become more dependent on the investment services industry. These plans now include new investment vehicles and financial instruments that are more complex and require specialized knowledge and expertise for prudent decision making.

EBSA Shares the Responsibility for Enforcing ERISA with Other Agencies

EBSA shares responsibility for enforcing ERISA with the IRS and PBGC. EBSA enforces Title I of ERISA, which specifies, among other standards, certain fiduciary and reporting and disclosure requirements, and seeks to ensure that fiduciaries operate their plans in the best interest of plan participants. EBSA conducts investigations of plan fiduciaries and service providers, seeks appropriate remedies to correct violations of the law, and pursues litigation when it determines litigation is necessary, as shown in figure 2. IRS enforces Title II of ERISA, which provides, among other standards, tax benefits for plan sponsors and participants, including participant eligibility, vesting, and funding requirements. IRS audits plans to ensure compliance and can levy tax penalties or revoke tax benefits, as appropriate. In contrast, PBGC, under Title IV of ERISA, insures benefits for defined benefit pension plans when companies default on promised pension benefits. To do so, PBGC collects premiums from plan sponsors and administers payment of pension benefits in the event that these plans terminate without sufficient assets to pay all benefits accrued under the plan to date.
Finally, while SEC does not draw authority from ERISA, it is responsible under securities laws for regulating and examining entities registered with SEC, such as investment advisers, managers, and investment companies that often provide services to plans. Additional information on selected agencies’ authorities and enforcement practices is contained in appendix II. According to 2002 data, EBSA’s oversight authority covers approximately 3.2 million private sector pension and health benefit plans with assets over $5 trillion and covering more than 150 million participants. Of the 3.2 million plans, EBSA reported that approximately 730,000 are pension plans with assets totaling roughly $4.9 trillion and covering over 100 million participants. EBSA’s 385 frontline investigators are primarily responsible for overseeing these employee benefit plans. In contrast, IRS and SEC have oversight responsibility for a smaller number of entities. Specifically, IRS’s 389 agents conduct oversight for some 1.3 million pension, profit-sharing, and stock bonus plans, and the SEC’s 1,953 investigators and examiners oversee 17,337 registrants, such as investment advisers and investment companies. Table 1 shows the ratio of investigators, examiners, or agents to the number of plans and entities that EBSA, IRS, and SEC regulate. EBSA’s field offices conduct investigations to detect and correct violations of Title I of ERISA and related criminal laws. In fiscal year 2005, EBSA had roughly 7,800 ongoing investigations, of which approximately 3,400 were newly opened as a result of various source leads, such as participant complaints, computer targeting, and other agency referrals. EBSA closed about 4,000 investigations during that year. EBSA’s Participant Assistance staff supplements EBSA’s enforcement activities by helping plan participants obtain retirement and health benefits that have been improperly denied. 
In fiscal year 2005, this office conducted roughly 2,000 outreach events to educate participants, beneficiaries, plan sponsors, and members of Congress about pension plan rights and obligations, among other topics. In addition, during the same period, the office reported that its benefits advisers closed about 160,000 inquiries and complaints, some of which resulted in monetary recoveries. In those instances where a complaint was not informally resolved, EBSA officials said that the complaint was referred to the enforcement staff in the field offices for possible investigation. As a result of such referrals, EBSA data showed that its investigators closed almost 1,200 investigations in fiscal year 2005 with monetary results of $130.24 million. Additionally, EBSA’s Office of the Chief Accountant oversees employee benefit plans’ annual reporting and audit requirements and enforces those provisions through civil penalties under ERISA. Through these combined efforts, EBSA data indicate that the agency reviewed over 36,000 private sector pension plans in fiscal year 2005. Table 2 shows the number of plans investigated or contacted by each office. DOL’s Office of the Solicitor supports EBSA regional offices by litigating civil cases and providing legal support. In fiscal year 2005, the office litigated 178 of the 258 civil cases referred to it by EBSA. In addition, EBSA conducts criminal investigations in consultation with the U.S. Attorneys’ offices and, in many cases, conducts joint enforcement actions with other federal, state, and local law enforcement agencies. EBSA conducted about 200 criminal investigations in fiscal year 2005. As a result, over 100 plan officials, corporate officers, and pension plan service providers were indicted.
EBSA Has Made Improvements to Its Enforcement Program, but Challenges Remain

In 2002, we identified several weaknesses in EBSA’s management of its enforcement program, including the lack of a centrally coordinated quality review process, insufficient coordination among its investigators, a lack of data to assess the nature and extent of noncompliance, and limited attention to human capital management, despite the agency’s actions to strengthen the program in prior years. Since our 2002 review, EBSA has improved its enforcement program; however, several challenges remain. The agency has promoted coordination among regional investigators, implemented quality controls, and developed strategies to address its workforce needs. To promote compliance, EBSA has increased its educational outreach to plan participants, sponsors, and service providers, and increased participation in its voluntary correction programs. However, the agency has not fully addressed concerns from our prior reviews. Specifically, EBSA still has not (1) developed complete data on the nature and extent of plans’ noncompliance, (2) established a formal coordination protocol with SEC within its regional offices, and (3) formally evaluated the factors affecting staff attrition.

EBSA Has Made Some Progress in Improving Its Enforcement Program

In recent years, EBSA has addressed many of the concerns we raised in our 2002 review. As shown in table 3, such improvements include promoting coordination among regional investigators, implementing quality controls, and developing strategies to meet its workforce needs. As part of its workforce efforts, EBSA has recruited investigators with advanced skills in accounting, finance, banking, and law, which EBSA believes are required because of the technical aspects of ERISA and the changing nature of benefit plans.
As of September 2005, EBSA employees were among the most highly educated within DOL, and EBSA staff data indicated that investigators have wide-ranging skills and backgrounds similar to those of investigators at IRS and SEC. For example, EBSA reported that 46 percent of its investigators hold law degrees, with some of these staff also holding additional degrees or certificates in accounting or business administration as well as other subject areas. Also, EBSA reported that 27 percent of its investigators or auditors had undergraduate degrees in accounting, with several also having skills in forensic accounting or fraud examination. Several investigators and auditors had other advanced degrees, such as master’s degrees in business administration, law, and public policy, as well as backgrounds in securities, taxation, banking, insurance, and employee benefits. Recognizing a need for fraud examination skills, EBSA now includes a course on forensic accounting in its basic training of newly hired investigators, and EBSA data showed that the agency also sent many of its investigators to the Federal Law Enforcement Training Center over the last several years to take courses in fraud examination as well as money laundering and health care fraud. Since 2002, EBSA has also used several initiatives to recruit its staff. EBSA recruiters attend a variety of job fairs, college campuses, and other events to identify and contact applicants with necessary skills. Further, to provide national office directors and regional directors additional tools to recruit for all occupations, authority has been delegated to approve certain human capital flexibilities, such as advances in pay and payment of travel expenses for employment interviews. In addition to attending recruitment events, EBSA uses three principal programs to recruit students from law schools, business schools, and other specialized disciplines.
These programs are:

Student Career Experience Program (SCEP): designed for students to work in positions related to their academic field of study while enrolled in school. Upon graduation, interns may convert to full-time career employees. Since 2002, EBSA has employed roughly 100 SCEP participants. As of July 2006, EBSA reported that 28 students were participating in the program.

Student Temporary Employment Program (STEP): designed for the temporary employment of students ranging from a summer internship to a period generally not to exceed 1 year. According to officials, some STEP interns join the SCEP program after the summer internship ends. Since 2002, EBSA has employed 115 interns in the STEP program. As of July 2006, EBSA reported that it had 4 participants.

Federal Career Intern Program: a 2-year internship program that can result in conversion to career employment. EBSA just recently began using this program to recruit full-time employees who have recently obtained an undergraduate or graduate degree. According to EBSA, the program, which allows the agency to recruit students outside of the normal hiring process, is much faster and more streamlined, enabling EBSA to better target candidates. As of July 2006, EBSA reported that 24 students were participating in EBSA’s program and were not yet eligible for conversion.

Furthermore, DOL offers an agencywide Masters of Business Administration Fellows Program, which is used to recruit business school graduates. This is a 2-year rotational program, at the end of which fellows may be converted to career employees. As of 2006, 76 fellows had taken part in the program across all DOL agencies, including EBSA. In addition to addressing our prior concerns on the management of the enforcement program, EBSA has established formal criminal coordinator positions for each regional office and increased funds returned to participants through its assistance.
With regard to its criminal coordinators, EBSA created a new position in each regional office, modeled after its national office coordinator position, to facilitate relationships with law enforcement agencies at the regional level. The coordinator works with law enforcement agencies and prosecutors at all levels to improve the likelihood that criminal violations will be recognized and appropriately investigated. Regional office officials believe that the position expands their opportunities for criminal prosecutions. For example, one regional official said that if the U.S. Attorney's office did not believe it was cost-effective to prosecute an alleged violation, the regional coordinator could refer the case to the local district attorney's office for prosecution. Additionally, several regional office officials believed that the new position would help them better coordinate their criminal investigations, ultimately increasing criminal prosecutions. EBSA also continues to provide education to plan participants, sponsors, and service providers to promote compliance. EBSA's education program is designed to increase plan participants' knowledge of their rights and benefits under ERISA. EBSA anticipates that, through education, participants will become more likely to recognize potential problems and notify EBSA when issues arise. The agency also conducts outreach to plan sponsors and service providers, in part, about fiduciary responsibilities and obligations under ERISA. For example, EBSA's benefit advisers speak at conferences and seminars sponsored by trade and professional groups and participate in outreach and educational efforts in conjunction with other federal or state agencies. 
Some outreach activities include briefings to congressional offices, state insurance commissioners, and other federal, state, and community organizations; fiduciary compliance assistance seminars for employers, plan sponsors, and practitioners; and on-site assistance to dislocated workers facing job loss as a result of plant closure or layoffs. EBSA has also increased funds returned to participants through its assistance. For example, for fiscal year 2002, the Office of Participant Assistance reported that it had recovered approximately $49 million on behalf of participants. As of fiscal year 2005, the office reported that it had increased that amount to about $88 million. At the same time, EBSA has increased its enforcement results since 2002. According to EBSA data, in fiscal year 2002, for every dollar invested in EBSA, the agency’s investigators produced about $7.50 in financial benefits, or roughly $830 million in total monetary recoveries. As of fiscal year 2005, they were producing just over $12 for every dollar—a total of $1.6 billion. EBSA officials said that the agency has achieved these results, in part, because of recent program improvements and with relatively small increases in staff. Full-time equivalent (FTE) authorized staff levels increased from 850 in fiscal year 2001 to 875 FTEs in fiscal year 2006. As of August 2006, 385 of the 875 FTEs were frontline field investigators. In addition, EBSA has increased compliance through its Voluntary Fiduciary Correction Program (VFCP) and its Delinquent Filer Voluntary Compliance (DFVC) Program. The VFCP allows plan officials to disclose and correct certain violations without penalty. 
The program is designed to protect the financial security of workers by encouraging employers and plan officials to voluntarily comply with ERISA, and it allows those potentially liable for some fiduciary violations under ERISA to apply for relief from enforcement actions and certain penalties, provided they meet specified criteria and follow program procedures. Specifically, plan officials can correct 19 types of transactions, such as the remittance of delinquent participant contributions and participant loan repayments to pension plans. If the regional office determines that the applicant has met the program's terms, it will issue a "no action" letter to the applicant—avoiding a potential civil investigation and penalty assessment. As a result of the program, in fiscal year 2005, EBSA reported that $7.4 million was voluntarily restored to employee benefit plans. Furthermore, the DFVC program is designed to encourage plan administrators to comply with ERISA's filing requirements. According to EBSA data, the program has increased the number of unfiled annual reports received from about 3,000 in fiscal year 2002 to over 13,000 in fiscal year 2005.

EBSA Still Does Not Estimate Overall Industry Compliance, Regularly Confer With SEC Staff on Industry Trends, and Address Retention of Investigators
Despite improvements in its enforcement efforts, EBSA has not completely addressed several weaknesses we previously identified. Specifically, EBSA has not systematically estimated the nature and extent of pension plans' noncompliance, a fact that limits the agency's ability to assess overall industry compliance with ERISA and measure the effectiveness of its enforcement program. In 2002, we recommended that EBSA take steps to develop a cost-effective strategy for assessing the level and type of noncompliance among employee benefit plans. 
In response, EBSA stated that it had established its ERISA Compliance Assessment Committee and had embarked on a statistical study to gauge health plans' noncompliance with the provisions of Part 7 of ERISA, dealing with group health plan requirements. Although EBSA has conducted, and continues to generate, some statistical studies to measure noncompliance in the pension and health care industries, its pension compliance data remain limited, focusing on information such as the timeliness and full remittance of employee contributions to defined contribution plans. However, as of June 2006, EBSA officials could not provide an estimated time frame for results of its timeliness and remittance study. Although EBSA has taken steps, the agency still did not know the nature and extent of noncompliance within the pension industry, and its ERISA Compliance Assessment Committee had not yet planned any additional pension compliance baseline studies. EBSA's limited noncompliance information may also prevent EBSA from effectively measuring the overall performance of its enforcement program. The Government Performance and Results Act of 1993 requires that executive agencies demonstrate effectiveness through measurable, result-oriented goals. According to the Office of Management and Budget, DOL has selected output measures as proxies to compensate for the difficulty in measuring overall performance. Since our 2002 review, EBSA's enforcement program continues to use performance measures that generally focus on how well the agency is managing and using its resources—such as the number of specific investigations closed with results—rather than on its overall impact on the security of employee benefits. 
Some regional office officials we visited raised concerns that the current measures and expected increases to EBSA's performance goals in the coming years would likely result in an inability to review and conduct more complex cases, given each office's limited resources and the need to close cases with results. For example, one of EBSA's performance goals is to close 69 percent of its civil investigations with results in 2006, with planned increases to that goal of 3 percentage points per year until 2008—to 75 percent. Some regional officials stated that meeting the revised performance goal encourages a focus on cases that are more obvious and easily corrected, such as those involving employee defined contribution plans, rather than on investigations of complex and emerging violations where the outcome is less certain and may take longer to attain. Without data to assess the extent and nature of noncompliance, as we recommended in 2002, EBSA will continue to lack effective measures for assessing the overall effectiveness of its enforcement program. In a 2005 testimony, we also noted that EBSA needed to better coordinate with the SEC on issues related to the securities and pension industries. Although the two agencies periodically share information, we found that EBSA has not yet established a systematic procedure by which its investigators in all its regional offices can regularly confer with their respective SEC regional office. Under the securities laws, SEC is subject to confidentiality restrictions with respect to information it can disclose to EBSA pertaining to an ongoing investigation, even if the information pertains to possible violations of ERISA. For example, if SEC investigates a securities trading firm and has reason to believe that information discovered during the investigation might be of interest to EBSA investigators, SEC may alert EBSA to its findings. 
Likewise, EBSA investigators can alert SEC to information discovered during an ERISA investigation that might be of interest to SEC. However, unlike EBSA, SEC may not share documentation associated with its findings unless EBSA submits a written request for information that, if approved, allows access to any evidence SEC has obtained during the course of its investigation. In an attempt to expedite the information-sharing process, some, but not all, EBSA regional offices have established informal working groups of investigators that regularly meet with SEC investigators to exchange information. For example, one region has established an "SEC Group," which regularly meets with SEC investigators to develop case information and potential leads. In contrast, another region stated that it has very little contact with SEC and only learns about SEC investigations through the media. While not all EBSA regional and district offices may have the same need to interact with SEC because of the nature of the private sector companies within their jurisdiction, in offices where no working group exists, EBSA may not learn of an SEC investigation involving the same entity unless that information is disclosed to the public, thereby limiting its awareness of potential violations. Further, EBSA has not developed initiatives to ensure retention of its investigative staff, despite its improvements in human capital management. In 2002, we reported that EBSA had one of the highest attrition rates within DOL. Since our review, we found that EBSA's overall attrition rate remained high, and in recent years, attrition rates for EBSA's investigators appear to have risen. Table 4 shows the attrition rates of EBSA investigators, including students who occupy investigator positions in the GS-1801 series, as compared with the attrition rates of similar groups. 
Specifically, data suggest that EBSA's attrition rates for investigators have climbed since 2002, and as of 2005, EBSA investigators were leaving at twice the rate of other federal investigators. In fact, as of fiscal year 2005, EBSA had lost 102 investigators since fiscal year 2002 for various reasons, such as resignations and retirement. For example, in fiscal year 2005, EBSA lost 52 investigators, of whom 34 left for employment outside of the federal government. While this may be due in part to EBSA employing temporary students as entry-level investigators, between fiscal year 2002 and fiscal year 2005, 58 investigators left EBSA for employment outside of the federal government. Officials in several regional offices we visited, particularly in major urban areas, said they had difficulty retaining newly hired investigators because of insufficient compensation, and some believed that these staff used EBSA as a training ground for the private sector employee benefit plan industry, where they could earn higher salaries. For example, in the San Francisco regional office, officials reported that the investigator attrition rate has averaged about 13 percent per year, and as of April 2006, officials reported that 50 percent of their staff had less than 3 years of experience. While other agencies may face similar attrition problems in such urban areas, EBSA has taken limited steps to evaluate the impact such attrition has on its operations. Officials from EBSA's Office of Program Planning, Evaluation and Management reported that the agency dropped earlier considerations for retention strategies, such as student loan repayment and retention bonuses, in view of data suggesting that investigators usually leave for much higher salaries elsewhere. 
Although EBSA has employed exit surveys, it has limited processes for evaluating why its investigators leave, and it has not evaluated the extent to which other retention initiatives may be useful. While EBSA may be able to recruit new investigators and to fill vacant positions, the continued turnover requires additional resources for training new staff. Further, the relative inexperience of new staff may have an adverse effect on EBSA's enforcement efforts.

Unlike Other Agencies, EBSA Does Not Conduct Routine Compliance Examinations or Comprehensive Risk Assessments
Although EBSA regularly targets violations, it does not conduct routine compliance examinations or comprehensive risk assessments to direct its enforcement practices, as do other federal agencies that share similar responsibilities. Rather, the agency relies on various sources for case leads, such as outside complaints and informal targeting of plans, to focus its enforcement efforts. While these leads are important, in addition to undertaking such activities, agencies such as IRS and SEC have developed routine compliance programs to detect violations and identify emerging trends that may warrant further examination by enforcement staff. Moreover, SEC and PBGC have dedicated staff to perform broad risk assessments by analyzing information from multiple sources in order to anticipate, identify, and manage risks to investors and to the pension insurance system.

EBSA Does Not Conduct Routine Compliance Examinations
EBSA does not conduct routine compliance examinations—evaluations of a company's books, records, and internal controls—limiting its ability to detect and deter violations. Rather than conduct such examinations, EBSA relies on several sources for case leads. For example, EBSA uses participant complaints and other agency referrals as sources of investigative leads and to detect potential violations. 
Moreover, EBSA identifies leads, in part, through informal targeting efforts by investigators, primarily using data reported by plan sponsors on their Form 5500 annual returns. While these sources are important, such methods are generally reactive and may reveal only those violations that are sufficiently obvious for a plan participant to detect or those disclosed by plan sponsors on their Form 5500s, and not those violations that are possibly more complex or hidden. Nevertheless, EBSA officials raised concerns that conducting such examinations would divert resources from EBSA's current enforcement practices. In contrast, IRS and SEC use such examinations in an effort to detect violations or identify weaknesses that could lead to violations. IRS's Office of Employee Plans administers a compliance examination program to detect violations of tax laws related to pension plans. According to agency officials, IRS dedicates eight staff members for selecting entities for examinations, and IRS uses a risk-based process for selecting and scoping such examinations. If a violation is detected during an examination, IRS can subsequently levy penalties and excise taxes on the violators. In fiscal year 2005, the Office of Employee Plans closed 8,230 examinations. Similarly, SEC's Office of Compliance Inspections and Examinations (OCIE) detects violations of securities laws through its examination program. OCIE examines advisers, investment companies, broker-dealers, and other registered entities to evaluate their compliance with the federal securities laws, to determine if they are operating in accordance with disclosures made to investors, and to assess the effectiveness of their compliance control systems. SEC conducted 2,056 examinations of investment advisers and investment companies in fiscal year 2005. IRS also uses examinations in an attempt to identify emerging areas of noncompliance and analyze compliance risk levels among specific types of pension plans. 
IRS plans to use this information in its risk-based examination selection process, similar to recommendations that we made to EBSA in 2002. As part of this effort, IRS, which has a similar resource level to EBSA, is in the process of conducting examinations to develop compliance baselines for 79 market segments it identified based on business sector and plan type. For example, IRS is developing separate baseline compliance levels for 401(k) plans, defined benefit plans, employee stock ownership plans, and profit-sharing plans in the construction industry. IRS officials expect the baselines to be completed by the end of fiscal year 2007. Likewise, SEC, which has fewer entities to oversee and more resources than EBSA, attempts to use its examination program to identify emerging trends. In addition to its other examination types, SEC conducts sweep examinations—compliance examinations that focus on specific industry issues among a number of registrants—to remain informed of securities industry developments. For example, SEC initiated a sweep examination of several pension plan service providers to identify conflicts of interest between the providers and the plan sponsors. Furthermore, because of the number of EBSA investigators relative to employee benefit plans, EBSA’s presence in the pension industry is limited, therefore decreasing the possibility that a plan may be investigated. A compliance examination program, in part, is designed to establish a presence by regularly reviewing entities’ operations, thereby likely creating a deterrent to noncompliance. For example, IRS officials said that they believe that their program deters violations from occurring because they select many plans for review each year based on established risk criteria. Because fiduciaries are unsure when IRS’s agents may review their activities, IRS officials believe that the agency has created an environment that encourages compliance. 
Likewise, EBSA officials believe that their voluntary compliance programs are also successful at deterring violations, because employers and fiduciaries want to disclose and correct violations instead of being investigated and prosecuted. However, given the ratio of employee benefit plans to investigators, EBSA's limited presence may create an incentive for fiduciaries or plan sponsors to take compliance lightly, even though EBSA attempts to deter violations through its correction programs and by publicizing its enforcement results.

EBSA Has Not Dedicated Staff to Formalized Risk Assessment
Although EBSA's enforcement strategy emphasizes targeting violations and protecting plan participants at risk, EBSA has no staff dedicated to conducting broad risk assessments of multiple sources of information, including, but not limited to, investigations, academic research, compliance studies, and other market data. While the agency attempts to identify areas of risk through its efforts in establishing its national priorities and projects, this effort ultimately relies on regional investigators to identify developing problems—generally in the course of their existing investigations. EBSA's Strategic Enforcement Plan directs EBSA to establish national investigative priorities to ensure that its enforcement program focuses on areas critical to the well-being of employee benefit plans. On the basis of these priorities, EBSA annually develops national and regional projects based on unique or problematic issues identified within a region's geographic jurisdiction in accordance with its strategic plan. Depending on the prevalence of a specific problem across regions, it can be elevated to a national project. 
For example, EBSA has recently implemented a national project focusing on pension consulting services, called the Consultant/Advisor Project, which is aimed at identifying plan service providers, particularly investment advisers, who may have a conflict of interest that could affect the objectivity of the advice they provide their pension plan clients. However, because EBSA relies primarily on identifying risk through its investigations and targeting, which offer no systematic, analytic process for anticipating new types of violations before they become pervasive, its risk assessment approach may be limited. Unlike EBSA, some federal agencies, such as SEC and PBGC, have dedicated staff to analyzing information from multiple sources to assess external risk within their regulated industries. Once risks are identified, the agencies develop and focus their enforcement strategies to mitigate and manage them. In 2004, SEC established the Office of Risk Assessment (ORA) to coordinate the SEC’s risk management program. While relatively small, ORA serves as the agency’s risk management resource and works with other SEC departments to identify and manage risks. According to ORA officials, the office’s five staff identify and assess areas of concern through expert analysis, such as new and resurgent forms of fraud and illegal activities. For instance, ORA worked in conjunction with OCIE to develop a database to collect and catalog such issues within the securities industry in order to evaluate risk to investors. OCIE then uses this database to select cases for its examination program. Also, PBGC has dedicated one employee—supported by staff in various departments—for risk assessment within its Department of Insurance Supervision and Compliance. 
PBGC officials believe this has strengthened its operational capability to identify and monitor risks to its pension insurance program, including macroeconomic factors, industry-specific risks, and matters relating to specific plan sponsors. PBGC officials also stated that these efforts play a role in PBGC's financial reporting processes, including valuing its benefit liabilities and determining whether liabilities associated with distressed plans should be classified as liabilities in PBGC's financial statements, as required by generally accepted accounting principles.

Statutory Obstacles May Limit EBSA's Ability to Oversee Pension Plans Effectively
Certain statutory obstacles may limit EBSA's effectiveness in overseeing private sector pension plans. First, the restrictive legal requirements of the 502(l) penalty under ERISA have limited EBSA's ability to assess penalties and restore plan assets. According to EBSA officials, the penalty discourages parties from quickly settling claims of violations, thereby impeding the restoration of plan assets. Further, EBSA officials stated that in some instances, the penalty can also reduce the amount of money restored to plan participants when a plan sponsor is unwilling to or cannot fully restore assets and pay the penalty. Second, investigators' access to timely plan data for targeting new case leads is limited by ERISA filing deadlines. As a result, the data can be several years old. In fact, in some cases, investigators were relying on data up to 3 years old to target potential violators. While EBSA is constrained by ERISA's filing requirements, the agency has taken steps to address processing delays in an effort to provide more timely data to investigators and to improve its targeting efforts. 
Restrictive Statutory Requirements Can Impede the Restoration of Plan Assets
Restrictive legal requirements have limited EBSA's ability to assess penalties against fiduciaries or other persons who knowingly participate in a fiduciary breach, and the penalty provision under Section 502(l) of ERISA has delayed and in certain instances prevented the restoration of funds to pension plans. Under ERISA, EBSA must assess penalties based on monetary damages, or more specifically, the restoration of plan assets. Section 502(l) of ERISA requires EBSA to assess a 20 percent penalty against a fiduciary who breaches a fiduciary duty under, or commits a violation of, Part 4 of Title I of ERISA, or against any other person who knowingly participates in such a breach or violation, and the penalty is 20 percent of (1) the "applicable recovery amount," (2) the amount of any settlement agreed upon by the Secretary, or (3) the amount ordered by a court to be paid in a judicial proceeding instituted by the Secretary. However, the penalty can only be assessed against fiduciaries or knowing participants in a breach by court order or settlement agreement. Therefore, if there is no settlement agreement or court order, or if someone other than the fiduciary or knowing participant returns plan assets, EBSA cannot assess the penalty. In those instances where EBSA does pursue a formal settlement, officials stated that the penalty can discourage parties from quickly settling claims of violations, because violators almost always insist on resolving all of EBSA's claims in one settlement package, including both the amount to be paid to the plan and the amount paid in the form of a penalty. In many of these cases, violators have contested the penalty, in turn delaying settlement and impeding restoration of plan assets. 
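The arithmetic of the 20 percent assessment described above can be sketched in a few lines. This is our own illustrative simplification, not statutory language; the function name and the shortfall scenario (a sponsor whose remaining funds cannot cover the full loss) are ours, based on the report's description of how the penalty interacts with restoration.

```python
# Illustrative sketch of an ERISA section 502(l) assessment (our own
# simplification, not statutory language). The penalty is 20 percent of
# the applicable recovery amount; when a sponsor's remaining funds fall
# short of the full loss, the plan recovers only 80 percent of what is
# available, and the penalty share goes to the U.S. Treasury.

def assess_502l(available_funds: float) -> tuple[float, float]:
    """Return (amount restored to the plan, penalty paid to Treasury)."""
    penalty = 0.20 * available_funds
    restored = available_funds - penalty
    return restored, penalty

# Hypothetical: a $1,000,000 breach, but the sponsor has only $900,000 left.
restored, penalty = assess_502l(900_000)
print(f"Restored to plan: ${restored:,.0f}; penalty: ${penalty:,.0f}")
```

Run against the hypothetical figures, the split is $720,000 restored to the plan and $180,000 paid as a penalty, leaving participants $280,000 short of the full loss.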
In addition, officials stated that the penalty can, in some instances, reduce the amount of money restored to plan participants when a plan sponsor is unwilling to or cannot fully restore assets and pay the penalty. Currently, EBSA has limited discretion to waive or reduce the 20 percent penalty in situations where the penalty reduces the funds returned to the plan. Because ERISA requires the penalty to be paid to the U.S. Department of the Treasury, if insufficient funds exist to restore plan assets and pay the penalty, plan assets may not be completely restored. For example, if a plan sponsor is found to have breached its fiduciary duty and the amount involved is $1,000,000 but the sponsor has only $900,000 left in its possession, the amount returned to the plan participants will be $720,000 (80 percent), and a penalty of $180,000 (20 percent) will be paid to the U.S. Treasury.

Investigators' Access to Timely Data Limited by ERISA Filing Deadlines
Under ERISA, plan sponsors have up to 285 days to file their annual Form 5500 reports, limiting EBSA investigators' access to timely information necessary for targeting new case leads. In addition, as we reported in 2005, processing delays and the time necessary to correct errors can result in a further delay of up to 120 days—increasing the potential delay to over 400 days after a plan's year end. As a result, in 2006, EBSA investigators were generally relying on information from 2003 and 2004 to target violations. Because of these delays, fiduciaries may have more time to misappropriate plan assets, causing harm to participants for long periods before violations are identified. Unlike IRS, which supplements its 5500 reviews with risk-based compliance examinations, EBSA relies primarily on the 5500 data maintained in its ERISA Data System (EDS) for performing its targeting efforts. 
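The staleness window cited above (a 285-day filing deadline plus up to 120 days of processing and error correction) can be checked with simple date arithmetic. The plan year end below is an arbitrary example date of our choosing, not one drawn from the report.

```python
# Worst-case age of Form 5500 data at the time it becomes usable for
# targeting, using the deadlines cited in this report: a 285-day filing
# window plus up to 120 days of processing and error correction.
# The plan year end is an arbitrary example date.
from datetime import date, timedelta

plan_year_end = date(2003, 12, 31)
filing_deadline = plan_year_end + timedelta(days=285)
data_usable = filing_deadline + timedelta(days=120)

age_days = (data_usable - plan_year_end).days
print(f"Data usable on {data_usable}: {age_days} days after plan year end")
```

The worst case works out to 405 days, consistent with the report's "over 400 days" figure.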
According to officials, EDS provides EBSA investigators with about 30 pre-designed, standard programs as well as an ad hoc query capability to target pension plans that are perceived to have an increased likelihood of violations. For example, investigators stated that, historically, some construction contractors have established pension plans for workers involved with a particular project and then abandoned the plans at the project's completion without fully funding them. In this scenario, investigators can use EDS's ad hoc query capability to obtain data on such plans. However, because of untimely information, plans could already be abandoned before EBSA investigators identified these types of violations. While EBSA is constrained by ERISA's filing requirements, the agency has taken steps to address processing delays in an effort to obtain more timely information to improve its targeting efforts. In its fiscal year 2007 appropriation request, DOL requested funding for an updated electronic filing system—known as EFAST2—with the goal of expediting the Form 5500 filing process in two ways. First, EFAST2 is designed so that it will not accept Form 5500 data submissions unless they pass a series of edit checks. EBSA officials stated that the change should reduce errors and processing times. Second, EFAST2 should capture data from prior year filings in a manner that officials believe will be more conducive to analysis than the current ERISA Filing Acceptance System (EFAST). This new system is intended to replace the current process, in which approximately 98 percent of Form 5500s are filed using paper forms, with the remainder filed electronically through EFAST. EBSA officials stated that the current paper filings take more than three times longer to process than electronic filings and have nearly twice as many errors. 
To address these issues, EBSA recently issued a regulation requiring the electronic filing of all Form 5500s for plan years beginning on or after January 1, 2008. EBSA officials believe that the new requirements and system features will provide EBSA with more timely data.

Conclusions
EBSA is a relatively small agency facing the daunting challenge of safeguarding the retirement assets of millions of American workers, retirees, and their families. Since our 2002 review, EBSA has taken a number of steps to strengthen its enforcement program and leverage its resources in an effort to implement its enforcement strategy. The agency has directed the majority of its resources toward enforcement and has decentralized its investigative authority to the regions, allowing its investigators more flexibility to focus on issues pertinent to their region. Yet despite these improvements, EBSA's ability to protect plan participants against the misuse of pension plan assets is still limited, because its enforcement approach is not as comprehensive as those of other federal agencies and generally focuses only on information derived from its own investigations. While it has employed some proactive measures, such as computerized targeting of pension plan documents, EBSA remains largely reactive in its enforcement approach, thus potentially missing opportunities to address problems before trends of noncompliance are well established. Currently, EBSA does not have the institutional capacity to comprehensively identify and evaluate evidence of potential risk to participants before emerging violations become pervasive. Although EBSA evaluates risk through the development of its annual national and regional projects, the agency does not conduct routine compliance examinations, which could add a key piece to the foundation on which to base its broad risk analyses. 
Further, the agency does not systematically draw on outside sources of information, such as academic studies and industry experts, nor does it formally assess risk on an ongoing basis, as similar agencies do. As a result, EBSA is restricted in its ability to detect new and emerging trends or weaknesses that may occur throughout the entire pension industry. However, even if EBSA were to conduct such examinations and collect additional information, it would not be in a position to identify overarching problems from these data, because it does not have a dedicated workforce for such efforts. We understand that dedicating staff for the purpose of identifying risks may require trade-offs among EBSA's competing priorities. Given that EBSA investigators are responsible for overseeing roughly 3.2 million private pension and health benefit plans, such trade-offs must be considered carefully and may involve the inclusion of other offices within the agency. Nevertheless, a formal risk assessment function can be conducted with modest staff allocations, as demonstrated by the PBGC and SEC risk assessment functions. Furthermore, if EBSA officials believe that these trade-offs would adversely affect its enforcement operations, the agency has the option of seeking additional resources from Congress, if necessary. However, such a request should only occur after the agency has explored and achieved all available efficiencies within its existing resource allocations. Whatever approach is ultimately taken, it is critical that EBSA adopt a more assertive enforcement approach, or a portion of the pension industry will, in essence, continue to lack effective oversight. While EBSA is considering such options, it is vital that the agency further explore opportunities to strengthen its existing enforcement program. 
Although EBSA and SEC periodically coordinate efforts on multiple issues, the agencies must explore opportunities to identify questionable activities through a more systematic coordination effort throughout their regional offices. While we recognize that not all EBSA regional and district offices may have the same need to interact with SEC, access to information that SEC has obtained about potential violations could save investigative resources for both agencies and may also expedite the prosecution of fiduciaries who are violating the law. EBSA must also explore all possibilities to retain skilled staff, so that it does not have to spend its limited resources on training new staff and can minimize the loss of institutional experience. Additionally, even though EBSA has taken steps to address the Form 5500 processing delays, EBSA investigators' access to timely plan information necessary for targeting new case leads is still limited by ERISA's filing deadline. Moreover, opportunities to expedite settlements and restore funds to pension plans may be lost because EBSA has little authority, under current law, to waive a mandatory penalty when it prevents fully restoring assets to participants. At a time when the retirement of millions of Americans is imminent, it is more important than ever to take all possible measures to protect their pension assets.

Matter for Congressional Consideration

To strengthen DOL's ability to protect pension plan assets, Congress should consider amending section 502(l) of ERISA to give DOL greater discretion to waive the civil penalty assessed against a fiduciary or other person who breaches or violates ERISA in instances where doing so would facilitate the restoration of plan assets.
Recommendations for Executive Action

To improve overall compliance and oversight, we recommend that the Secretary of Labor direct the Assistant Secretary of Labor, EBSA, to:

- evaluate the extent to which EBSA could supplement its current enforcement practices with strategies used by similar enforcement agencies, such as routine compliance examinations and dedicating staff for risk assessment;
- conduct a formal review to determine the effect that ERISA's statutory filing deadlines have on investigators' access to timely information and the likely impact if these deadlines were shortened;
- direct the Office of Enforcement to establish, where appropriate, formal SEC coordination groups in the regional offices, similar to those already in place in some EBSA regions; and
- direct the Office of Program Planning, Evaluation and Management to evaluate the factors affecting staff attrition and take appropriate steps, as necessary. Such an effort might include a market-based study to assess comparable private sector compensation within specific geographic locations and include recommendations for modifying pay structures, if appropriate.

Agency Comments and Our Evaluation

We obtained written comments on a draft of this report from the Acting Assistant Secretary for the Employee Benefits Security Administration, Department of Labor, and from the Director of Enforcement for the Securities and Exchange Commission. EBSA's and SEC's comments are reproduced in appendix III and appendix IV, respectively. EBSA and SEC, as well as IRS and PBGC, also provided technical comments, which were incorporated in the report where appropriate. EBSA agreed with three of the four recommendations we made to the Secretary of Labor to strengthen EBSA's enforcement program.
EBSA disagreed with our recommendation to evaluate the extent to which the agency could supplement its current enforcement practices with other enforcement strategies, such as conducting routine compliance examinations and dedicating staff for risk assessment. While EBSA agreed that it should continue to evaluate its enforcement practices on an ongoing basis, the agency stated that it would be premature to emulate the SEC and IRS models because GAO did not assess the effectiveness of these models. However, our report does not suggest that EBSA copy the IRS, PBGC, or SEC models; rather, we suggest that EBSA consider incorporating enforcement strategies that are standard practice at many federal financial regulators, such as the federal banking regulators that constitute the Federal Financial Institutions Examination Council, as well as at IRS and SEC. Further, we have highlighted the potential benefit of these enforcement strategies in prior GAO work. We recognize and would expect that EBSA's implementation of these standard practices could vary from other regulatory models, given the nature of its responsibilities. We continue to believe that these practices could have merit for EBSA and therefore deserve further consideration. In addition, EBSA commented that our recommendation to evaluate the extent to which it could supplement its investigations with routine compliance examinations appeared to be premised on the assumption that "some number of completely random investigations would have a significant deterrent effect and could better enable [it] to identify emerging areas of noncompliance." We do not believe that completely random investigations are appropriate, nor do we recommend that EBSA conduct them. Rather, EBSA should consider developing a compliance examination program that uses risk-based criteria to target larger or higher-risk pension plans with the goal of examining these plans more frequently.
Based on these criteria, EBSA could select a sample of plans to review each year, which may identify emerging areas of noncompliance with modest resource allocations. EBSA noted that it has conducted routine compliance examinations in the past as part of its investigative process, an action that it concluded resulted in a low number of cases with violations. We believe that examinations and investigations are two distinct enforcement practices. Specifically, compliance examinations should not only detect potential violations and deter noncompliance, but also identify mismanagement or questionable practices that may warrant additional scrutiny by investigators. Investigations are generally conducted in response to possible violations, which can be identified through compliance examinations and other sources. We believe that, when used together, routine compliance examinations and investigations can provide a better enforcement capability than investigations alone. EBSA commented that the process it uses to identify risk has many of the same characteristics as the risk assessment process described in our report, and that EBSA investigators gather valuable information from employee benefit professionals. Our report recognizes that EBSA evaluates risk through its efforts in annually establishing its national priorities and projects by reviewing its investigations. However, we believe that EBSA's risk assessment efforts fall short of practices used by other agencies because the agency lacks staff dedicated to continuously monitoring the private sector pension industry and bases its current risk assessment approach primarily on its investigative findings. According to GAO's Standards for Internal Control, agencies should establish an assessment of the risks the agency faces from both internal and external sources.
For example, agencies should have mechanisms in place to anticipate, identify, and react to risks presented by changes, including economic, industry, and regulatory changes, that can affect the achievement of agency goals and objectives. Although EBSA has taken some steps to do this, certain patterns of risk may go undetected because EBSA does not have staff dedicated to evaluating risk across the entire industry, even though such an effort would not require extensive resources as our report highlights. If EBSA were to supplement its existing enforcement efforts with staff dedicated to continuously reviewing information from multiple sources, such as its investigators’ interviews with employee benefits professionals, findings by other agencies, compliance studies, and academic research, the agency could better anticipate, identify, and react to risk as it emerges, rather than after established patterns of risk are detected during its annual planning process. We continue to believe that by relying primarily upon the identification of risks through its investigations and the existing targeting process, some emerging trends or abuse could go undetected. As we agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Labor, the Commissioner of the IRS, the Chairman of the SEC, and other interested parties. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-7215. Key contributors are listed in appendix V. 
Appendix I: Scope and Methodology

To determine the steps that the Employee Benefits Security Administration (EBSA) has taken in recent years to enforce and promote Employee Retirement Income Security Act of 1974 (ERISA) compliance, we collected and documented information on EBSA's enforcement strategy, operations, and human capital management practices. We reviewed EBSA's efforts to address recommendations from our prior work, focusing on the agency's management of its enforcement program. To document the management of EBSA's enforcement program, we collected and reviewed EBSA's policies, such as its Strategic Enforcement Plan, Enforcement Manual, and regional Program Operating Plans. In addition, we obtained EBSA's enforcement results for fiscal years 2001-2005. EBSA maintains these results in its Enforcement Management System. This system was designed to support not only strategic policy decisions, but also day-to-day management of investigator inventories and activities. To verify the reliability of EBSA's enforcement results data, we interviewed officials from EBSA's Office of Technology and Information Services and corroborated the data with system documentation and the systems that produced the data. We reviewed the data for obvious inconsistency errors and completeness. From this review, we determined that the EBSA-supplied data were sufficiently reliable for the purposes of this report and account for EBSA's enforcement results. We also used data from the 2002 and 2004 waves of the Health and Retirement Study to examine retirement income by source at the median because of the presence of extreme outliers. The rank order of Social Security and pensions and annuities is the same when evaluated at the mean or median. We also interviewed officials from the Department of Labor's (DOL) Office of the Solicitor and Office of Inspector General, as well as EBSA's Office of Enforcement, Office of Participant Assistance, and Office of the Chief Accountant.
In addition, we selected and visited EBSA's regional and district offices in Atlanta, Boston, Chicago, Kansas City, Philadelphia, San Francisco, Seattle, and Washington, D.C., where we interviewed EBSA field office management, regional solicitors, staff, and investigators. We selected these offices based on geographic location and the number and types of investigations conducted. Further, we met with representatives from professional organizations that represent entities regulated by EBSA and plan participants and conduct audits of pension plans. In addition, we collected and examined information on EBSA enforcement initiatives, the results of its prior internal reviews, and studies performed by the DOL Office of Inspector General (OIG). To determine the statutory restrictions that limit the sharing of information between EBSA and the Securities and Exchange Commission (SEC), we interviewed EBSA investigators, managers, and attorneys. We also interviewed officials at SEC and reviewed the applicable securities laws that govern the sharing of information related to SEC investigations. Finally, we reviewed past GAO work on SEC and consulted the teams within GAO that regularly review SEC operations. Moreover, to verify claims by regional offices that offices were experiencing high rates of attrition, we analyzed data from the Office of Personnel Management's Central Personnel Data File (CPDF). Using these data, we identified the newly hired investigators and followed them over time to see how many left EBSA. We identified all new hires for fiscal years 2000 through 2005 by using personnel action codes for accessions and career conditional positions. Next, we determined whether these individuals had personnel activity indicating they had separated from EBSA. Separations (attritions) included resignations, retirements, terminations, and deaths. For more on the reliability of the CPDF, see GAO's report on the topic.
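The cohort-tracking approach described above, identifying a cohort of new hires from personnel action codes and then counting how many later separated, can be sketched in a few lines. The record layout and action-code values below are hypothetical stand-ins for illustration only, not the actual CPDF schema.

```python
# Illustrative sketch of the new-hire cohort tracking described above.
# Action codes and record fields are hypothetical, not real CPDF values.

ACCESSION_CODES = {"100", "101"}                 # hypothetical new-hire codes
SEPARATION_CODES = {"300", "301", "302", "350"}  # resignation, retirement,
                                                 # termination, death (hypothetical)

def cohort_attrition(actions):
    """actions: list of (employee_id, fiscal_year, action_code) tuples.
    Returns (number of cohort members who separated, cohort size)."""
    hires = {emp for emp, fy, code in actions
             if code in ACCESSION_CODES and 2000 <= fy <= 2005}
    separated = {emp for emp, fy, code in actions
                 if code in SEPARATION_CODES and emp in hires}
    return len(separated), len(hires)

records = [
    ("A1", 2001, "100"), ("A2", 2002, "100"), ("A3", 2003, "101"),
    ("A1", 2004, "300"),  # A1 resigned
    ("A3", 2005, "301"),  # A3 retired
]
left, hired = cohort_attrition(records)
print(left, hired)  # 2 of the 3 new hires separated
```

The same scan over real personnel actions would simply use the actual accession and separation nature-of-action codes in place of the placeholders.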
To determine the overall attrition rates for EBSA investigators (not just new hires), we analyzed data from the CPDF for fiscal years 2000 to 2005. For each fiscal year, we counted the number of employees with personnel actions indicating they had separated from EBSA. We included investigators in training, who are classified as GS-1801 investigators, because these individuals draw down on EBSA's overall full-time equivalents and play an important part in its hiring process. We divided the total number of separations for each fiscal year by the average of the number of permanent employees in the CPDF as of the last pay period of the fiscal year before the fiscal year of the separations and the number of permanent employees in the CPDF as of the last pay period of the fiscal year of separations. To place the attrition rates for EBSA investigators in context, we compared EBSA's attrition rates to those for employees in other occupations and agencies (EBSA employees, all other DOL employees, and all other employees in the executive branch of the federal government). To identify how EBSA practices compare to those of other agencies, we interviewed officials from SEC, the Internal Revenue Service (IRS), and the Pension Benefit Guaranty Corporation. We selected these agencies given their responsibilities in regulating different segments of the private sector pension industry. To identify the types of authorities and practices that these agencies used, we collected and reviewed documentation from ERISA, the Securities Exchange Act of 1934, the Investment Advisers Act of 1940, and the Investment Company Act of 1940, as well as prior GAO reports. However, we did not evaluate the effectiveness of these agencies' compliance examination, enforcement, or risk assessment programs. From this review, we conducted a comparative analysis to identify what types of authorities and practices other agencies might have that EBSA did not; a detailed comparison can be found in appendix II.
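The rate calculation described above amounts to dividing the year's separations by the average of the permanent head counts at the end of the prior fiscal year and the end of the current fiscal year. A minimal sketch, using hypothetical numbers rather than actual CPDF counts:

```python
def attrition_rate(separations, start_count, end_count):
    """Attrition rate as described above: separations during the fiscal
    year divided by the average of the permanent head counts at the end
    of the prior and current fiscal years."""
    return separations / ((start_count + end_count) / 2)

# Hypothetical figures for illustration only.
rate = attrition_rate(separations=12, start_count=110, end_count=90)
print(f"{rate:.1%}")  # 12 / ((110 + 90) / 2) = 12 / 100 = 12.0%
```

Averaging the two year-end counts avoids overstating or understating the rate when the workforce grows or shrinks during the year.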
Furthermore, we identified statutory obstacles within ERISA that limit EBSA's ability to enforce ERISA—the inefficient nature of Section 502(l) of ERISA and the lack of timely information for investigators resulting from annual reporting deadlines. To identify these obstacles, we interviewed several former and current EBSA investigators, reviewed past GAO and DOL OIG reports on ERISA enforcement, and collected and reviewed various documents to corroborate the testimonial evidence obtained. Specifically, to determine EBSA's authority to waive a penalty that, in certain situations, reduces the amount of assets returned to plan participants, we interviewed EBSA investigators and other officials who assess and collect the penalty. We also reviewed the relevant section of ERISA, which requires the Secretary of Labor to assess the penalty under Section 502(l). We obtained and reviewed information regarding the number of times the penalty was assessed and the total amount collected as a result of the penalty. Finally, we obtained and reviewed court decisions that involved the assessment of the 502(l) penalty. Furthermore, to determine the timeliness of the information—provided on the Form 5500—that EBSA investigators use for targeting purposes, we interviewed EBSA investigators and management to identify the ways in which 5500 data are used to identify potential violations. We also reviewed a past GAO report that thoroughly reviewed the Form 5500 and the processes that contribute to the length of time between a plan's year end and the time when the information is available for use by investigators. Additionally, we obtained and reviewed system documentation on the ERISA Data System (EDS)—the system that EBSA uses to store and query the 5500 information.
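The way the mandatory 502(l) penalty can reduce the amount restored to a plan can be illustrated with simple arithmetic. The sketch below assumes a fiduciary who can pay only a fixed total and treats the penalty as a flat 20 percent of the amount restored; this is a deliberate simplification, since the statute's "applicable recovery amount" has a more involved definition.

```python
def settlement_split(total_payable):
    """Split a fixed total a breaching fiduciary can pay between restored
    plan assets and a mandatory 20% penalty on the restored amount
    (recovery + 0.20 * recovery = total, so recovery = total / 1.20).
    Simplified illustration of the 502(l) dynamic, not the statute itself."""
    recovery = total_payable / 1.20
    penalty = recovery * 0.20
    return recovery, penalty

recovery, penalty = settlement_split(120_000)
print(recovery, penalty)  # 100000.0 20000.0
```

Under this simplification, a fiduciary able to pay $120,000 restores only $100,000 to the plan, with $20,000 diverted to the penalty, which is the restoration shortfall that discretionary waiver authority is meant to address.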
Finally, we interviewed EBSA personnel who are involved in developing EFAST2, a new electronic filing system that will purportedly enable all 5500s to be filed electronically for reporting years beginning on or after January 1, 2008.

Appendix II: Comparison of Selected Federal Agencies' Authorities, Enforcement Practices, Results, and Resources

The Employee Benefits Security Administration, the Internal Revenue Service, and the Securities and Exchange Commission are responsible for enforcing laws designed to protect pension plan participants and other securities investors. A comparison of the agencies' authorities, responsibilities, and enforcement practices shows that EBSA lacks certain authorities compared to those of other agencies and uses different practices. Title I of ERISA provides the Secretary of Labor, through EBSA, the authority to investigate and enforce the requirements and standards of Title I. Civil penalties of up to $1,100 per day may be assessed for certain violations of reporting and disclosure obligations, and a 20 percent penalty may be assessed on an applicable recovery amount related to a fiduciary breach. There are a number of fairly particularized penalties under ERISA that EBSA can impose. Unlike IRS and SEC, EBSA does not have the enforcement authority to bar, suspend, or take any effective action against a plan auditor for substandard audits of employee benefit plans, because plan auditors are not considered fiduciaries under ERISA. Title II of ERISA, which amended the Internal Revenue Code (the Code) to parallel many of the Title I rules, is administered by IRS. The principal responsibility under the Code for IRS is to determine that plans meet certain tax qualification requirements as specified in the Code. IRS has broad authority to revoke certain tax benefits to plan sponsors if they do not meet these requirements.
IRS can also assess certain penalties for failure to file or furnish certain information required to be filed with the agency pertaining to plans. SEC, under federal securities laws, has broad authority to enforce and regulate the sale of securities and disclosure of information concerning these securities. SEC has authority, under its regulations, to maintain fair and orderly securities markets and requires specified disclosures of corporate financial statements. SEC, through civil penalties and fines, may enforce the securities laws to ensure compliance and may impose penalties ranging from $5,000 to $500,000 per violation, or in some cases the amount of pecuniary gain to the defendant as a result of the violation. Also, if SEC finds substandard audit work, it has the authority to bar, censure, or suspend auditors responsible for such work.

Regulated entities: SEC oversees 17,337 total registered securities entities; the pension plan population comprises 724,000 Form 5500 filers, 221,000 Form 5500-EZ filers, and 353,000 non-5500 filers.

Key enforcement offices:
- EBSA: Office of Enforcement (OE); Office of Participant Assistance (OPA); Office of the Chief Accountant (OCA)
- IRS: Office of Employee Plans (EP), including Examinations, Rulings and Agreements (R&A), and the Employee Plans Compliance Unit (EPCU)
- SEC: Office of Compliance Inspections and Examinations (OCIE); Office of Risk Assessment (ORA); Enforcement

Enforcement practices:
- EBSA: responding to participant complaints (OPA); investigations (OE); voluntary compliance programs (OE, OCA); reporting and disclosure audits (OCA)
- IRS: compliance examinations and establishing compliance baselines for risk assessment (Examinations); centralized case selection process (Examinations); "soft contact" compliance programs (EPCU); voluntary compliance programs and determinations (R&A)
- SEC: investigations (Enforcement); compliance examination programs (OCIE); formalized risk assessment (ORA)

Appendix III: Comments from Employee Benefits Security Administration

Appendix IV: Comments from Securities and Exchange Commission

Appendix V: GAO Contacts and Acknowledgments

GAO Contact

Acknowledgments

The following team members made key contributions to this report: David Lehrer,
Assistant Director; Jason Holsclaw; David Eisenstadt; Joe Applebaum; Kevin Averyt; Susan Bernstein; Sharon Hermes; Annamarie Lopata; Jean McSween; Michael Morris; Lisa Reynolds; Roger Thomas; Dayna Shah; and Gregory Wilmoth.

Related GAO Products

Mutual Fund Industry: SEC's Revised Examination Approach Offers Potential Benefits, but Significant Oversight Challenges Remain. GAO-05-415 (Washington, D.C.: August 2005).

Private Pensions: Government Actions Could Improve the Timeliness and Content of Form 5500 Pension Information. GAO-05-491 (Washington, D.C.: June 2005).

Employee Benefits Security Administration: Improvements Have Been Made to Pension Enforcement Program but Significant Challenges Remain. GAO-05-784T (Washington, D.C.: June 2005).

Mutual Fund Trading Abuses: Lessons Can Be Learned from SEC Not Having Detected Violations at an Earlier Stage. GAO-05-313 (Washington, D.C.: April 2005).

Securities and Exchange Commission Human Capital Survey. GAO-05-118R (Washington, D.C.: November 2004).

Pension Plans: Additional Transparency and Other Actions Needed in Connection with Proxy Voting. GAO-04-749 (Washington, D.C.: August 2004).

Mutual Funds: Additional Disclosures Could Increase Transparency of Fees and Other Practices. GAO-04-317T (Washington, D.C.: January 2004).

Answers to Key Questions about Private Pension Plans. GAO-02-745SP (Washington, D.C.: September 2002).

Private Pensions: IRS Can Improve the Quality and Usefulness of Compliance Studies. GAO-02-353 (Washington, D.C.: April 2002).

Pension and Welfare Benefits Administration: Opportunities Exist for Improving Management of the Enforcement Program. GAO-02-232 (Washington, D.C.: March 2002).

Securities and Exchange Commission: Human Capital Challenges Require Management Attention. GAO-01-947 (Washington, D.C.: September 2001).

Financial Services Regulators: Better Information Sharing Could Reduce Fraud. GAO-01-478T (Washington, D.C.: March 2001).
The Department of Labor's (DOL) Employee Benefits Security Administration (EBSA) enforces the Employee Retirement Income Security Act of 1974 (ERISA), which sets certain minimum standards for private sector pension plans. On the basis of GAO's prior work, the Senate Committee on Health, Education, Labor and Pensions asked GAO to review EBSA's enforcement program. Specifically, this report assesses (1) the extent to which EBSA has improved its compliance activities since 2002; (2) how EBSA's enforcement practices compare to those of other agencies; and (3) what obstacles, if any, affect ERISA enforcement. To do this, we reviewed EBSA's enforcement strategy and operations, and interviewed officials at EBSA, the Internal Revenue Service (IRS), and the Securities and Exchange Commission (SEC), among others. In March 2002, we identified weaknesses in EBSA's enforcement program, despite the agency's actions to strengthen it. Since that time, EBSA has, among other things, promoted coordination among regional investigators and increased participation in its voluntary correction programs, as we recommended. EBSA also has recruited investigators with advanced skills in accounting, finance, banking, and law that officials believe are necessary due to ERISA's technical complexity. Yet some weaknesses identified in 2002 remain. Specifically, EBSA still has not adequately assessed the nature and extent of ERISA noncompliance, even though it has taken steps to do so. Without these data, EBSA is not positioned to focus its resources on key areas of noncompliance, nor does it have adequate measurable performance goals to evaluate its impact on improving industry compliance. We also found that while some regional offices did routinely attempt to confer with their respective regional office of the SEC--the agency that oversees many of the same pension service providers under the securities laws--for case leads or to consider trends in potential pension violations, others did not.
Lastly, EBSA's overall attrition rates remain high, with many investigators leaving for employment outside the federal government, yet EBSA has taken limited steps to evaluate the effect such attrition has on its operations. EBSA does not conduct routine compliance examinations and broad, ongoing risk assessments to focus its enforcement efforts like other agencies. Rather, investigators rely on various sources for case leads, such as participant complaints, agency referrals, and computer targeting. While such sources are important, this approach generally limits EBSA to leads discerned by participants and other government agencies or those disclosed by plan sponsors, not more complex or hidden violations. Further, EBSA has not established a comprehensive risk assessment function. Instead of broad risk assessments, EBSA's annual risk evaluations are generally limited to a risk analysis of frontline investigators' caseloads. In contrast, in addition to such activities, IRS and SEC incorporate routine compliance programs in an attempt to detect violations and identify emerging trends that may warrant enforcement action. Also, the SEC and Pension Benefit Guaranty Corporation have dedicated staff to regularly analyze information from various sources, such as investigations and academic research. Certain statutory obstacles also limit EBSA's oversight of private sector pension plans. First, restrictive legal requirements have limited EBSA's ability to assess penalties against fiduciaries and can impede the restoration of plan assets. DOL officials said that the 502(l) penalty under ERISA discourages quick settlement and can reduce the amount of funds returned to pension plans. Second, EBSA investigators' access to timely information necessary for identifying potential violations is limited by ERISA's filing requirements.
Even though EBSA is taking steps to address processing delays, in some cases in 2006 investigators were relying on information up to 3 years old to target new case leads.
Background

DOD acquires, operates, and maintains a vast array of physical assets, ranging from aircraft, ships, and land vehicles to buildings, ports, and other facilities. Corrosion is an extensive problem that affects these assets and has an impact on military funding requirements, readiness, and safety. It is estimated that the direct costs to DOD of corrosion on military equipment and infrastructure are between $10 billion and $20 billion annually. In our prior work, we reported in July 2003 that, although the full impact of corrosion could not be quantified because of the limited amount of reliable data that DOD and the military services had available, corrosion has a substantial impact in terms of cost, readiness, and safety on military equipment and facilities. Moreover, we found that DOD and the military services did not have an effective management approach to mitigate and prevent corrosion. As a result, we recommended, and DOD concurred, that it should develop a departmentwide strategic plan with clearly defined goals, measurable outcome-oriented objectives, and performance measures. In recognizing the extent of DOD's corrosion problem, Congress enacted legislation as part of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 that directed the Secretary of Defense to designate an officer or organization to be responsible for the prevention and mitigation of corrosion of military equipment and infrastructure. The legislation also required the Secretary to develop a long-term strategy to reduce corrosion and the effects of corrosion on military equipment and infrastructure, and submit the report to Congress no later than 12 months after the date of the enactment of the Act. The mandate required that the strategy include, among other things, policy guidance, performance measures and milestones, and an assessment of the necessary personnel and funding to accomplish the long-term strategy.
The mandate also required that DOD include an assessment of these elements for four specific initiatives. These initiatives are: (1) expansion of the emphasis on corrosion prevention and mitigation within DOD to include coverage of infrastructure; (2) application uniformly throughout DOD of requirements and criteria for the testing and certification of new corrosion-prevention technologies for equipment and infrastructure with similar characteristics, similar missions, or similar operating environments; (3) implementation of programs, including supporting databases, to ensure that a focused and coordinated approach is taken throughout DOD to collect, review, validate, and distribute information on proven methods and products that are relevant to the prevention of corrosion of military equipment and infrastructure; and (4) establishment of a coordinated research and development program for the prevention and mitigation of corrosion for new and existing military equipment and infrastructure that includes a plan to transition new corrosion prevention technologies into operational systems. To prepare a strategy, DOD established a corrosion policy and oversight task force. The task force is located in the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and reports to the Principal Deputy Under Secretary for Acquisition, Technology, and Logistics. The task force consists of seven working groups responsible for addressing seven corrosion focus areas: (1) policy and requirements; (2) impact, metrics, and sustainment; (3) science and technology; (4) communication and outreach; (5) training and doctrine; (6) facilities; and (7) specifications or standards and product qualification. According to DOD officials, these seven areas were identified to address the congressional concerns that led to the mandate and the issues discussed in our 2003 report.
These officials said that because the key elements of the mandate (funding and personnel, performance measures and milestones, and policy guidance) are comprehensive, they each apply one way or another to the seven focus areas in the strategy.

Corrosion Strategy Shortcomings May Hinder Successful Implementation

While the long-term corrosion strategy generally addresses the mandate's requirements, several shortcomings are likely to hamper the successful implementation of DOD's long-term corrosion strategy. The strategy (1) does not identify the level of funding and personnel resources needed to tackle corrosion problems; (2) does not provide outcome-oriented performance measures and a baseline study to measure progress; and (3) strengthens existing policy guidance, but some improvements can be made. In addition, we recommended in our July 2003 corrosion report, and DOD concurred with our recommendation, that a long-term strategy should include elements compatible with the Government Performance and Results Act of 1993. Among these elements were the level of resources needed to accomplish the strategy's goals and objectives and performance measures, such as the expected return on investment and realized net savings of prevention projects, that show progress toward achieving the strategy's objectives.

Strategy Does Not Identify Specific Funding and Personnel Resources

While DOD's corrosion strategy generally addresses the issue of funding, it does not include any estimates of the specific dollar amounts that are needed for its near- or long-term implementation. According to the strategy, the newly formed Corrosion Policy and Oversight task force will develop inputs to the Future Years Defense Program based on corrosion requirements and projects. DOD corrosion officials told us, however, that funding estimates were not included in the strategy because DOD and the military services are still in the process of determining the requirements.
The officials said they expect to have firm estimates by December 2004. In a separate study during the preparation of the strategy, however, DOD's corrosion task force developed a preliminary schedule of funding requirements for corrosion reduction efforts. These estimates projected that DOD and the military services would need a total of about $1.9 billion in departmentwide corrosion prevention and mitigation resources for fiscal years 2004 through 2009. DOD corrosion officials said that the task force's figures represent an initial attempt to estimate DOD's and the military services' funding needs. Table 1 shows the task force's estimated funding requirements for corrosion prevention and mitigation efforts for both military equipment and infrastructure for the period from fiscal year 2004 through fiscal year 2009. The task force's estimates indicated that the services would need about $74.4 million in fiscal year 2004 for corrosion prevention and mitigation projects, but this funding has not been allocated or obligated. The task force identified 93 projects that had high potential returns on investment and were ready to be undertaken immediately. These projects included, for example, the installation of sensors to monitor fuel tanks and pipes for corrosion and the use of corrosion-inhibiting lubricants for avionics equipment on military aircraft. Corrosion officials told us that the $74.4 million was not included in DOD's fiscal year 2004 budget request because the task force developed the estimate too late for it to be incorporated in the request. Corrosion officials said they hoped to obtain funding that would become available during fiscal year 2004, but, as of April 2004, DOD and the services had not allocated or obligated these funds. The task force also estimated that the services would need about $312 million for equipment and infrastructure corrosion projects in fiscal year 2005.
However, DOD Comptroller officials told us that the services included only $27 million, less than 10 percent of the projected amount, for departmentwide corrosion prevention and mitigation projects in their fiscal year 2005 budget request. To fund these projects, DOD Comptroller officials approved a budget change of $27 million from a special project designed to counter threats to the Civil Reserve Fleet and other aircraft to the services' operation and maintenance accounts ($9 million each for the Army and Air Force, $7 million for the Navy, and $2 million for the Marine Corps). DOD corrosion officials told us that they are using these service accounts because DOD does not have an account that is dedicated to departmentwide corrosion reduction. These officials also said that, after the funds are appropriated, they plan to issue a letter of instruction to the services requiring them to obtain approval from DOD's corrosion office for the use of these funds. Of the $27 million, DOD corrosion officials said they expect to use $24 million for corrosion projects (e.g., rinse facilities for the services' helicopters and other aircraft and temporary shelters for military equipment and vehicles), $2.5 million to begin a corrosion impact baseline study, and $500,000 for the corrosion task force's operating expenses. DOD corrosion officials told us that, while the $27 million falls far short of the amount needed to fully implement the strategy, it represents the first time that DOD expects to use funds for corrosion reduction on a departmentwide basis, and it demonstrates DOD's commitment to augment the funding resources that have previously been under the purview of the military services. DOD Comptroller officials told us that, in future fiscal years, corrosion reduction efforts would likely continue to be funded on a year-to-year basis by program offsets, such as those used for fiscal year 2005.
They said they eventually expect that departmentwide funding will no longer be needed as the military services assume a greater role in funding their own corrosion reduction projects. Comptroller officials said that the services have the knowledge and expertise to manage their own corrosion control projects and, therefore, are in a much better position to identify and allocate funding for these efforts. However, DOD corrosion officials said that the services are not in a position to know which corrosion projects have the best potential to provide departmentwide benefits and, furthermore, that these projects are not well coordinated within and among the military services. DOD's corrosion officials said that the corrosion reduction strategy may continue to be underfunded because of the lack of an effective long-term funding mechanism that would better ensure that corrosion reduction projects have sustained funding over a period of years. At present, the corrosion prevention program is being supported piecemeal through budget change proposals or offsets. Corrosion officials told us that with a long-term funding mechanism dedicated to departmentwide corrosion prevention and mitigation, the program might be able to secure a commitment for funding these projects in future years. Such a mechanism could also fund projects that crosscut the services and that have the greatest potential for cost savings. Corrosion officials said that they would prefer to have a long-term funding mechanism, such as a program element, but the DOD Comptroller does not think that this is necessary at this time. As we reported in July 2003, the corrosion mitigation program may continue to be underfunded because DOD and the military services continue to give corrosion prevention a lower priority than other requirements. According to DOD corrosion officials, corrosion reduction projects must compete with other operation and maintenance programs.
Because DOD and the military services give higher priority to projects that show immediate results, they have limited funding for corrosion reduction efforts whose benefits may not be apparent for many years. Corrosion officials told us that one of the biggest challenges to getting needed funding is to change DOD and military service personnel attitudes—from thinking that money spent on corrosion prevention detracts from other projects to realizing that it saves money in the long run. According to DOD corrosion officials, if DOD and the services do not request more funding for corrosion prevention projects, DOD may lose or delay the opportunity to realize savings amounting to billions of dollars in avoidable maintenance costs for military equipment and facilities now and in the future. According to corrosion officials, the average potential return on investment for a corrosion prevention project is about 10 to 1, with some projects showing a return as high as 80 to 1, and with the savings realized about 5 years after funding begins. DOD corrosion officials said that this means, for example, that if DOD invests $500 million in a corrosion project today, it could realize a potential net savings of about $4.5 billion 5 years from now. In terms of personnel resources, the strategy generally provided an assessment of the personnel necessary to manage the corrosion program effectively in DOD and the services, but it did not identify the level of personnel resources needed for implementation. The strategy noted the establishment of an Office of Corrosion Policy and Oversight that is responsible for developing and implementing the corrosion strategy and specified that the office would have a director. DOD corrosion officials told us the office also includes a deputy director and an engineer and that these positions are temporary.
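The return-on-investment arithmetic the officials cited above can be restated as a short sketch. The `net_savings` helper below is hypothetical, not a DOD model; it simply applies the cited ratio, subtracting the original investment from the implied gross return:

```python
def net_savings(investment: float, roi_ratio: float) -> float:
    """Gross return implied by the ROI ratio, minus the original investment."""
    return investment * roi_ratio - investment

# At the cited average ROI of about 10 to 1, a $500 million investment
# yields a $5.0 billion gross return, or about $4.5 billion net savings.
print(net_savings(500e6, 10))  # 4500000000.0
```

The same arithmetic at the high-end 80-to-1 ratio would imply proportionally larger net savings, though the report cites only the 10-to-1 example.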
The strategy also indicated that a corrosion prevention and control working group, consisting primarily of corrosion professionals from DOD, would provide support for the corrosion office. DOD corrosion officials said these individuals are not permanently assigned to the office but serve on a part-time basis. These officials added that, because the strategy was recently established, DOD and the military services have had little time to determine the number of personnel needed to implement it. These officials told us that the requirements would likely be minimal and that they expect to have a firmer estimate by December 2004. The strategy does not identify the specific amount of funding or personnel needed to move ahead with the four initiatives specified in the congressional mandate. While the strategy includes descriptions of military equipment and facilities projects that address these four areas in varying ways, it states that these projects require an assessment of the funding and other resources needed to support them. DOD corrosion officials told us that they plan to systematically evaluate each project and that this assessment will include determining the resources needed to implement the effort. Lack of Outcome-Based Performance Measures and Baseline Study Hampers Tracking Progress and Setting Priorities While DOD's corrosion strategy includes performance measures and milestones, they are not the outcome-oriented metrics that are needed to successfully monitor the department's progress in mitigating corrosion and its impacts. Instead, the strategy contains output-oriented metrics that measure the number of program activities.
For example, DOD plans to measure progress toward achieving the strategy's goals by counting the number of major acquisition programs that have developed corrosion prevention plans, tracking the number of injuries related to corroding equipment or facilities, and recording the number of maintenance personnel enrolled in corrosion-mitigation training modules. By contrast, an outcome-oriented performance metric would allow DOD to determine how much corrosion prevention projects have reduced maintenance costs for Navy aircraft carriers, decreased failure rates for the Army's 155-millimeter medium towed howitzer, or decreased fuel pipeline ruptures at Air Force bases—all within a certain time frame. In addition, the development of meaningful performance metrics will be hampered until a baseline study of the costs and the extent of corrosion problems departmentwide is completed. In our July 2003 report, we indicated that the lack of reliable data made it difficult to adequately assess the overall impact of the corrosion problem. A baseline study would identify the cost of corrosion on military equipment and facilities across the services as well as corrosion's impact on military personnel safety and operational readiness. Such a study would document where corrosion problems exist, identify their causes, and prioritize them according to their relative severity. However, while the long-term strategy acknowledges the critical importance of developing a baseline of corrosion costs, including those related to safety and readiness, DOD does not plan to complete such a baseline until 2011. DOD corrosion officials told us they plan to allocate $2.5 million of the $27 million provided for fiscal year 2005 corrosion-related projects to begin such a study.
DOD corrosion officials told us that the task force estimated that it would take an additional $1.25 million for each of the next 6 fiscal years (2006 through 2011) to complete the study, for a total cost of $10 million. They said that it would take that long primarily because of the limited funding available for the strategy, which has forced them to stretch out funding for the baseline over a period of several years. The officials also said that the study would take some time to complete because of data reliability issues, the lack of consistency in corrosion data within and among the military services, and the incompatibility of information systems that contain the data. Without a corrosion baseline, DOD will not be able to develop adequate performance metrics to measure—or report on—its initial progress toward reducing corrosion and its impacts. Furthermore, DOD will not have an overall picture of the extent of corrosion problems, making it difficult to effectively identify areas that are most severely impacted by corrosion and that require high-priority attention and resources. While DOD’s corrosion strategy includes some performance measures and milestones for the four initiatives, the metrics are not the results-oriented performance measures needed to successfully implement the strategy. Strategy Strengthens DOD’s Corrosion Mitigation Policy Guidance but Could Be Improved As part of the long-term corrosion strategy, DOD strengthened its policy guidance for corrosion prevention and control activities, but there are opportunities to build on these improvements. The new guidance explicitly calls for the consideration of corrosion prevention and control planning during the earliest stages of the acquisition process for military weapon systems and military infrastructure programs; earlier guidance did not single out the need for such planning. 
DOD also included the need to consider corrosion prevention and control in an existing guidebook for weapons systems program managers. While the strategy contains a policy memorandum that sets up a review process for corrosion-related issues for major weapon systems programs (e.g., the Joint Strike Fighter), it does not extend this review to non-major weapon systems (e.g., the Torpedo Defense System Program) and infrastructure programs. The guidance directs the corrosion prevention and control working group to regularly review the adequacy of the corrosion prevention plans of all weapon system programs subject to Defense Acquisition Board review. If the working group identifies an issue, it will bring the issue to the board's attention. Furthermore, the policy memorandum states that the Acting Under Secretary of Defense for Acquisitions, Technology, and Logistics will personally evaluate the corrosion plans for programs subject to board review. According to DOD corrosion officials, the guidance did not extend this review to the non-major weapons programs, which are the responsibility of the individual military services. The corrosion officials said this was done so that the services could retain flexibility in managing their own programs. Military service officials told us that they have not established a corrosion prevention plan review process for their programs because the policy memorandum is relatively new, and they prefer to wait to see how the process works before they establish a similar review process. However, these service officials and DOD officials said that they recognize that all programs, both major and non-major weapon systems and infrastructure, experience significant corrosion impacts and that all of their corrosion prevention plans would benefit from a review process. In addition, DOD's new corrosion strategy does not include any corrosion planning or review requirements for the Chairman, Joint Chiefs of Staff's Focused Logistics Capabilities Board.
However, Joint Chiefs of Staff officials said they will include corrosion prevention planning in the board’s sustainability assessments of military weapon systems. DOD corrosion officials told us that this effort by the Joint Chiefs of Staff would support the strategy and enhance DOD’s overall corrosion reduction programs. While the strategy provides general policy guidance, it does not specifically provide policy guidance for the four initiatives. Conclusions By focusing attention on the extensive and costly problem of corrosion and its debilitating impact on military equipment and facilities, DOD’s new long-term corrosion strategy is a step in the right direction. However, because the strategy falls short of providing the basic elements of an effective management plan, DOD’s ability to implement it successfully remains at risk. Because of the strategy’s limited assessment of funding and personnel needs, lack of a baseline study, and weak performance measures, it is not certain that DOD’s corrosion prevention and mitigation efforts will be adequately funded, monitored, or thoroughly evaluated. Without a sufficient assessment of the funding and personnel resources required to reduce the effects of corrosion, Congress does not have the information it needs to make informed, corrosion-related funding decisions in the future. In addition, if DOD and the services do not adequately fund corrosion prevention efforts in the near term, they will lose or delay the opportunity to realize billions of dollars in avoidable maintenance costs over the long term. They will also face increasing degradation in the safety and readiness of military equipment and personnel. Furthermore, without establishing a departmentwide corrosion baseline, DOD cannot reliably estimate its overall resource needs, determine which ones have the highest priority, and track and measure its progress toward meeting these needs. 
Moreover, without good results-oriented performance metrics, DOD cannot adequately measure its progress in reducing the impact of corrosion. Finally, without expanding its policy guidance to require a review of all corrosion prevention planning, DOD will not be able to ensure that all new programs and activities—including non-major weapon systems and infrastructure—are thoroughly evaluated. As a result, some acquisition and construction programs could slip by without effective planning to prevent and control corrosion. In addition, DOD will miss an opportunity to strengthen its efforts to reduce the impact of corrosion on all new acquisitions and facilities in the future. Without fully addressing the strategy's weaknesses, the effects of corrosion will continue to exact a tremendous toll on the financial and operational condition of the military. Recommendations for Executive Action To provide better assurance that the Department of Defense's long-term corrosion strategy is successfully implemented as envisioned by Congress, we are making five recommendations.
We are recommending that the Secretary of Defense instruct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with the DOD Comptroller, to take the following actions: establish a date to complete the corrosion baseline study well before its original estimated completion date of 2011 so that cost-effective resource priorities and results-oriented performance measures can be established to monitor progress in reducing corrosion and its impacts on equipment and infrastructure; establish a funding mechanism to implement the corrosion strategy that would be consistent with the strategy's long-term focus; and submit to Congress, as part of the fiscal year 2006 budget submission, a report identifying the long-term funding and personnel resources needed to implement the strategy, a status report on corrosion reduction projects funded in fiscal year 2005, and the status of the baseline study. In addition, we recommend that the Secretaries of the military services establish policy guidance that would include the review of the corrosion prevention and control plans of non-major weapons systems and infrastructure programs. Finally, we recommend that the Chairman, Joint Chiefs of Staff, direct the Focused Logistics Capabilities Board to include corrosion prevention issues in its sustainability assessments of military weapon systems. Agency Comments and Our Evaluation In commenting on a draft of this report, the Director of Defense Procurement and Acquisitions Policy concurred with all five of our recommendations. The comments are included in appendix II of this report. In concurring with our recommendation to complete a corrosion baseline study as soon as possible, DOD noted that, as part of the long-term strategic plan, it would continue its efforts to evaluate corrosion costs.
However, DOD did not indicate when it would complete the overall, departmentwide baseline study of corrosion costs that we believe is essential for establishing cost-effective resource priorities and tracking progress towards reducing corrosion and its impacts on equipment and infrastructure. We continue to believe that this baseline study should be completed as soon as possible. Therefore, we have modified our recommendation to be more specific and stated that DOD should establish a date to complete the corrosion baseline study well before its original estimated completion date of 2011. In concurring with our recommendation to establish a funding mechanism to implement the corrosion strategy that would be consistent with the strategy’s long-term focus, DOD stated that the corrosion office would submit funding requests through the Planning, Programming, Budgeting, and Execution process. In addition, DOD noted that funding requests for corrosion prevention would compete for funds with other DOD programs based on need priorities and fiscal constraints. Although DOD did not provide specific details, we would expect that funding requests for corrosion would be made during the budget submission process and be included in DOD’s submission to Congress rather than be made through budget change proposals or offsets after funds are obligated. We would also expect that corrosion prevention funding estimates would be included in the Future Years Defense Program. Unless DOD adopts these types of approaches, corrosion prevention funding will continue to receive a lower priority than other DOD efforts, and as a result, DOD will lose the opportunity to save billions of dollars in avoidable maintenance costs and to improve the safety and readiness of military equipment and infrastructure. 
In concurring with our recommendation that the Secretaries of the military services establish policy guidance calling for reviews of corrosion prevention and control plans of non-major weapons systems and infrastructure programs, DOD indicated that it would encourage the Secretaries to implement such reviews. DOD also stated that non-major programs are reviewed subject to the requirements of different acquisition authorities within the military services. We do not believe that DOD’s comments are fully responsive to our recommendation. We continue to believe that non-major weapons systems experience corrosion problems similar to those experienced by major weapons systems and that they would benefit from the same kind of corrosion prevention plan review. Our recommendation also applies to infrastructure programs that are primarily managed by the military services. We recognize that the authority to manage the activities of non-major weapons systems and infrastructure programs lies, for the most part, with the military services and that is why our recommendation is directed to the Secretaries of the services. As a result, we would expect the Secretaries to implement the recommendation by establishing policy guidance appropriate to their respective services. We are sending copies of this report to the Secretary of Defense; the Director, Office of Management and Budget; and other interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-8365 if you or your staff have any questions concerning this report. Key contributors to this report were Lawson Gist, Jr., Allen Westheimer, Hector Wong, Nancy Benco, and Katherine Lenane. 
Appendix I: Scope and Methodology To assess each of the three key areas of the report, we held numerous discussions with officials of DOD's Corrosion Policy and Oversight task force and reviewed relevant DOD documents, including the final strategy report to Congress. Furthermore, to determine the adequacy of each key area, we applied internal control tools and results-oriented performance standards that are necessary components of successful management activities in departments and, by extension, individual programs. To assess whether DOD's corrosion strategy identified and obtained resources to prevent and mitigate corrosion on equipment and infrastructure, we reviewed funding requirements and cost estimates for DOD and the military services and spoke to DOD officials about unfunded corrosion prevention project requirements, the identification of funding resources, and future-year funding requirements. We also reviewed the unfunded service requirements list and the fiscal year 2004 corrosion prevention projects list. We interviewed DOD Comptroller officials and discussed the fiscal year 2005 budget request and the prospects for future-year funding. We also discussed our review of DOD's Program Budget Directive document to understand why the task force did not have its own budgeted account. To determine whether the strategy's performance measures and baseline data were adequate to prevent and mitigate corrosion DOD-wide, we interviewed the leader of the task force working group for Metrics, Impacts, and Sustainment about the development of the strategy's performance measures, barriers to gathering cost data across the military services, and plans to develop a corrosion cost baseline. We analyzed the costs used to prepare existing cost impact studies, particularly the studies the metrics working group plans to use to help establish the baseline.
We observed meetings and internal discussions of the working group for Metrics, Impacts, and Sustainment at four separate corrosion forums sponsored by the task force. We also reviewed corrosion prevention documents related to the development of performance metrics and the baseline study. To assess the adequacy of the strategy's policy guidance for preventing and mitigating corrosion, we met with the Office of the Deputy Under Secretary of Defense for Defense Systems, the Joint Chiefs of Staff for Logistics, and members of the task force's working group for Policy and Requirements. To determine how the corrosion policy affected military infrastructure, we interviewed officials in the Office of the Deputy Under Secretary of Defense for Installations and Environment and members of the task force's working group for Facilities. We also attended the TriService Corrosion Conference, the Army Corrosion Conference, and all four corrosion forums sponsored by the corrosion task force to better understand the role of policy and its impact on military equipment and infrastructure. We also reviewed relevant policy documents, memos, instructions, and regulations. To assess the reliability of the military services' estimated funding needs for corrosion prevention projects for fiscal years 2004 through 2009, we (1) interviewed officials knowledgeable about the data and (2) assessed related funding requirements studies and reports. We determined that the data were sufficiently reliable for the purposes of this report. We conducted our review between November 2003 and April 2004 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Department of Defense
Each year, the Department of Defense (DOD) spends an estimated $20 billion to repair the damage to military equipment and infrastructure caused by corrosion. Furthermore, corrosion profoundly affects military readiness as well as the safety of military personnel. In the Bob Stump National Defense Authorization Act for Fiscal Year 2003, Congress directed that DOD develop a long-term corrosion strategy, including specific requirements, and that GAO assess it. DOD submitted its strategy in December 2003. This report assesses the potential of the corrosion strategy (in terms of three elements--resources, performance metrics, and policy guidance) to effectively prevent and mitigate corrosion and its effects on military equipment and infrastructure. While DOD's new long-term corrosion strategy generally addresses the requirements in the congressional mandate, it falls short of the comprehensive plan needed to successfully implement the strategy and manage DOD's extensive corrosion problems in the future. An effective, results-oriented strategy identifies the resources required to achieve its goals and the outcome-based performance metrics that can measure progress toward achieving those goals. Without addressing certain key elements, the strategy is unlikely to serve as an effective tool in preventing and mitigating corrosion and its effects on military equipment and infrastructure. These shortcomings could lead to the loss of billions of dollars in avoidable maintenance costs and the degradation of safety and readiness. GAO's review of three key elements showed the following. Funding and personnel resources--The strategy does not identify the level of funding and personnel resources needed to implement the corrosion reduction plan in the near or long term. Officials in DOD's corrosion office said that resource needs are still being determined and firm estimates should be available in December 2004.
However, preliminary projections made by the corrosion task force indicated that the DOD-wide corrosion reduction program would require about $1.9 billion for fiscal years 2004 through 2009. DOD and the services, however, have not allocated any funds for fiscal year 2004 and have requested less than 10 percent of the task force's fiscal year 2005 estimate. While the strategy calls for a mechanism that ensures sustained, long-term funding, DOD has been using a year-by-year funding approach. Performance measures and milestones--While the strategy includes some performance measures and milestones, they are not the results-oriented metrics needed to successfully monitor the program's progress. In addition, DOD does not plan to complete a critically needed corrosion cost baseline study until 2011 because of limited funding. Without results-oriented metrics and a baseline, DOD will not be in a sound position to establish cost-effective resource priorities or monitor progress toward corrosion reduction. Policy guidance--While the strategy strengthens DOD's policy guidance on corrosion prevention and mitigation, improvements can be made. The new guidance establishes a review process for corrosion prevention plans for major weapon systems programs, such as the Joint Strike Fighter. However, the guidance does not extend the review to non-major weapons systems and infrastructure programs, which are under the purview of the military services. The guidance also does not require the Chairman, Joint Chiefs of Staff's Focused Logistics Capabilities Board to consider corrosion prevention planning when it reviews project requirements.
Background Executive Order 12898 stated that to the extent practicable and permitted by law, each federal agency, including the EPA, “…shall make achieving environmental justice part of its mission by identifying and addressing, as appropriate, the disproportionately high and adverse human health or environmental effects of its programs, policies, and activities on minority populations and low-income populations in the United States…” In response to the 1994 order, among other things, the EPA Administrator issued guidance the same year providing that environmental justice should be considered early in the rule-making process. EPA continued to provide guidance regarding environmental justice in the following years. For example, in 1995, EPA issued an Environmental Justice Strategy that included, among other provisions, (1) ensuring that environmental justice is incorporated into the agency’s regulatory process, (2) continuing to develop human exposure data through model development, and (3) enhancing public participation in agency decision making. The Office of Environmental Justice, located within EPA’s Office of Enforcement and Compliance Assurance, provides a central point for the agency to address environmental and human health concerns in minority communities and/or low-income communities. However, the agency’s program offices also play essential roles. The key program office dealing with air quality issues is the agency’s Office of Air and Radiation. In fulfilling its Clean Air Act responsibilities, the Office works with state and local governments and other entities to regulate air emissions of various substances that harm human health. It also sets primary national ambient air quality standards for six principal pollutants (carbon monoxide, nitrogen oxides, sulfur dioxide, particulate matter, ground level ozone, and lead) that harm human health and the environment. 
These standards are to be set at a level that protects human health with an adequate margin of safety, which, according to EPA, includes protecting sensitive populations, such as the elderly and people with respiratory or circulatory problems. The Office of Air and Radiation has a multistage process for developing clean air and other rules that it considers a high priority. Initially, a workgroup chair is chosen from the lead program office—normally the Office of Air and Radiation in the case of clean air rulemakings. The workgroup chair assigns the rule one of the three priority levels, and EPA’s top management makes a final determination of the rule’s priority. The priority level assigned depends on such factors as the level of the Administrator’s involvement and whether more than one office in the agency is involved. The gasoline, diesel, and ozone implementation rules were classified as high-priority rules on the basis of these factors. They were also deemed high priority because they were estimated to have an effect on the economy of at least $100 million per year or were viewed as raising novel legal and/or policy issues. For high-priority rules, the workgroup chair is primarily responsible for ensuring that the necessary work gets done and the process is documented. Other workgroup members are assigned from the lead program office and, in the case of the two highest priority rules, from other offices. Among its key functions, the workgroup (1) prepares a plan for developing the rule, (2) seeks early input from senior management, (3) consults with stakeholders, (4) collects data and analyzes issues, (5) analyzes alternative options, and (6) recommends one or more options to agency management. In addition, a workgroup economist typically prepares an economic review of the proposed rule’s costs to society. According to EPA, the “ultimate purpose” of an economic review is to inform decision makers of the social welfare consequences of the rule. 
After approval by relevant offices within EPA, the proposed rule is published in the Federal Register, the public is invited to comment on it, and EPA considers the comments. Comments may address any aspect of the proposed rule, including whether environmental justice concerns are raised and appropriately addressed in the proposed rule. Sometimes, prior to the publication of the proposed rule, EPA publishes an Advance Notice of Proposed Rulemaking in the Federal Register. The notice provides an opportunity for interested stakeholders to provide input to EPA early in the process, and the agency takes such comments into account to the extent it believes is appropriate. As required by the Clean Air Act, when finalizing a rule, EPA must respond to each significant comment raised during the comment period. In addition, EPA’s public involvement policy states that agency officials should explain how they considered the comments, including any change in the rule or the reason the agency did not make any changes. After these tasks are completed, the rule, if it is significant, is sent to OMB for approval. Once OMB approves the final rule and the Administrator signs it, the rule is published in the Federal Register. After a specified time period, the rule takes effect. EPA Generally Devoted Little Attention to Environmental Justice in Drafting Three Rules and Considered It to Varying Degrees in Finalizing Them When drafting the three clean air rules, EPA generally devoted little attention to environmental justice. We found, for example, that while EPA guidance states that workgroups should consider environmental justice early in the rulemaking process, this was accomplished only to a limited extent. Key contributing factors included a lack of guidance and training for workgroup members on identifying environmental justice issues. 
In addition, while EPA officials stated that economic reviews of proposed rules considered potential environmental justice impacts, the gasoline and diesel rules did not provide analyses of such impacts, nor did EPA identify all the types of data that would have been needed to perform such analyses. In finalizing the three rules, EPA considered environmental justice to varying degrees although, in general, the agency rarely provided a clear rationale for its decisions on environmental justice-related matters. For the three rules we examined, concerns about whether environmental justice was being considered sufficiently early in the rulemaking process first became evident in its omission from the agency’s “Tiering Form.” Once a workgroup chair is designated to lead a rulemaking effort, the chair completes this key form to alert senior managers to potential issues related to compliance with statutes, executive orders, and other matters. In each case, however, the form did not include a question regarding the rule’s potential to raise environmental justice concerns, nor did we find any mention of environmental justice on the completed form. Beyond this omission, EPA officials had differing recollections about the extent to which the three workgroups considered environmental justice at this early stage of the rulemaking process. The chairs of the workgroups for the two mobile source rules told us that they did not recall any specific time when they considered environmental justice while drafting the rules. Other EPA officials associated with these rules said environmental justice was considered, but provided no documentation to this effect. Similarly, the chair of the ozone workgroup told us that his group considered environmental justice, but could not provide any specific information. 
He did, however, provide a document stating that compliance with executive orders, including one related to low-income and minority populations, would be a part of the economic review that would take place later in the process. Overall, we identified three factors that may have limited the ability of workgroups to identify potential environmental justice concerns early in the rulemaking process. First, each of the three workgroup chairs told us that they received no guidance on how to analyze environmental justice concerns in rulemaking. Second, as a related matter, each said they received little, if any, environmental justice training. Two chairs did not know whether other members of the workgroups had received any training, and the third chair said at least one member did receive some training. Some EPA officials involved in developing these three rules told us that it would have been useful to have a better understanding of the definition of environmental justice and how to consider environmental justice issues in rulemaking. Finally, the Office of Air and Radiation’s environmental justice coordinators—whose full-time responsibility is to promote environmental justice—were not involved in drafting any of the three rules. As required, an economic review of the costs and certain other features was prepared for all three rules. According to EPA officials, however, the economic review of the two mobile source rules did not include an analysis of environmental justice for various reasons, including the fact that EPA did not have a model with the ability to distinguish localized adverse impacts on a specific community or population. EPA’s economic review of the 2004 ozone rule did discuss environmental justice, claiming that the rule would not raise environmental justice concerns. However, it based this claim on an earlier analysis of a 1997 rule that established the 8-hour ozone national ambient air quality standard. 
Yet rather than indicating that the 1997 ozone rule did not raise environmental justice concerns, this earlier economic review said it was not possible to rigorously consider the potential environmental justice effects because the states were responsible for its implementation. Hence, the inability of EPA to rigorously consider environmental justice in the economic review of the 1997 rule appears to contradict EPA’s subsequent statement that there were no environmental justice concerns raised by the 2004 ozone implementation rule. In finalizing each of the three rules, EPA considered environmental justice to varying degrees, but the gasoline rule in particular provided a questionable example of how comments and information related to environmental justice were received and handled. As noted earlier in this testimony, the Clean Air Act requires that a final rule must be accompanied by a response to each significant comment raised during the comment period. In addition, according to EPA’s public involvement policy, agency officials should explain how they considered the comments, including any change in the rule or the reason the agency did not make any changes. In the case of the gasoline rule, representatives of the petroleum industry, environmental groups, and others had asserted during the comment period that the proposed rule did in fact raise significant environmental justice concerns. One commenter claimed that inequities arose from the fact that while the national air quality benefits were broadly distributed across the country, higher per capita air quality costs were disproportionately confined to areas around refineries. Despite comments such as these, EPA’s final rule did not state explicitly whether it would ultimately raise an environmental justice concern, although EPA officials told us in late 2004 that it would not. Furthermore, EPA did not publish the data and assumptions supporting its position. 
In fact, an unpublished analysis EPA developed before finalizing the rule appeared to suggest that environmental justice may indeed have been an issue. Specifically, EPA’s analysis showed that harmful air emissions would increase in 26 of the 86 counties with refineries affected by the rule. According to EPA’s analysis, one or both types of emissions—nitrogen oxides and volatile organic compounds—could be greater in the 26 counties than the rule’s benefit of decreased vehicle emissions. In one case involving a Louisiana parish, EPA estimated that net emissions of nitrogen oxides could increase by 298 tons in 1 year as a result of the rule to refine cleaner gasoline. Under EPA’s rulemaking process, the agency prepares a final economic review after considering public comments. EPA guidance indicates that this final economic review, like the economic review during the proposal stage, should identify the distribution of the rule’s social costs across society. In the case of the three air rules, however, EPA completed a final economic review after receiving public comments but performed no environmental justice analyses. The publication of the final rules gave EPA another opportunity to explain how it considered environmental justice in the rule’s development. When EPA published the final rules, however, two of the three rules did not explicitly state whether they would raise an environmental justice concern. Only the ozone rule stated explicitly that it would not raise an environmental justice concern. GAO’s Recommendations and EPA’s Response We made four recommendations to help EPA resolve the problems identified by our study. In its June 10, 2005 letter on a draft of our report, EPA initially said it disagreed with the recommendations, saying it was already paying appropriate attention to environmental justice. However, EPA responded more positively to each of these recommendations in an August 24, 2006 letter. 
The first recommendation called upon EPA rulemaking workgroups to devote attention to environmental justice while drafting and finalizing clean air rules. EPA responded that to ensure consideration of environmental justice in the development of regulations, the Office of Environmental Justice was made an ex officio member of the agency’s Regulatory Steering Committee, the body that oversees regulatory policy for EPA and the development of its rules. The letter also said that (1) the agency’s Office of Policy, Economics and Innovation (responsible in part for providing support and guidance to EPA’s program offices and regions as they develop their regulations) convened an agency-wide workgroup to consider where environmental justice might be considered in rulemakings and (2) it was developing “template language” to help rule writers communicate findings regarding environmental justice in the preamble of rules. Second, to enhance workgroups’ ability to identify potential environmental justice issues, we called on EPA to (a) provide workgroup members with guidance and training to help them identify potential environmental justice problems and (b) involve environmental justice coordinators in the workgroups when appropriate. In response to the call for better training and guidance, EPA said it was supplementing existing training with additional courses to create a comprehensive curriculum that will meet the needs of agency rule writers. Specifically, it explained that its Office of Policy, Economics, and Innovation was focusing on how agency staff can best be trained to consider environmental justice during the regulation development process; while the Office of Air and Radiation had already developed environmental justice training tailored to the specific needs of that office. 
Among other training opportunities highlighted in the letter was a new on-line course offered by the Office of Environmental Justice that addresses a broad range of environmental justice issues. EPA also cited an initiative by the Office of Air and Radiation’s Office of Air Quality Planning and Standards to use a regulatory development checklist to ensure that potential environmental justice issues and concerns are considered and addressed at each stage of the rulemaking process. In response to our call for greater involvement of Environmental Justice coordinators in workgroup activities, EPA said that as an ex officio member of the Regulatory Steering Committee, the Office of Environmental Justice will be able to keep the program office environmental justice coordinators informed about new and ongoing rulemakings with potential environmental justice implications. It said that the mechanism for this communication would be monthly conference calls between the Office of Environmental Justice and the environmental justice coordinators. Third, we recommended that the Administrator improve assessments of potential environmental justice impacts in economic reviews by identifying the data and developing the modeling techniques needed to assess such impacts. EPA responded that its Office of Air and Radiation was reviewing information in its air models to assess which demographic data could be introduced and analyzed to predict possible environmental justice effects. It also said it was considering additional economic guidance on methodological issues typically encountered when examining a proposed rule’s impacts on subpopulations highlighted in the executive order. 
Finally, it noted that the Office of Air and Radiation was assessing models and tools to (1) determine the data required to identify communities of concern, (2) quantify environmental health, social and economic impacts on these communities, and (3) determine whether these impacts are disproportionately high and adverse. Fourth, we recommended that the EPA Administrator direct cognizant officials to respond more fully to public comments on environmental justice by, for example, better explaining the rationale for EPA’s beliefs and by providing supporting data. EPA said that as a matter of policy, the agency includes a response to comments in the preamble of a final rule or in a separate “Response to Comments” document in the public docket. The agency noted, however, that it will re-emphasize the need to respond to comments fully, to include the rationale for its regulatory approach, and to better describe its supporting data. EPA’s Progress in Responding to Our Recommendations On July 18, 2007, we met with EPA officials to obtain more up-to-date information on EPA’s environmental justice activities, focusing in particular on those most relevant to our report’s recommendations. While we have not had the opportunity to independently verify the information provided in the few days since that meeting, our discussions did provide insights into EPA’s progress in improving its environmental justice process in the two years since our report was issued. The following discusses EPA activities as they relate to each of our four recommendations. 
First, regarding our recommendation that workgroups consider environmental justice while drafting and finalizing regulations, EPA had emphasized in its August 2006 letter that making the Office of Environmental Justice an ex officio member of the Agency’s Regulatory Steering Committee would not only allow it to be aware of all important EPA regulatory actions from their inception through rule development and final agency review, but more importantly, would allow it to participate on workgroups that are developing actions with potential environmental justice implications and/or recommend that workgroups consider environmental justice issues. To date, however, the Office of Environmental Justice has not participated directly in any of the 103 air rules that have been proposed or finalized since EPA’s August 2006 letter. According to EPA officials, the Office of Environmental Justice did participate in one workgroup of the Office of Solid Waste and Emergency Response, and provided comments on the final agency review for the Toxic Release Inventory Reporting Burden Reduction Rule. EPA officials also emphasized that its Tiering Form would be revised to include a question on environmental justice. As noted earlier, this key form is completed by workgroup chairs to alert senior managers to the potential issues related to compliance with statutes, executive orders, and other matters. However, two years after we cited the omission of environmental justice from the Tiering Form, EPA explained that its inclusion has been delayed because it is only one of several issues being considered for inclusion in the Tiering process. Second, regarding our recommendation to (1) improve training and (2) include Environmental Justice coordinators from EPA’s program offices in workgroups when appropriate, our latest information on EPA’s progress shows mixed results. 
On the one hand, EPA continues to provide an environmental justice training course that began in 2002, and has included environmental justice in recent courses to help rule writers understand how environmental justice ties into the rulemaking process. On the other hand, some training courses that were planned have not yet been developed. Specifically, the Office of Policy, Economics, and Innovation has not completed the planned development of training on ways to consider environmental justice during the regulation development process. In addition, while EPA said in its August 2006 letter that the Office of Air and Radiation had developed environmental justice training tailored to that office, air officials told us last week that in fact they were unable to develop the training due to staff turnover and other reasons. Regarding our recommendation to involve the Program Offices’ Environmental Justice coordinators in rulemaking workgroups when appropriate, EPA’s August 2006 letter had said that the Coordinators’ involvement would be facilitated through the Office of Environmental Justice’s participation on the Regulatory Steering Committee. Specifically, it said that the Office of Environmental Justice would be “able to keep the agency’s Coordinators fully informed about new and ongoing rulemakings with potential Environmental Justice implications about which the coordinators may want to participate.” According to EPA officials, however, this active, hands-on participation by Environmental Justice coordinators in rulemakings has yet to occur. Third, regarding our recommendation that EPA improve assessments of potential environmental justice impacts in economic reviews by identifying the data and developing the modeling techniques that are needed to assess such impacts, EPA officials said that their data and models have improved since our 2005 report, but that their level of sophistication has not reached their goal for purposes of environmental justice considerations. 
EPA officials said that to understand how development of a rule might affect environmental justice for specific communities, further improvements are needed in modeling, and more specific data are needed about the socio-economic, health, and environmental composition of communities. Only when they have achieved such modeling and data improvements can they develop guidance on conducting an economic analysis of environmental justice issues. According to EPA, among other things, economists within the Office of Air and Radiation are continuing to evaluate and enhance their models in a way that will further improve consideration of environmental justice during rulemaking. For example, EPA officials told us that at the end of July, a contractor will begin to analyze the environmental justice implications of a yet-to-be-determined regulation to control a specific air pollutant. EPA expects that the study, due in June 2008, will give the agency information about what socio-economic groups experience the benefits of a particular air regulation, and which ones bear the costs. EPA expects that the analysis will serve as a prototype for analyses of other pollutants. Fourth, regarding our recommendation that the Administrator direct cognizant officials to respond more fully to public comments on environmental justice, EPA officials cited one example of an air rule in which the Office of Air and Radiation received comments from tribes and other commenters who believed that the proposed National Ambient Air Quality Standard for PM 10-2.5 raised environmental justice concerns. According to the officials, the agency discussed the comments in the preamble to the final rule and in the associated response-to-comments document. Nonetheless, the officials with whom we met said they were unaware of any memoranda or revised guidance that would encourage more global, EPA-wide progress on this important issue. 
Concluding Observation Our 2005 report concluded that the manner in which EPA has incorporated environmental justice concerns into its air rulemaking process fell short of the goals set forth in Executive Order 12898. One year after that report, EPA committed to a number of actions to be taken to address these issues. Yet an additional year later, most of these commitments remain largely unfulfilled. While we acknowledge the technical and financial challenges involved in moving forward on many of these issues, EPA’s experience to date suggests the need for measurable benchmarks—both to serve as goals to strive for in achieving environmental justice in its rulemaking process, and to hold cognizant officials accountable for making meaningful progress. Madam Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. Contacts and Acknowledgements Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact John B. Stephenson, Director, Natural Resources and Environment (202) 512-3841, or stephensonj@gao.gov. Key contributors to this testimony included Steven Elstein, Karen Keegan, and Daniel Semick. Other contributors included Marc Castellano, John Delicath, Brenna Guarneros, Terry Horner, Richard Johnson, Carol Kolarik, Alison O’Neil, and Cynthia Taylor. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A 1994 Executive Order sought to ensure that minority and low-income populations are not subjected to disproportionately high levels of environmental risk. Studies have shown that these groups are indeed disproportionately exposed to air pollution and other environmental and health problems. The Order sought to address the problem by requiring EPA and other federal agencies to make achieving environmental justice part of their missions. In July 2005, GAO issued a report entitled, Environmental Justice: EPA Should Devote More Attention to Environmental Justice When Developing Clean Air Rules (GAO-05-289). Focusing on three specific rules for detailed study, the report identified a number of weaknesses in EPA's approach to ensuring that environmental justice is considered from the early stages of rule development through their issuance. The report made several recommendations, to which EPA replied in an August 24, 2006 letter. GAO also met recently with cognizant EPA staff to obtain updated information on the agency's responses to these recommendations. In this testimony, GAO (1) summarizes the key findings of its 2005 report, (2) outlines its recommendations to EPA and EPA's August 2006 responses, and (3) provides updated information on subsequent EPA actions. EPA generally devoted little attention to environmental justice when drafting three significant clean air rules between fiscal years 2000 and 2004. GAO's 2005 report concluded, for example, that while EPA guidance on rulemaking states that workgroups should consider environmental justice early in the process, a lack of guidance and training for workgroup members on how to identify potential environmental justice impacts limited their ability to analyze such issues. Similarly, while EPA considered environmental justice to varying degrees in the final stages of the rulemaking process, in general the agency rarely provided a clear rationale for its decisions on environmental justice-related matters. 
For example, in responding to comments during the final phase of one of the rules, EPA asserted that the rule would not have any disproportionate impacts on low-income or minority communities, but did not publish any data or the agency's assumptions in support of that conclusion. Among its recommendations, GAO called on EPA to ensure that its rulemaking workgroups devote attention to environmental justice while drafting and finalizing clean air rules. EPA's August 2006 letter responded that it had made its Office of Environmental Justice an ex officio member of the Regulatory Steering Committee so that it would be aware of important regulations under development and participate in workgroups as necessary. GAO also recommended that EPA improve the way environmental justice impacts are addressed in its economic reviews by identifying the data and developing the modeling techniques needed to assess such impacts. EPA responded that its Office of Air and Radiation was examining ways to improve its air models so it could better account for the socioeconomic variables identified in the Executive Order. GAO also recommended that cognizant EPA officials respond more fully to public comments on environmental justice by better explaining their rationale and by providing the supporting data for the agency's decisions. EPA responded that it would re-emphasize the need to respond fully to public comments, include the rationale for its regulatory approach, and describe its supporting data. Recent discussions between GAO and EPA officials suggest that some progress has been made to incorporate environmental justice concerns in the agency's air rulemaking, but that significant challenges remain. For example, while the Office of Environmental Justice may be an ex officio member of the Regulatory Steering Committee, it has not participated directly in any air rules that have been proposed or finalized since EPA's August 2006 letter to GAO. 
Also, according to EPA staff, some of the training courses that were planned have not yet been developed due to staff turnover among other reasons. When asked about GAO's recommendation that cognizant officials respond more fully to public comments on environmental justice, the EPA officials cited a recent rulemaking in which this was done. But the officials said they were unaware of any memoranda or revised guidance that would encourage more global progress on this key issue.
Background Currently, U.S. workers rely primarily on their employers to provide both wages and benefits (such as paid leave, retirement, and health insurance) as part of a total compensation package, with wages comprising approximately 70 percent of total compensation. Of the benefits package employers provide to employees, almost one-third is mandated by law and includes contributions to programs such as Social Security, Medicare, workers’ compensation, and unemployment insurance. The remaining portion of the benefits package is discretionary and typically includes paid leave, retirement income, and health insurance—some of the more costly benefits. Over the last century, employer-sponsored benefits have become an increasingly important part of compensating workers. Prior to the turn of the 20th century, workers relied primarily on their own, their families’, or their communities’ resources in the event of a health or economic emergency. With the advent of the industrial revolution in the United States, unions began to offer disability and death coverage to workers in order to protect them against workplace risks of factory work. The tight labor market of World War II, along with Supreme Court rulings and federal legislation, helped make benefits a legitimate part of collective bargaining and, in part, fueled the offering of employer-sponsored benefits. Outside the benefits that are legally required, those benefits that employers choose to provide serve a number of purposes. From a business perspective, voluntary benefits help employers attract and retain highly skilled workers. For example, pension plans can be a means of attracting workers, reducing turnover, and encouraging productivity. Defined benefit pension plans, which are typically offered as periodic payments over a specified period beginning at retirement age, can be used to foster a worker’s long-term commitment to his or her employer. 
Defined contribution pension plans, which are individual accounts to which employers and/or employees make contributions, may be attractive to employees who desire more portable benefits. In deciding to offer benefits, companies must assess the nature of their particular workforce to determine whether offering benefits is a necessary employment inducement. Employers may also choose to sponsor benefit plans because of favorable federal tax treatment for certain forms of compensation. To encourage them to establish and maintain pension plans, the federal government provides preferential tax treatment under the Internal Revenue Code for plans that meet certain requirements. A purpose of tax preferences for employer-sponsored pensions is to encourage savings for workers’ retirement. Pension tax preferences are structured to strike a balance between providing incentives for employers to start and maintain voluntary, tax-qualified pension plans and ensuring participants receive an equitable share of the tax-favored benefits. In fiscal year 2004, the federal government was expected to forgo an estimated $95 billion in federal income tax revenue due to the tax exclusion for employer-sponsored pension plans. Tax policies also contain significant tax benefits for employer-sponsored health insurance and medical care. Most notably, the tax exclusion for health care permits the value of employer-paid health insurance premiums to be excluded from employees’ taxable earnings for income tax purposes. It also excludes the value of the premiums from the calculation of Social Security and Medicare payroll taxes for both employers and employees. The tax exclusion is credited with increasing health coverage for employees. The risk pooling under group health insurance allows employees to obtain insurance at lower cost than in the individual insurance market. 
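The mechanics of the exclusion can be illustrated with a back-of-envelope sketch; the premium amount and tax rates below are illustrative assumptions, not figures from this report.

```python
def exclusion_tax_savings(premium, income_tax_rate, payroll_tax_rate=0.0765):
    """Estimate the employee-side taxes avoided when an employer-paid
    health premium is excluded from taxable wages. The 7.65% rate is
    the combined Social Security/Medicare employee payroll tax; the
    employer avoids a matching share on its side as well."""
    return premium * (income_tax_rate + payroll_tax_rate)

# Illustrative only: a $4,000 annual premium for a worker in a 25% bracket.
savings = exclusion_tax_savings(4000, 0.25)  # 4000 * 0.3265
```

Because both the income tax and payroll tax bases shrink, the subsidy grows with the worker's marginal rate, which is one reason the exclusion is credited with broadening employer-sponsored coverage.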
The federal government was expected to forgo an estimated $153 billion in taxes in fiscal year 2004 due to the exclusion of employer contributions for health care. Recent developments are leading employers to reconsider how they provide benefits under the employer-based system. An aging population with longer life expectancies increases the long-term obligations of companies that provide defined benefit pension plans. Some companies have cited this obligation as a contributing reason for declaring bankruptcy, reorganizing, and terminating large plans of this type. Advances in expensive medical technology, increased use of high-cost services and procedures, and an aging population have contributed to escalating health care costs. Advances in other technologies have increased global competition from foreign firms. In response to such competition, U.S. firms have continued to look for ways to reduce their costs, such as offshoring and using contingent workers (many of whom are not offered benefits). In addition to employer-sponsored benefits, multiple federal programs supplement workers’ and retirees’ benefits. For example, Social Security pays monthly cash benefits to more than 36 million eligible retired or disabled workers. Although intended to complement other sources of retirement income, in many cases Social Security provides the only such income. In addition, the federal-state Medicaid program provides health insurance to certain low-income individuals, including older Americans in need of long-term care who meet financial eligibility and other requirements. The most recent figures show that Medicare provides health insurance to 35 million individuals age 65 and older and to more than 6 million disabled individuals under age 65. 
Average Compensation Costs Grew by 12 Percent between 1991 and 2005, with Benefits Outgrowing Wages by 8 Percentage Points Private employers’ average cost of total compensation (composed of wages and benefits) for current workers grew by 12 percent between 1991 and 2005, but benefit costs outpaced wages in the most recent years after controlling for inflation. The increases in average total compensation costs were greater for employers with medium and large establishments, full-time workers, and union workers than for those with small establishments, part-time workers, and nonunion workers. The overall real cost of benefits grew by 18 percent, while real wages grew by 10 percent. Benefits represented more than a quarter of total compensation costs. While Growth in Compensation Costs Fluctuated between 1991 and 2005, Average Benefit Cost Increases Had Outgrown Average Wage Increases by the End of the Period On average, employers’ overall inflation-adjusted cost for total compensation rose about 12 percent between 1991 and 2005. Both components of total compensation—wages and benefits—also grew after adjusting for inflation, but at different rates. By the end of the period, the cost of total benefits had grown by approximately 18 percent and wages had increased by 10 percent (see table 1). By 2005, benefits accounted for 29 percent of total compensation while wages made up 71 percent of the workers’ compensation package. Across the 15 years under examination, the costs of wages and benefits generally grew in tandem, albeit at different rates (see fig. 1). The noteworthy exception was after 2002, when benefit costs continued in a steep ascent and wages began to flatten, resulting in an almost 8 percentage point difference between the growth rates of the two. The recent divergence between benefits and wages is not unprecedented; there was a 6 percentage point difference between wage increases and benefit cost increases in 1994. 
However, what makes the divergence between the growth of wages and benefits after 2002 compelling is that it was preceded by a steady increase in both. The result, therefore, has been a significantly larger real dollar cost to employers—roughly $1,000 more per year in benefit costs for each full-time employee—when comparing 1994 to 2005. Increases in the Costs of Benefits Outpaced Wage Growth among All Types of Employers, Although Average Cost Increases Varied As was the case in the aggregate, by 2005, growth in the real cost of benefits had outpaced the increase in wages for each type of employer (see table 2). For employers of union workers this effect was even more pronounced; these employers experienced benefit cost increases greater than wage increases over most of the time period and saw several years of no growth in wages. This pattern of benefit growth outpacing wage growth was least pronounced for employers of part-time workers, but it still held. (See app. II, figs. 5 to 13 for all employers.) While employers uniformly saw average real benefit costs grow more than average real wages, the overall increase in total compensation varied by employer type. Employers at medium (100 to 499 workers) and large establishments (500 or more workers) experienced increases in total compensation costs of roughly 20 percent. In contrast, small establishments did not experience statistically significant increases in total compensation costs. Employers’ total compensation costs for full-time workers increased by 16 percent, as compared with the 13 percent increase for part-time workers. Employers of unionized workers saw their total compensation costs grow by 21 percent, as compared to the 13 percent increase experienced by employers of nonunion workers. (See app. II, tables 8 to 12 for all employers.) 
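As a consistency check on the aggregate figures (12 percent total growth, 10 percent wage growth, 18 percent benefit growth), total compensation growth can be written as the share-weighted sum of the component growth rates using base-year shares. The 1991 wage share used below is an assumed back-of-envelope value; the report gives only the 2005 split of 71/29.

```python
def total_growth(wage_share, wage_growth, benefit_growth):
    """Growth of total compensation as the base-year-share-weighted
    sum of wage and benefit growth rates."""
    benefit_share = 1.0 - wage_share
    return wage_share * wage_growth + benefit_share * benefit_growth

# Assuming a 1991 wage share of 72.5% (an assumption, not a report figure):
g = total_growth(0.725, 0.10, 0.18)  # ~0.122, consistent with ~12 percent
```

The decomposition also shows why the faster-growing benefit component slowly raises its own share of the compensation package over time.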
The Increase in Employers’ Cost of Benefits Was Largely Composed of Increases in the Cost of Health Insurance and Retirement Benefits The increase in the cost of a total benefits package from 1991 to 2005 was largely composed of increases in the cost of providing health insurance and retirement income. Paid leave had traditionally been the most costly benefit to employers, but by 2005, the cost of health insurance equaled that of paid leave. Of the three benefits, retirement income was the least costly, even though it grew by an estimated 47 percent in real terms during the period, largely between 2004 and 2005. Employers’ Costs for Health Insurance and Retirement Income Increased by over 27 Percent between 1991 and 2005 The increase in the real cost of a total benefits package from 1991 to 2005 was largely composed of increases in the real cost of providing health insurance and retirement income. (See table 3 and fig. 2.) Paid leave had traditionally been the most costly benefit to employers, but by 2005, the cost of health insurance equaled that of paid leave. This occurred, in part, because health insurance costs grew by 28 percent while the costs for paid leave did not show significant growth during the period under study. Of the three benefits, retirement income was the least costly, even though it grew by an estimated 47 percent during the period, largely between 2004 and 2005. In combination, these three benefits represented on average almost 60 percent of an employee’s total benefit package and over 80 percent of employers’ costs for voluntary benefits. Expert panelists discussed the underlying factors driving trends in real costs for employer-sponsored benefits from 1991 to 2003. Regarding trends in retirement income, an expert noted that employers decreased their contributions to funds for defined benefit plans during the 1990s, which was reflected in a decrease in employer spending for retirement income. 
According to the Bureau of Labor Statistics, defined benefit pension plan assets grew rapidly in the middle to late 1990s as the stock market continued to rise, so employers often did not need to contribute funds to defined benefit pension plans. Stock prices generally fell from April 2000 to February 2003, and interest rates on bonds and other investments remained low, requiring employers to contribute more funding to defined benefit plans beginning in 2003 to meet minimum funding requirements. Recent increases in employer costs for retirement benefits can be attributed to a similar phenomenon. Legislation enacted in 2004—the Pension Funding Equity Act—provided 2-year relief for businesses, allowing contributions to be reduced compared to what would otherwise have been required. In the case of health care benefits, in addition to increases in the cost of providing medical services, several factors were noted to drive trends in employer costs. These include the health insurance underwriting cycle, the emergence of managed care, competition, and consolidation in the health care industry. In the underwriting cycle, health insurance companies forecast premium costs and then set their prices either higher to maximize profitability or lower to maximize market share. In the early 1990s, managed care plans lowered their premium prices in order to increase market share, fueling price competition among health insurance companies. However, later in the decade, many plans moved away from tightly managed health care plans. As one expert noted, in the late 1990s, insurer consolidation and mergers led to a more concentrated industry. Research in this area suggests that many of the remaining plans shifted their strategies from gaining market share to improving profitability, stimulating premium increases and spurring the upward trend in costs for employers. 
For Most Employers, Retirement Income Showed the Greatest Percentage Increase Most types of employers experienced larger percentage increases in costs for retirement income than in costs for health insurance and paid leave between 1991 and 2005 (see table 4). This was true for employers whether they had union or nonunion employees and whether they employed part-time or full-time workers. Small establishments were the one exception; health insurance represented their greatest cost increase. Nevertheless, the real dollar costs for health insurance and paid leave remained larger than retirement income costs for all employers. Appendix III, tables 13 to 17, provides real costs for paid leave, retirement income, and health insurance for each employer characteristic between 1991 and 2005. Employees’ Access to Benefits Remained Generally Stable, but Employees Face Greater Costs and Assume More Investment Risk During the period under review, employees’ access to benefits remained stable, but participation rates declined for health benefits, some costs shifted to employees, and employees assumed more investment risk. Between 1996 and 2003, the percentage of employees at establishments that offered health insurance did not change. Also, employers continued to pay approximately the same share of the premium for employee health insurance, but a smaller percentage of employees participated as the real dollar amount of the premiums increased. Some employees also saw increases in their deductibles and co-payments during this time, according to the expert panelists we convened. With regard to retirement income, half of all workers participated in employer-provided retirement plans between 1991 and 2003, but the types of plans shifted more toward defined contribution plans, under which employees assume the investment risk. 
With regard to paid leave, holidays and vacations were generally available to all workers between 1990 and 2003, but a smaller percentage of workers had access to personal leave and sick leave. The Share of Health Care Premiums Paid by Employees and Employers Remained Relatively Stable, but Employee Participation Declined Data presented on premiums, the percentage of workers at establishments offering health insurance, the percentage of workers eligible for health insurance at firms offering the benefit, and the percentage of eligible workers who enroll in the benefit are from the Medical Expenditure Panel Survey-Insurance Component (MEPS IC) and represent the years 1996 to 2003. The data used for this analysis did not allow us to assess the adequacy of coverage or any change in quality. (See app. I for more details.) Premium costs presented here are for single-worker coverage. Family coverage premiums increased by 43 percent between 1996 and 2003—from an annual average of $6,732 to $9,654. The real premium included both the employee’s and the employer’s share. To control for the effect of inflation in health insurance premiums, dollars are reported in 2004 terms using the BLS Consumer Price Index for Medical Care; inflation in medical care has been great, and using an all-items CPI would overstate the growth in premium costs. Some employees’ deductibles and co-payments also increased during this period. The percentage of establishments offering insurance, the percentage of employees eligible, and the percentage of eligible employees enrolled varied across all types of employers. This suggests that some employees were more likely to receive employer-sponsored health insurance than others (see fig. 3). For example, the percentage of employees who worked at small firms (1 to 9 employees) offering health insurance was 46 percent, compared with 99 percent for those in firms of 1,000 or more employees. 
The same was true for the percentage of employees eligible to participate in the health insurance plans offered by companies. For example, 32 percent of those employed part-time were eligible, while 89 percent of those who worked full-time were eligible. This was also the case for participation among those eligible. For most types of employers, over 75 percent of eligible employees enrolled in the company’s health plan. This trend held across firm sizes, most industries, and union statuses. The exceptions were in retail, where the enrollment rate was 67 percent, and among part-time workers, at 48 percent. The health insurance premium increases seen overall held for every type of employer regardless of characteristics such as firm size or industry. For each type, the average annual single-worker premium increased between 1996 and 2003 by at least 24 percent (see fig. 4). By 2003, the average premium ranged between $3,445 and $4,278, after adjusting for inflation. The mining industry experienced the largest increase over the time period, while premiums for employers and workers in the transportation and utilities industry increased the least. Employees’ shares of these premiums ranged between 12 percent and 21 percent. At the high end of the range were employees in the retail industry, which also had one of the largest declines in enrollment across the period examined. About Half of Employees Had Access to Retirement Income Plans, with a Trend Toward Defined Contribution Plans Employee participation in retirement plans did not change significantly between 1990 and 2003. Roughly half of all workers participated in an employer-sponsored retirement plan, and closer to 60 percent of full-time employees did so. However, there was a noticeable shift from defined benefit retirement plans to defined contribution plans (see table 5). 
Employers who sponsor defined benefit retirement plans agree to make future payments during the employee’s retirement. To meet this obligation, employers are responsible for making contributions sufficient to fund promised benefits, investing and managing plan assets, and bearing the investment risk. Under defined contribution retirement plans, employers may make contributions but have no obligations regarding the future sufficiency of those funds. Thus, the move from defined benefit to defined contribution plans shifts the responsibility for providing one’s retirement income to the employee. In addition, while participation in most defined benefit plans is automatic (depending on one’s position), many defined contribution plans require employee contributions before the employer makes a contribution. Paid Leave Was Generally Available to All Workers, but Certain Types of Leave Were Less Available to Part-Time Workers The percentage of employees offered paid leave was relatively stable between 1990 and 2003. Across the period, three-quarters or more of all workers were eligible for paid holidays and vacations. Full-time workers were more likely than part-time workers to be offered employer-sponsored paid leave (see table 6). Experts Agreed That Rising Benefit Costs Are Forcing Private Employers and Their Employees to Make Trade-Offs between Wages and Benefits Experts who reviewed our data found that the data reflected their experience and asserted that rising benefit costs have been leading employers and employees to make increasingly difficult trade-offs between wages and benefits. Maintaining health care and pensions is the main priority for workers, according to union representatives, who said that workers are trading wage increases in order to maintain benefits. A panelist noted that workers consistently choose to preserve health care benefits over increases in cash compensation. 
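The investment-risk shift from defined benefit to defined contribution plans described above can be made concrete with a small sketch: under a defined contribution plan, identical contributions produce different retirement balances depending on market returns, variability a defined benefit plan would absorb on the employer's side. The contribution amount and return paths are purely illustrative.

```python
def dc_balance(annual_contribution, annual_returns):
    """Final balance of a defined contribution account in which the
    employee bears the investment risk: each year's contribution is
    added, then the whole balance earns that year's return."""
    balance = 0.0
    for r in annual_returns:
        balance = (balance + annual_contribution) * (1.0 + r)
    return balance

# Same contributions, different markets -> different outcomes for the employee.
strong_market = dc_balance(5000, [0.08] * 10)
flat_market = dc_balance(5000, [0.00] * 10)  # exactly 10 * 5000 = 50000.0
```

Under a defined benefit plan, by contrast, the promised retirement payment is the same in both scenarios, and the sponsor must make up any shortfall from weak returns.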
On the other hand, a small business leader noted that in his experience some employees, particularly younger people, prefer wage increases to preserving benefits. A panelist explained that it is the rise in the actual dollar costs of benefits that is driving both employers’ and employees’ decisions. Additionally, our compensation data for the past decade provoked a number of observations from the panelists regarding the likelihood of shifting risk to the individual employee. Experts discussed the continuing shift in employer-sponsored retirement income from defined benefit to defined contribution plans. One expert predicted the eventual termination of defined benefit plans, a freeze or decrease in hybrid plans (those that combine features of defined benefit and defined contribution plans), and a shift toward 401(k) savings plans (which are a type of defined contribution plan). Panelists also observed that, with regard to health benefits, employers are experimenting with consumer-directed health care plans, which may also shift more responsibility and risk to the individual employee. In addition, employers are considering changing the way they offer compensation. Experts agreed that there has been a movement from fixed to incentive compensation, wherein employers tie cash compensation to productivity. It was noted that some employers are turning to stock options in lieu of wage increases. Given the risks such private sector plans imply for the individual, for both retirement and health care, a panelist emphasized that employees will need adequate education to make informed decisions. Panelists also made observations about the rise in compensation costs and its current and future implications for business and for employees. One benefits expert stated that if an employer is locked into paying compensation costs that the productivity of its workers cannot support, jobs will go elsewhere. 
A union representative noted that the garment industry has faced international competitors with lower compensation costs, which has led to lowered compensation for U.S. workers and a loss of domestic jobs. It was noted that employers may attempt to remain competitive by cutting wages and benefits for workers, offshoring jobs, and increasing the use of contingent workers, who may not be provided benefits. It was also noted that businesses have concerns about their ability to sustain long-term liabilities associated with certain benefit packages. Experts disagreed on whether or how much of the responsibility for addressing the rise in benefit costs should rest with the public sector. In the view of a union spokesperson, such benefits amount to a social good, something that supports the well-being and overall productivity of society. A union representative noted that employees who have dropped out of health insurance plans, especially employees in lower wage industries, have subsequently relied on public programs in which taxpayers ultimately bear the cost. Other panelists expressed belief in the marketplace as an arbiter of resources and said that government or public benefit models are not a solution to employers’ rising costs for compensation. These panelists suggested competition would eventually resolve the distribution of benefits by winnowing out companies that could not attract the kind of employees needed with the type of compensation they provide. One panelist emphasized that government, therefore, should have limited involvement in the provision of employer-sponsored benefits. A human resources representative suggested that businesses should be allowed to experiment with different means of providing benefits. On the other hand, it was also suggested that future solutions to benefit costs would require both public and private initiative and collaboration. 
A union representative noted that partnerships among employers, workers, and government could begin to address the problem of rising benefit costs. Despite such different viewpoints, most panelists noted that the employer-sponsored system of benefits in its current form may not be sustainable, largely because productivity growth is unlikely to support rising benefit costs. Given this potential unsustainability, they noted that employers and employees will be forced to continue making trade-offs between wages and benefits. Concluding Observations While public policy has focused on the rise of health care costs as it affects today’s retirees, it is apparent that these expenses are also having an effect on current workers and their employers. The growth in real costs is significant, especially given the decrease in participation among those eligible. While a number of factors could influence an employee’s decision not to participate in employer-sponsored benefits, cost is certainly one of them. In the United States, retirement income rests on a proverbial “three-legged stool”: income derived from Social Security, employer-sponsored pension plans, and personal savings—all requiring investment over the working life of the employee. For pensions, the ongoing shift to defined contribution plans will require that Americans become far more educated and resourceful to successfully manage the associated risk. With regard to defined benefit plans, it will be imperative that they not be underfunded, so that current and future retirees are not put at risk and taxpayers are not asked to pay when companies default on their obligations. Rising health care and retirement costs affect both employers and employees. Employers may turn to using more contingent workers, to whom they may not need to pay benefits, and to a workforce overseas. 
From the employees’ perspectives, as the cost of benefits rises, they will be confronted with continued trade-offs in their compensation packages. For the nation itself, health care and retirement are part of a large and growing fiscal challenge. As policy makers deliberate over public policy support for retirees, they will want to be cognizant of the related challenge posed by the trends in the cost and availability of employer-sponsored compensation. Agency Comments We requested comments on a draft of this report from the departments of Labor and Health and Human Services. We received technical comments from the Bureau of Labor Statistics and the Employee Benefits Security Administration at the Department of Labor and from the Agency for Healthcare Research and Quality at the Department of Health and Human Services. We also provided experts with the section of the draft that characterized the exchange at the expert panel. We incorporated comments where appropriate. We are sending copies of this report to the Secretaries of Health and Human Services and Labor, relevant congressional committees, and other interested parties. Copies will be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7003 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix IV. Appendix I: Scope and Methodology To determine recent trends in employers’ total compensation costs and the factors contributing to the trends, we obtained data from the Department of Labor’s Bureau of Labor Statistics (BLS). We used the BLS’ Employer Costs for Employee Compensation (ECEC), which is derived from data collected in the BLS’ National Compensation Survey (NCS). 
Although employers spend funds on benefits and may change the benefit package based on cost increases to control spending, BLS characterizes its survey data as “costs” to employers. As such, we report on costs to employers. NCS data are collected from a sample of establishments and include information about the hourly costs of the components of total compensation for a number of establishment and employee characteristics. Samples are selected using a methodology called probability proportional to employment size, which means that establishments with larger employment have a greater chance of selection. Weights are then applied to produce the estimates. Survey coverage includes private sector establishments with one or more workers and state and local governments with one or more workers. Agriculture, private households, and the federal government are not included in the survey. Our analysis focuses on private sector employers’ hourly costs for total compensation, wages and salaries, and total benefits. Within total benefits, we focus on the three most costly discretionary benefits—paid leave, health insurance, and retirement benefits. Costs are calculated for active workers and do not include costs for retiree benefits. We analyzed data for the period 1991 to 2005. All data are from the first quarter of each year. Those data that were not available from BLS’s online resources were obtained directly from BLS. In the ECEC, costs are measured as the average employer cost per employee hour worked for wages and salaries and total benefits. To control for the effect of inflation, we adjusted all dollars to 2004 terms using the BLS’s Consumer Price Index Research Series (CPI-U-RS) for 2004. The CPI-U-RS presents an estimate of the CPI for all urban consumers from 1978 to 2004 that incorporates most of the improvements made by BLS to the CPI calculations over that time period. 
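The inflation adjustment described above amounts to rescaling each nominal cost by the ratio of the 2004 index value to that year's index value. A minimal sketch, with hypothetical index values (not actual CPI-U-RS data):

```python
def to_2004_dollars(nominal_cost, index_year, index_2004):
    """Convert a nominal cost to constant 2004 dollars using a price
    index, mirroring the CPI-U-RS deflation described above."""
    return nominal_cost * (index_2004 / index_year)

# Hypothetical index values for illustration only:
real_cost_1991 = to_2004_dollars(14.00, index_year=198.1, index_2004=277.4)
```

The same rescaling with the medical-care CPI is used for the health insurance premiums discussed elsewhere in the report, since an all-items index would understate medical inflation.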
We used a z-test to test whether the costs in 2005 were statistically significantly different from the costs in 1991. BLS provided us with the relative standard errors (RSE) for the years 2000 to 2003, which BLS officials contend provide reasonable estimates of the RSEs for the earlier data. To be conservative, we used the highest RSE between 2000 and 2003 in our tests of statistical significance. Our analysis included the following data elements: Total compensation consists of the sum of costs for wages and salaries and total benefits. Wages and salaries are defined as the hourly straight-time wage rate or, for workers not paid on an hourly basis, straight-time earnings divided by the corresponding hours. Straight-time wage and salary rates are total earnings before payroll deductions and include production bonuses, incentive earnings, commission payments, and cost-of-living adjustments. Total benefits include legally required benefits (Social Security, Medicare, federal and state unemployment insurance, and workers’ compensation). Voluntary benefits reflected in the total benefits calculation are paid leave; supplemental pay (overtime and premium pay, shift differentials, and nonproduction bonuses); insurance benefits (life, health, short-term disability, and long-term disability); retirement and savings benefits; and other benefits (severance pay and supplemental unemployment plans). Paid leave includes vacation, holidays, sick leave, and other leave such as personal leave, military leave, and funeral leave. Retirement and savings includes savings and thrift plans, defined benefit plans, and defined contribution plans. Due to a change in the way BLS classifies retirement plans, we report on the broader category of “retirement and savings” in this report. 
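The significance test described above can be sketched as follows, converting each RSE back to a standard error (SE = RSE × estimate) and comparing the z statistic to the 1.96 critical value for roughly 95 percent confidence. The cost figures below are illustrative, not report estimates.

```python
import math

def z_test(est_1991, rse_1991, est_2005, rse_2005, critical=1.96):
    """Two-sample z-test on independent estimates whose uncertainty is
    given as relative standard errors (RSE = SE / estimate).
    Returns (z statistic, significant at ~95 percent confidence)."""
    se_1991 = rse_1991 * est_1991
    se_2005 = rse_2005 * est_2005
    z = (est_2005 - est_1991) / math.sqrt(se_1991 ** 2 + se_2005 ** 2)
    return z, abs(z) > critical

# Illustrative hourly costs with a 2 percent RSE on each estimate:
z, significant = z_test(19.60, 0.02, 22.00, 0.02)
```

Using the largest available RSE, as the methodology describes, inflates both standard errors and so makes the test conservative: a difference flagged as significant would remain significant under any smaller RSE.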
Beginning in 1996, pension and savings plans within existing sampling units were examined to determine which were defined benefit or defined contribution plans and were reclassified as such. Although the old divisions cannot be compared with the new divisions, the overall category of retirement and savings remains comparable. Health insurance includes medical, stand-alone dental, and stand-alone vision coverage. Establishments are defined as single physical locations, such as a factory or a retail store, and may be part of a larger firm. The break-outs for establishment size were provided to us by BLS as small (1 to 99 employees), medium (100 to 499 employees), and large (500 or more employees). Union status is determined separately for each occupation in an establishment. An occupation is considered union if all of the following conditions are met: a labor organization is recognized as the bargaining agent for workers in the occupation; wage and salary rates are determined through collective bargaining or negotiations; and settlement terms, which must include wage provisions and may include benefit provisions, are embodied in a signed, mutually binding collective bargaining agreement. Not all employees need to belong to the union for the occupation to be classified as such. Full-time and part-time status is defined by the establishment reporting the data. Industry sectors and industries are based on the Standard Industrial Classification (SIC) system. The industries within each sector are listed in table 7. The industry definitions in the NCS changed in 2004, making data prior to 2004 not comparable to the newer data. Therefore, we present industry data only for the period 1991 to 2003. We assessed the reliability of the ECEC data by reviewing BLS documentation, interviewing BLS staff, and performing electronic tests to check for outliers or other potential data problems. Based upon these checks, we determined that the data were sufficiently reliable for the purposes of our work. 
To determine whether employees’ costs, participation, or access to benefits have changed, we relied on data from two sources: (1) BLS’s National Compensation Survey (NCS) and (2) the Medical Expenditure Panel Survey-Insurance Component (MEPS IC), administered by the Agency for Healthcare Research and Quality (AHRQ) at the Department of Health and Human Services. BLS uses the NCS to measure the incidence and provisions of selected employer-provided benefit plans. We focused on employee coverage by retirement and savings plans, including defined contribution and defined benefit plans, and on the provision of paid leave benefits. Coverage is not necessarily the same as participation. For example, the NCS produces data on the availability of sick leave, but not on employees’ use of that benefit. In addition, benefits data were not published every year. Data were available between 1991 and 2003. Despite these limitations, we found the data reliable and useful in understanding whether and how employers’ provision of retirement and paid leave benefits has changed over time. We collected these data from various BLS publications. We assessed the reliability of the data by reviewing BLS documentation and interviewing BLS staff. Based on these checks, we determined that the data were sufficiently reliable for the purposes of our work. We used the MEPS IC to provide a detailed analysis of employee access to and participation in employer-provided health insurance. The MEPS IC is an annual survey of establishments that collects information about employer-sponsored health insurance offerings in the United States. MEPS IC data are tabulated by AHRQ, and tables are available for the period 1996 through 2003. MEPS tables include standard errors, which we used to determine the statistical significance of percentage changes over time. We received electronic copies of the MEPS IC tables directly from AHRQ. 
The MEPS IC is derived from a random sample of private-sector business establishments with at least one employee and a sample of state and local government employers. We focused our analysis on the private sector only. The sample contains businesses that existed at the beginning of the sample year and is supplemented with business births through the third quarter of that year. The MEPS IC tables are reported both nationally and for individual states. For our purposes, we focused on the national data only. We analyzed MEPS IC data to determine trends in the percentage of employees at establishments that offer health insurance, the percentage of employees eligible for health insurance at these establishments, the percentage of eligible employees who enroll in the health insurance plans, the average annual premium for employer-provided health insurance for single workers, and the employees’ share of these premiums. To control for the effect of inflation, all premium costs are reported in 2004 dollars using BLS’s Consumer Price Index for Medical Care. Inflation for medical care has been substantial, and using an all-items CPI (such as the CPI-U-RS) would overstate the growth in premium costs. Our analysis included the following data elements. Offer health insurance—whether an establishment makes available or contributes to the cost of any health insurance plan for current employees. Health insurance plan—an insurance contract that provides hospital and/or physician coverage to an employee or retiree for an agreed-upon fee for a defined benefit period, usually a year. Single coverage—health insurance that covers the employee only, also known as employee-only coverage. Employee—a person on the actual payroll; excludes temporary and contract workers, but includes the owner or manager if that person works at the firm. Firm—a business entity consisting of one or more business establishments under common ownership or control, also known as an enterprise. 
A firm represents the entire organization, including the company headquarters and all divisions, subsidiaries, and branches. A firm may consist of a single-location establishment or multiple establishments. In the case of a single-location firm, the firm and establishment are identical. Firm size—the total number of employees for the entire firm as reported on the sample frame. The data were made available in the following break-outs: 1 to 9 employees, 10 to 24 employees, 25 to 99 employees, 100 to 999 employees, and 1,000 or more employees. Union status—employers are asked to identify whether they have union or nonunion employees. Full-time and part-time employee—full-time is defined by the respondent and generally includes employees who work 35 to 40 hours per week; a part-time employee is any employee not defined as full-time by the respondent. Industry categories—the primary business activity as reported by the respondent. The industry categories that we report are based on the Standard Industrial Classification (SIC) codes. These definitions match those used in the ECEC (see table 7). The data were not readily available by industry sector (goods-producing and service-providing). We assessed the reliability of the MEPS IC data by reviewing AHRQ documentation, interviewing AHRQ staff, and performing electronic tests to check for outliers or other potential data problems. Based on these checks, we determined that the data were sufficiently reliable for the purposes of our work. To determine the possible implications of these changes for private systems, we convened a panel of 17 experts representing the human resources field, industries, unions, and academia. Prior to the panel, we provided the experts with a list of discussion questions and the completed data analysis. During the half-day discussion, panelists provided their unique perspectives on the trends we identified and offered comments on the implications of these trends. 
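The inflation adjustment applied earlier to the MEPS IC premium data, restating nominal premiums in 2004 dollars using the CPI for Medical Care, can be sketched as follows. The index values shown are hypothetical placeholders, not the actual BLS series:

```python
# Hypothetical CPI for Medical Care index values; the report uses the
# actual BLS series. Keys are years, values are index levels.
CPI_MEDICAL = {1996: 228.2, 2004: 310.1}  # illustrative, not BLS data

def to_2004_dollars(nominal_cost, year, cpi=CPI_MEDICAL):
    """Restate a nominal premium in 2004 dollars by scaling it up by the
    ratio of the 2004 index level to the index level in `year`."""
    return nominal_cost * cpi[2004] / cpi[year]
```

Deflating with the medical-care index rather than an all-items CPI removes medical-specific inflation, so remaining growth in the adjusted series reflects real change in premium costs.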
We identified the panelists through consultation with internal and external parties who work on the issues covered in this report. We selected individuals who represent a wide variety of entities that address the issue of workers’ benefits and provide a balance of perspectives to help us understand the breadth of opinions on the topic. The panel included the following list of experts. Appendix II: Employers’ Real Hourly Costs for Employee Total Compensation, Wages, and Total Benefits Data represent costs to private employers only. Bold signifies that percentage changes between 1991 and 2005 are statistically significant at the 95 percent confidence level. The growth rates for certain groups of employers may be higher than the aggregated average growth rate due to changes in employment composition and compensation cost levels over time. BLS began using new codes to classify industries with the 2004 data. 
Therefore, 2004 and 2005 data were not comparable to 1991-2003 data by industry. Appendix III: Employers’ Real Hourly Costs for Employee Paid Leave, Retirement Income, and Health Insurance Data represent costs to private employers only. Bold signifies that percentage changes between 1991 and 2005 are statistically significant at the 95 percent confidence level. BLS began using new codes to classify industries with the 2004 data; therefore, 2004 and 2005 data were not comparable to 1991 to 2003 data by industry. Appendix IV: GAO Contacts and Acknowledgments Staff Acknowledgments Patrick di Battista, Assistant Director, and Sara L. Schibanoff, Analyst-in-Charge, managed this assignment. Others who made key contributions throughout the assignment include James Pearce, Jean Cook, and Susan Bernstein. Dan Schwimer provided legal assistance. Marc Molino and Mimi Nguyen provided assistance with graphics.
Because most workers rely primarily on their employers to provide both wages and benefits as part of a total compensation package, the trends in the costs and availability of employer-sponsored compensation have a significant bearing on workers' well-being. Through tax preferences and payroll taxes, federal government policy also has a bearing on employees' access to benefits and on the costs carried by employers. The federal government provides significant tax subsidies for both health insurance plans and qualified retirement plans. In addition, workers and employers are required to pay taxes that fund Social Security and Medicare, programs intended to help provide for workers' economic security and peace of mind in retirement. In this report, GAO examined federal data on private employers' costs for active workers and sought perspectives from 17 experts to identify (1) recent trends in employers' total compensation costs; (2) the composition of those trends; (3) whether employees' costs, participation, or access to benefits have changed; and (4) the possible implications of the changes for private systems. GAO received technical comments from the Departments of Labor and Health and Human Services and from some of the experts GAO consulted. These comments were incorporated as appropriate. Private employers' average real cost of total compensation (comprising wages and benefits) for current workers grew by 12 percent between 1991 and 2005. The real costs of benefits grew by close to 18 percent, while real wages grew by 10 percent. Wages and benefits increased by about the same percentage for most of the period until 2002, after which time real wages began to stagnate and real benefit costs continued to grow. The increase in the cost of a total benefits package from 1991 to 2005 was largely composed of increases in health insurance and retirement income costs. 
Paid leave had been the most costly benefit to employers, but by 2005, the cost of health insurance equaled that of paid leave. In comparison to paid leave and health insurance, retirement income was the least costly, but it grew by an estimated 47 percent. During the time under review, employees' access to most benefits remained stable, but participation rates declined for health benefits as the real dollar amount of the premiums increased. Between 1991 and 2003, roughly half of all workers participated in employer-provided retirement plans. Holidays and vacations were generally available to most workers, but a smaller percentage of workers had access to personal and sick leave. A panel of experts from a variety of backgrounds agreed that rising benefit costs are forcing private employers and their employees to make increasingly difficult trade-offs between wages and benefits. They noted that the employer-sponsored system of benefits in its current form may be unsustainable, largely because productivity growth is unlikely to support the rising costs of some benefits, especially escalating health insurance costs.
Background SSA operates the Disability Insurance (DI) and Supplemental Security Income (SSI) programs—the two largest federal programs providing cash benefits to people with disabilities. The law defines disability for both programs as the inability to engage in any substantial gainful activity by reason of a severe physical or mental impairment that is medically determinable and is expected to last at least 12 months or result in death. In fiscal year 2005, the agency made payments of approximately $126 billion to about 12.8 million beneficiaries and their families. We have conducted a number of reviews of SSA’s disability programs over the past decade, and the agency’s management difficulties were a significant reason why we added modernizing federal disability programs to our high- risk list in 2003. In particular, SSA’s challenges include the lengthy time the agency takes to process disability applications and concerns regarding inconsistencies in disability decisions across adjudication levels and locations that raise questions about the fairness, integrity, and cost of these programs. The process SSA uses to determine that a claimant meets eligibility criteria—the disability determination process—is complex, involving more than one office and often more than one decision maker. Under the current structure—that is, DSI notwithstanding—the process begins at an SSA field office, where an SSA representative determines whether a claimant meets the programs’ nonmedical eligibility criteria. Claims meeting these criteria are forwarded to a DDS to determine if a claimant meets the medical eligibility criteria. At the DDS, the disability examiner and the medical or psychological consultants work as a team to analyze a claimant’s documentation, gather additional evidence as appropriate, and approve or deny the claim. A denied claimant may ask the DDS to review the claim again—a step in the process known as reconsideration. 
If the denial is upheld, a claimant may pursue an appeal with an administrative law judge (ALJ), who will review the case. At this step, the ALJ usually conducts a hearing in which the claimant and others may testify and present new evidence. In making the disability decision, the ALJ considers information from the hearing and from the DDS, including the findings of the DDS’s medical consultant. If the claimant is not satisfied with the ALJ decision, the claimant may request a review by SSA’s Appeals Council, which is the final administrative appeal within SSA. If denied again, the claimant may file suit in federal court. In March 2006, SSA published a final rule to establish DSI, which is intended to improve the accuracy, consistency, and fairness of decision making and to make correct decisions as early in the process as possible. While DDSs will continue to make the initial determination, claims with a high potential for a fully favorable decision will be referred to a new Quick Disability Determination (QDD) process. If the claimant is dissatisfied with the DDS’s initial determination or the QDD determination, the claimant may request a review by a federal reviewing official—a new position to be staffed by centrally managed attorneys. The federal reviewing official replaces the reconsideration step at the DDS level and creates a new level of federal review earlier in the process. The claimant’s right to request a hearing before an ALJ remains unchanged. However, the Appeals Council is eliminated under the new process, and as a result the ALJ’s decision becomes the final agency decision except in cases where the claim is referred to the new Decision Review Board. Claims with a high likelihood of error, or involving new areas of policy, rules, or procedures, are candidates for board review. If the board issues a new decision, it becomes the final agency decision. As before, claimants dissatisfied with the final agency decision may seek judicial review in federal court. 
DSI also includes the introduction of new decision-writing tools that will be used at each adjudication level and are intended to streamline decision making and facilitate training and feedback to staff. In addition, SSA is creating a Medical and Vocational Expert System, staffed by a unit of nurse case managers who will oversee a national network of medical, psychological, and vocational experts, which together are responsible for assisting adjudicators in identifying and obtaining needed expertise. In its final rule, SSA indicated that DSI will further be supported by improvements, such as a new electronic disability system and an integrated, more comprehensive quality system. As noted, the changes introduced by DSI were codified in SSA’s final rule on the subject. Table 1 highlights these new features and associated elements. Implementation of DSI will begin on August 1, 2006, in the Boston region, which includes the states of Connecticut, Massachusetts, Maine, New Hampshire, Rhode Island, and Vermont. Therefore, only those claims filed with SSA in the Boston region on or after August 1 will be subject to the new process. All claims currently in process in the Boston region, and claims filed elsewhere, will continue to be handled under current procedural regulations until SSA takes further action. In addition, for cases filed in the Boston region during the first year of DSI implementation, all ALJ decisions—both allowances and disallowances—will be reviewed by a new Decision Review Board with authority to affirm, modify, reverse, or remand decisions to the ALJ. Since DSI will only affect new claims initiated in the Boston region, claimants whose cases were already in process before August—as well as those filing outside the Boston region—will still have access to the Appeals Council. 
Concerns Include Fear of Increased Court and Claimant Hardship, while SSA Believes Its New Process Will Reduce the Need for Appeal In their written comments to SSA and discussions with us, public and stakeholder groups, such as claimant representatives and disability advocacy groups, expressed two broad areas of concern regarding the replacement of the Appeals Council with the Decision Review Board: (1) the potential for increasing the workload of the federal courts and (2) anticipated hardship for claimants in terms of the loss of an administrative appeal level and the difficulties associated with pursuing their claims in federal court. SSA’s response to concerns regarding the federal court workload is that all changes associated with the new DSI process—taken together—should reduce the need for appeal to the federal courts. At the same time, SSA plans to implement this final step gradually and with additional safeguards to minimize the impact on the courts. In response to concerns about the loss of appeal rights, SSA contends that under the new DSI process, claimants will have a new level of federal review earlier in the process and should experience a decline in the amount of time it takes to receive a final agency decision without being overly burdened by the Decision Review Board. Public and Stakeholders Anticipate a Larger Caseload for Courts, while SSA Maintains That Better Decisions Earlier in the Process Will Reduce the Need for Appeal Concerns expressed in comment letters to SSA and in our interviews revolved largely around the possibility that the replacement of the Appeals Council with the Decision Review Board would result in rising appeals to the federal courts. Specifically, more than half of the 252 comment letters we reviewed indicated that the Appeals Council provides an important screening function for the federal courts, and that its replacement with the Decision Review Board could result in rising caseloads at the federal court level. 
Stakeholder groups with whom we spoke reiterated this concern. With the imminent rollout in the Boston region, several stakeholders suggested that SSA closely monitor the effectiveness of the board and the impact of this change on the federal courts. Data from SSA suggest that the Appeals Council is both screening out a number of cases that might otherwise have been pursued in the federal courts and identifying many claims that require additional agency analysis. Between 2001 and 2005, the number of disability cases appealed to SSA’s Appeals Council rose 13 percent. At the same time, the number of disability cases filed with the federal courts (both DI and SSI) declined 9 percent. Figure 1 illustrates the volume of receipts at both the federal court and the Appeals Council levels during this period. Further, the Appeals Council consistently remanded about 25 percent of the claims it reviewed between 2001 and 2005 for further adjudication by the administrative law judge—see figure 2—providing more evidence that the Appeals Council is identifying a significant number of claims that require additional agency review and modification. SSA believes that the implementation of DSI as an entire process will help it make the correct disability determination at the earliest adjudication stage possible and thereby reduce the need for appeal. According to SSA, several elements of the DSI process will contribute to improved decision making. These include the federal reviewing official position, which presents an enhanced opportunity for the agency to thoroughly review case records—with the assistance of medical and vocational experts—early in the process, as well as new online policy guidance and new tools to aid decision writing, which will be used at each adjudication level to facilitate consistency and help the agency identify and correct errors more quickly. 
Last, SSA believes that the number of requests for voluntary remands that the agency makes to the federal courts is an indicator that the Appeals Council is not fully addressing errors in the case or otherwise reviewing the case effectively so as to prevent the federal courts from reviewing appeals that should have been handled administratively. SSA believes the Decision Review Board will more effectively screen cases from federal court review by focusing on error-prone claims identified through a predictive model. SSA acknowledges that the agency cannot predict the likely impact on the federal courts’ workload and cannot prevent denied claimants from filing suit with the federal courts. To reduce the likelihood of too many appeals reaching the federal court level, SSA stated in its final rule that it is pursuing a gradual rollout by implementing the DSI process in one small region—the Boston region—and plans to have the board initially review all of the ALJ decisions in that region. According to SSA officials, the board’s review of all ALJ decisions will allow them to test the efficacy of the new predictive model, to help ensure that the model is identifying the most error-prone cases that might otherwise find their way to federal court. Further, SSA officials told us that they are working with the federal court system to develop a way to gauge changes in the court’s caseload. Finally, SSA’s internal counsel told us that the agency has begun a systematic data collection process to better understand the circumstances surrounding remands from the federal court. To date, SSA attorneys have analyzed the reasons for federal court remands in more than 1,600 cases, but they are still working on a quality control mechanism to ensure that their information has been entered properly and are therefore unwilling to report on the results of their analysis at this time. 
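The report does not describe how SSA's predictive model works internally. As a purely hypothetical sketch of the general screening technique it names, a review step might score each claim for error risk and route the highest-scoring claims to board review. The claim fields and scoring function below are illustrative assumptions, not SSA's actual model:

```python
def select_for_review(claims, error_score, capacity):
    """Return the `capacity` claims with the highest predicted error risk.

    claims: list of claim records (any structure).
    error_score: callable mapping a claim to a risk score; a fitted
    predictive model is assumed to exist and is not implemented here.
    capacity: how many claims the board can review.
    """
    ranked = sorted(claims, key=error_score, reverse=True)
    return ranked[:capacity]

# Hypothetical claims with precomputed risk scores:
claims = [{"id": 1, "risk": 0.9}, {"id": 2, "risk": 0.1}, {"id": 3, "risk": 0.5}]
picked = select_for_review(claims, lambda c: c["risk"], capacity=2)
```

During the first year in the Boston region this capacity constraint would not apply, since the board plans to review all ALJ decisions precisely to validate whatever scoring model is used.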
Public and Stakeholders Anticipate Increased Hardship for Claimants, but SSA Believes the New Federal Reviewing Official Position Will Improve Decision Making Earlier In their comments on the proposed rule and in subsequent conversations with us, stakeholders expressed concern that eliminating the Appeals Council would cause claimants hardship both by eliminating the opportunity to appeal an ALJ decision to the Appeals Council and by increasing the cost and difficulty associated with pursuing cases in federal court. In particular, 48 percent of the 252 comment letters we reviewed expressed concern that the replacement of the Appeals Council with the Decision Review Board would represent a loss in claimant appeal rights within SSA. These letters, as well as subsequent discussions with stakeholders, emphasized the concern that claimants will not have a say in which cases are reviewed by the board. Further, stakeholders were concerned that in the Boston region, claims allowed at the ALJ level could be overturned by the board, presenting additional hardship for claimants as they await a decision. In addition, claimant representatives and disability advocacy organizations are concerned that appealing at the federal court rather than the Appeals Council level would be costlier and more intimidating for claimants. For example, there is a filing fee associated with the federal courts, and stakeholders commenting on SSA’s final rule said that the filing procedure is more complicated than that required for an appeal before the Appeals Council. In addition, claimants seeking representation must find attorneys who, among other requirements, have membership in the district court bar in which the case is to be filed. 
As a result of these hardships, claimant representatives and disability advocacy organizations, in particular, were concerned that claimants would drop meritorious claims rather than pursue a seemingly complicated and intimidating federal court appeal. About 40 percent of the comment letters asserted that the amount of time the Appeals Council spent adjudicating cases—also referred to as its processing time—has improved recently, and letter writers did not believe that terminating the Appeals Council would improve the adjudicative process. Although SSA has contended that the Appeals Council has historically taken too much time without providing claimants relief, stakeholders’ claims that Appeals Council processing time has decreased significantly in recent years were confirmed by SSA data—see figure 3. In light of these concerns, many stakeholder groups we spoke with suggested that SSA should roll out the Decision Review Board carefully and closely evaluate outcomes from claimants’ perspectives. In their final rule and in conversations with us, SSA officials stated that the new process still affords claimants comparable appeal rights along with the promise of a faster agency decision. Specifically, SSA stated that DSI includes two federal levels of thorough case development and administrative review—one by the new federal reviewing official and another by an ALJ at the hearings level. SSA contends that the new federal reviewing official position is a marked departure from the reconsideration step, in that the position will be managed centrally and staffed by attorneys specifically charged with enhancing the development of a case and working with a new cadre of medical and vocational experts to make decisions. SSA believes that this new position, along with other changes in the new process, will result in many more cases being correctly adjudicated earlier in the process, resulting in fewer decisions appealed and reviewed by ALJs at the hearings level. 
SSA also argues—recent improvements in processing time notwithstanding—that the elimination of the Appeals Council step will reduce the length of time it takes the agency to reach a final decision on behalf of the claimant. Further, SSA maintains that the replacement of the Appeals Council with the board will not be prejudicial to or complicated for the claimant. SSA indicated that claimants will have an opportunity to submit written statements to the Decision Review Board, thus providing another chance to assert their circumstances. SSA maintains that aside from the written statement, further action is not required on the part of the claimant until the board issues its decision. SSA has told us that it plans to monitor stakeholder concerns in several ways. For example, SSA plans to track the length of time it takes to reach final decisions as well as the allowance rate. SSA also plans to review written statements submitted by claimants to help assess the validity of the board’s predictive model. SSA Has Taken Constructive Steps to Implement the New DSI Process, but Its Schedule Is Ambitious and Many Details Are Not Yet Finalized SSA has prepared in significant ways for DSI, but the agency’s timetable is ambitious and substantive work remains. SSA has moved forward in key areas that should underpin the new system—human capital development, technical infrastructure, and quality assurance. However, some essential measures remain under development, particularly for quality assurance. Nevertheless, on balance, the agency has begun to employ a number of change management strategies we recommended earlier for successful transitioning. SSA Has Moved to Hire and Train Staff, but It Faces Short Timetables While stakeholders have expressed concern that SSA will not be able to hire and sufficiently train staff in time for the new process, we found that the agency has taken a number of steps in this area. 
With respect to hiring for new positions, the agency has already developed position descriptions and posted hiring announcements for nurse case managers, who will work in the new Medical and Vocational Expert Unit, as well as for federal reviewing officials. To date, SSA officials have begun assessing more than 100 eligible applicants for the reviewing official slots, and expect to hire 70 by late June and another 43 in early 2007. SSA officials also said they posted announcements to hire nurse case managers, and that they expect to hire as many as 90 before the end of the rollout’s first year in the Boston region. SSA officials also said that the agency has posted announcements to hire support staff for both the reviewing officials and nurse case managers, but the exact number SSA is seeking to hire has not been decided. Several stakeholders we spoke with were particularly concerned that SSA will need to hire or otherwise provide adequate support staff for reviewing officials to ensure their effectiveness. Specifically, several of the ALJs we interviewed told us that at the hearings level, judges and their staff currently spend significant time developing case files. They noted that if the reviewing official position is designed to focus on case development, then attorneys in this role will need support staff to help them with this time-consuming work. With respect to training, the agency has been creating a variety of training materials for new and current staff, with plans to deliver training at different times, in different ways. SSA officials reported working on development of a uniform training package for all staff with some flexible components for more specialized needs. Specifically, about 80 percent of the package is common content for all employees, and 20 percent will be adaptable to train disability examiners, medical experts, ALJs, and others involved in the adjudication process. 
SSA officials said they developed the package with the federal reviewing officials in mind, but also with an eye toward a centralized training content that could apply to current and new staff down the line. SSA plans to provide the full training package, which constitutes about 8 weeks of course work and 13 modules, to reviewing officials in late June, once all attorneys for that position are hired. Among the sessions included are the basics of the disability determination process, eDib and its use, medical listings and their application, and decision writing. Given that the rule was finalized in March and rollout is set for August, agency timetables for hiring, training, and deploying more than 100 new staff—as well as for training existing examiners—in the six states in the Boston region are extremely short. SSA officials have acknowledged the tight time frame, but hope to deliver training by using more than one medium—in person, online, or by video. SSA still expects to accomplish all hiring and training for the Boston region staff in time for an August 1 launch of the new process. SSA Has Readied eDib for the Boston Region, but Time for Resolving Last- Minute Glitches before Rollout Will Be Limited SSA has also taken steps, as we had previously recommended, to ensure that key technical supports, particularly an electronic disability case recording and tracking system known as eDib, are in place in time for Boston staff to adjudicate claims under DSI electronically. The agency has made a variety of efforts to familiarize employees with the system and facilitate their ability to use it as early as possible. First, SSA positioned the Boston region for a fast transition to eDib by reducing the region’s paper case backlog. According to a Boston region ALJ, pending case records are being converted now to familiarize judges and decision writers with the eDib system so they will be comfortable with it when new cases reach that level after August 1. 
Then SSA worked with Boston region staff to certify that the region’s DDS offices were ready for full eDib implementation. According to claimant representatives, SSA has also worked to facilitate their transition to eDib, and according to SSA officials, the agency has developed a system called Electronic Records Express to facilitate medical providers’ submission of records to SSA. A stakeholder group of claimant representatives told us that SSA has offered them training and that they have met regularly with agency staff to smooth out eDib issues, such as difficulties associated with the use of electronic folders—electronic storage devices that replace paper folders as the official record of evidence in a claimant’s case file. This stakeholder group also reported that its members have voluntarily coordinated with SSA to test new techniques that might further facilitate eDib implementation. SSA has also been developing electronic templates to streamline decision writing. ALJs have already received some training on their template, known as the Findings Integrated Template. According to SSA officials, this template is now used, voluntarily, by ALJs nationwide, after months of extensive testing and refinement. For DDS-level decisions, SSA is designing a template—called the Electronic Case Analysis Tool (E-CAT)—which it expects to be partially operational by July and fully implemented by November. DDS examiners in the Boston region will receive training on the tool in July and will also receive training prior to then on the elements of sound decision making. A similar tool is in development for the reviewing officials. While SSA officials expressed confidence in having technical supports sufficiently in place in time for implementation of DSI in August, unanticipated problems associated with new technology may challenge their ability to do so. 
In addition to eDib and E-CAT, SSA is implementing other new software systems to support the rollout (such as the predictive models and electronic medical records transmission)—any one of which may involve unexpected problems. For example, in 2005 we reported that a number of DDSs were experiencing operational slowdowns and system glitches associated with the new eDib system. It remains to be seen whether the Boston region experiences similar problems with eDib, or problems with other new systems, and whether SSA will be able to resolve technical issues that may arise before implementation begins in August. SSA Is Improving Its Quality Assurance System as Part of DSI Rollout, although Key Elements Have Yet to Be Revealed SSA is taking steps to improve its quality assurance system that have potential for improving the accuracy and consistency of decisions among and between levels of review, in a manner that is consistent with our past recommendations. As early as 1999, GAO recommended that in order to improve the likelihood of making improvements to its disability claims process, SSA should focus resources on initiatives such as process unification and quality assurance, and ensure that quality assurance processes are in place that both monitor and promote the quality of disability decisions. Consistent with these recommendations, many of SSA’s current efforts involve adding steps and tools to the decision-making process that promote quality and consistency of decisions and provide for additional monitoring and feedback. While these developments are promising, many important details of SSA’s quality assurance system have yet to be finalized or revealed to us. SSA has recently elevated responsibility for its quality assurance system to a new deputy-level position and office—the Office of Quality Performance. This office is responsible for quality assurance across all levels of adjudication. 
Listed below are new aspects of the quality assurance system that this office oversees and that hold promise for promoting quality and consistency of decisions. SSA will continue to provide accuracy rates for DDS decisions, but these accuracy rates will be generated by a centralized quality assurance review, replacing the agency’s older system of regionally based quality review boards and thereby eliminating the potential differences among regional reviews that were a cause for inconsistent decisions among DDSs. As part of the DSI rollout, SSA plans to incorporate new electronic tools for decision writing to be used by disability examiners, federal reviewing officials, and ALJs. The tools are intended to promote quality in two ways. First, the tools will require decision makers to document the rationale behind decisions in a consistent manner while specifically addressing areas that have contributed to errors in the past, such as failing to list a medical expert’s credentials or inaccurately characterizing medical evidence. Second, the tools will help provide a feedback loop, by which adjudicators and decision writers can learn why and under what circumstances their decisions were remanded or reversed. SSA officials told us that once the tools are in full use, the Office of Quality Performance will collect and analyze their content to identify errors or areas lacking clarity. They also plan to provide monthly reports to regional managers in order to help them better guide staff on how to improve the soundness of their decisions and the quality of their writing. The establishment of the Decision Review Board, with responsibility for reviewing ALJ decisions, is intended to promote quality and consistency of decisions in two ways. First, once DSI is rolled out nationwide, the board will be tasked to review error-prone ALJ decisions with the intent of further ensuring the correctness of these decisions before they are finalized. 
Second, during the initial rollout phase, SSA plans to have the board review all ALJ decisions to verify that the predictive model used to select error-prone cases is doing so as intended. Importantly, both the tools and the board’s assessment are consistent with our prior recommendations that SSA engage in more sophisticated analysis to identify inconsistencies across its levels of adjudication and improve decision making once the causes of inconsistency among them have been identified. In addition to these actions, SSA told us it plans to measure outcomes related to how DSI is affecting claimants, such as allowance rates and processing times at each adjudication stage, and the proportion of cases remanded from the federal courts and the rationales for these remands. Further, officials told us they will work with the federal courts to track changes in their workload. SSA officials also told us they are working to monitor changes in costs associated with the new DSI process, in terms of both the administrative costs of the process, as well as its overall effect on benefit payments. Officials also said that SSA will track the length of time it takes the agency to reach a final decision from the claimant’s perspective, which we have recommended in the past. Although SSA officials told us that ALJ accuracy rates will be generated from the board’s review of all ALJ decisions, they said they were not yet certain how they will measure these rates once DSI is rolled out nationwide and the board is no longer reviewing all ALJ decisions. While these developments are promising, aspects of these changes and of SSA’s plans to monitor the DSI implementation have either not been finalized or not been revealed to us. For example, SSA has not yet revealed the types of reports it will be able to provide decision makers based on the decision-writing tools. 
In addition, while SSA plans to measure the effectiveness of the new process, its timeline for doing so and the performance measures it plans to use have not been finalized. According to SSA officials, potential measures include how well the predictive models have targeted cases for quick decisions at the initial DDS level or error-prone cases for the board, and whether feedback loops are providing information that actually improves the way adjudicators and decision writers perform their work. SSA Has Employed Other Change Management Practices to Implement DSI SSA’s efforts and plans show commitment to implementing DSI gradually, using tested concepts, involving top-level management, and communicating frequently with key stakeholders—practices that adhere closely to our prior recommendations on effective change management practices. With regard to gradual implementation, we had previously suggested that SSA test promising concepts in a few sites to allow for careful integration of the new processes in a cost-effective manner before changes are implemented on a larger scale. SSA’s decision to implement DSI in one small region is consistent with this recommendation. SSA officials told us they selected Boston because it represents the smallest share of cases reviewed at the hearings level and because it is geographically close to SSA’s headquarters to facilitate close monitoring. While SSA officials acknowledged that unanticipated problems and issues are likely to arise with implementation, they assert that they will be able to identify major issues in the first 60 to 90 days. SSA officials believe this will give them plenty of time to make changes before rollout begins in a second region. SSA has also indicated that it plans to roll DSI out next in another relatively small region. Also consistent with our past recommendations, SSA officials noted that some new elements of DSI have been tested prior to integration. 
For example, the ALJ tool for decision writing has been tested extensively during development, and SSA officials anticipate fewer challenges when similar tools are used more widely. In addition, SSA has said that it has rigorously tested its model related to the Quick Disability Determination System and that it will continue to check the selection of cases and monitor the length of time it takes for quick decisions to be rendered. SSA’s efforts and plans are also consistent with effective change management practices in that they ensure the commitment and involvement of top management. Specifically, SSA’s Commissioner first proposed DSI-related changes in September 2003, and the agency began restructuring itself soon after the rule was finalized. In addition, SSA created a deputy-level post for its new Office of Quality Performance and appointed a new Deputy Commissioner in its newly created Office of Disability Adjudication and Review, which oversees the hearing and appeals processes. We have also encouraged top managers to work actively to promote and facilitate change, and SSA appears to be adhering to these principles as well. For example, SSA officials told us that the Deputy Commissioners from SSA’s offices of Personnel and Human Capital have collaborated with their counterparts in policy units to develop position descriptions and competencies for nurse case managers and federal reviewing officials. According to SSA officials, these leaders are also collaborating to develop interview questions for eligible candidates. Further, SSA officials told us their new human capital plan will be released sometime in July and that it will emphasize the goals of DSI, as well as the personnel changes that will accompany it. Finally, SSA’s communication efforts with stakeholders align with change management principles in several respects. 
For example, SSA has employed a proactive, collaborative approach to engaging the stakeholder community both during DSI’s design and in its planning for implementation in order to explain why change is necessary, workable, and beneficial. Even before the notice of proposed rule making on DSI was published, SSA began to meet with stakeholder groups to develop the proposal that would eventually shape the new structure. Then, once the proposed rule was issued, SSA officials told us they formed a team to read and analyze the hundreds of comment letters that stakeholders submitted. In addition, they conducted a number of meetings with external stakeholders to help the agency identify common areas of concern and develop an approach to resolving the issues stakeholders raised before rollout began. According to SSA officials responsible for these meetings, the Commissioner attended more than 100 meetings to hear stakeholder concerns directly. Further, SSA recently scheduled a meeting for early July with claimant representatives to discuss that group’s particular concerns about how the new process will affect their work and their disability clients. SSA officials told us that senior-level staff will lead the meeting and that about 100 claimant representatives from the Boston region will attend. In addition, SSA officials have also worked to ensure that there are open lines of communication with its internal stakeholders, thereby ensuring that disability examiners and staff in the Boston region are knowledgeable about DSI-related changes. For example, SSA solicited comments and questions from the Boston region’s staff about the specifics of the rollout and held a day-long meeting in the region, led by Deputy Commissioners, to respond to these concerns. Concluding Observations For some time, SSA has been striving to address long-standing problems in its disability claims process. 
From our perspective, it appears that SSA is implementing the new claims process by drawing upon many lessons learned from past redesign efforts and acting on, or at least aligning its actions with, our past recommendations. For example, significant aspects of the DSI rollout are consistent with our recommendations to focus resources on what is critical to improving the disability claims process, such as quality assurance and computer support. SSA’s incremental approach to implementing DSI—taking a year to monitor the process and testing new decision-writing tools, for example—is also consistent with our recommendation to explore options before committing significant resources to their adoption. Thus, the agency is positioning itself to make necessary modifications before implementing the new process in subsequent locations. Finally, and fundamental to all of this, SSA’s top leadership has shown a commitment to informing affected stakeholders and listening to their advice and concerns with respect to the development and implementation of this process. While SSA’s steps and plans look promising, we want to stress the importance of diligence and follow-through in two key areas. The first is quality assurance, which entails both effective monitoring and evaluation. A solid monitoring plan is key to helping SSA quickly identify and correct problems that surface in the Boston rollout, because any failure to correct problems could put the entire process at risk. An evaluation plan is critical for ensuring that processes are working as intended and that SSA is achieving its overarching goals of making accurate, consistent decisions as early in the process as possible. The second key area is communication. It is important for SSA’s top leadership to support open lines of communication throughout implementation if the agency is to facilitate a successful transition. 
Failure, for example, to provide useful feedback to staff—many of whom will be new to the agency or at least to the new tools—could significantly jeopardize opportunities for improvement. Just as important, SSA’s top management needs to ensure that the concerns and questions of stakeholders affected by the new process are heard, and that concerned parties are kept apprised of how SSA intends to respond. The eventual elimination of the Appeals Council and its replacement with a Decision Review Board that has a very different purpose has been a significant cause of concern for a number of stakeholders. SSA appropriately plans to assess the impact of this change by tracking decisions resulting from each stage of the new process, as well as the effect of the process on the federal courts’ caseloads and claimants at large. To its credit, SSA plans to reduce any immediate impact on the courts by requiring that the board initially review all ALJ decisions in the Boston region. However, given that the agency plans to rely heavily on new positions, such as the federal reviewing official, and on new technology, SSA will need to ensure that staff are well trained, and that each adjudicator has the support staff needed to work effectively. Focusing on one small region will, it is hoped, allow the agency to ensure that training, technology, and other resources are well developed to achieve expected goals before DSI is expanded to other parts of the country. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other members of the subcommittee may have. GAO Contact For future contacts regarding this testimony, please contact me at (202) 512-7215 or RobertsonR@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
Staff Acknowledgments The following individuals have made major contributions to this statement— Susan Bernstein, Candace Carpenter, Joy Gambino, Michele Grgich, Luann Moy, Daniel Schwimer, and Albert Sim. Appendix I: Objectives, Scope, and Methodology To learn more about the public’s and stakeholders’ views with regard to the Appeals Council and the Decision Review Board, we reviewed and analyzed a large sample of comment letters they submitted to the Social Security Administration (SSA) in response to its July 2005 notice of proposed rule making on the Disability Service Improvement process (DSI) that were related to these topics. We also interviewed a number of key stakeholder groups to solicit their opinions once the rule had been finalized. Reviewing and Analyzing Comment Letters To review and analyze the comment letters, we first downloaded all 1,143 comments that SSA had received and posted to its public Web site. In order to focus our review on only those letters that related to the Appeals Council and the Decision Review Board, we then applied a word search to restrict our analysis to the responses that used the terms “Decision Review Board,” “DRB,” and “Council.” Applying these search terms reduced the number of comment letters for review to 683. We discarded 43 of these 683 letters over the course of our review because they were duplicates of letters by the same authors or did not contain relevant comments. As a result, our final analysis was based on the remaining 640 letters. To classify the nature of the comments contained in these 640 letters, we coded the opinions as related to one or more of the following concerns: The Appeals Council is improving, and its termination will not improve the disability determinations process. There is a risk that the Decision Review Board may not select the most appropriate cases for review. There is a risk that the Decision Review Board could unfairly evaluate or influence administrative law judge decisions. 
In the absence of an Appeals Council, the claimant no longer has the right to initiate subsequent case review. There is no opportunity for the claimant or his or her representative to argue before the Decision Review Board. A claimant’s receipt of benefits might be protracted or delayed during Decision Review Board assessment. Petitions to the federal court are likely to increase. Appeals to the federal court are costly or intimidating, and claimants may not have the wherewithal to pursue the claim at this level. Of the 640 letters in our review, we initially identified 388 as form letters, or letters containing identical comments, even though they had different authors. To simplify our review, we coded these form letters separately from the other letters. For the 252 letters that we did not initially identify as form letters, one analyst reviewed and coded each letter, while a second analyst verified that he or she had coded the statements appropriately. If the first and second analysts did not come to an agreement, a third analyst reviewed the comment and made the final decision on how the content should be classified. Table 2 below indicates the percentage of the 252 letters citing one or more of the above concerns. For the 388 form letters, we coded one letter according to the process described above. Because the text of the form letters was identical for each, we then applied the same codes to each of the other form letters. All 388 form letters expressed each of the concerns above. Identifying and Interviewing Stakeholders To identify key stakeholders, we first referenced the list of organizations that SSA included in its notice of proposed rule making as having met with the agency during its development of the final rule. We then narrowed this list by obtaining suggestions from SSA officials about organizations that are the most active and cover a broad spectrum of disability issues. 
In total, we spoke with representatives from 10 groups: Administrative Office of the U.S. Courts’ Judicial Conference Committee on Federal-State Jurisdiction, Association of Administrative Law Judges (AALJ), Consortium for Citizens with Disabilities’ Social Security Task Force (CCD), National Association of Councils on Developmental Disabilities, National Association of Disability Examiners (NADE), National Association of Disability Representatives (NADR), National Council of Disability Determination Directors (NCDDD), National Council of Social Security Management Associations, National Organization of Social Security Claimants’ Representatives (NOSSCR), and Social Security Advisory Board. Related GAO Products This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In March 2006, the Social Security Administration (SSA) published a rule that fundamentally alters the way claims for disability benefits are processed and considered. The rule establishes the Disability Service Improvement process (DSI)--intended to improve the accuracy, timeliness, consistency, and fairness of determinations. DSI's changes include an opportunity for an expedited decision during the initial determination process and the elimination of the Appeals Council, which had given claimants the right to appeal administrative law judge (ALJ) decisions before pursuing federal court review. DSI replaces the council with a Decision Review Board, which will selectively review ALJ decisions. However, dissatisfied claimants whose cases are not selected for board review must now appeal directly to the federal courts. Based on its ongoing work, GAO was asked to testify on (1) public and stakeholder concerns about the elimination of the Appeals Council and its replacement by the Decision Review Board and SSA's response to these concerns, as well as (2) the steps that SSA has taken to help facilitate a smooth implementation of the DSI process. Concerns regarding the replacement of the Appeals Council with the Decision Review Board--raised by the public and stakeholder groups, such as claimant representatives--generally fall into two areas: (1) potential for increasing the workload of the federal courts and (2) anticipated hardship for claimants in terms of the loss of an administrative appeal level and difficulties associated with pursuing their claim in federal court. SSA's response to concerns regarding the federal court workload is that all changes associated with the new DSI process--taken together--should reduce the need for appeal to the federal courts; at the same time, SSA plans to implement this final step gradually and with additional safeguards to minimize impact on the courts. 
In response to concerns about the loss of appeal rights, SSA contends that DSI introduces enhanced levels of federal review earlier in the process and that claimants should experience a decline in the amount of time it takes to receive a final agency decision. SSA has prepared in significant ways for the initial rollout of DSI in its Boston region, but the agency's timetable is ambitious and much work remains. The agency has moved forward in key areas that underpin the new system--human capital development, technical infrastructure, and quality assurance--taking actions consistent with past GAO recommendations for improving the disability determination process. For example, SSA has taken steps to ensure that key technical supports, particularly its electronic disability case processing system, are in place--even though it has allowed itself little time to address and resolve any glitches that may arise prior to implementation. SSA has also taken several steps to lay a foundation for quality assurance by centralizing its quality assurance reviews, establishing a Decision Review Board for reviewing decisions, and developing writing tools that should foster consistency and thorough documentation at all phases of the determination process. Further, we found that SSA's decision to implement DSI first in one small region prior to its introduction nationally is a good change management strategy that reflects our earlier recommendations. Additionally, SSA has taken a proactive, collaborative approach to both the design and the implementation of the new determination process. Nevertheless, key facets of SSA's plan to monitor and evaluate the Boston rollout remain to be developed. For example, performance measures for assessing the execution of the rollout are still unclear to us, and mechanisms for delivering feedback to staff on the clarity and soundness of their decision writing have not yet been fully developed.
Background Since the September 11, 2001, terrorist attacks on the United States, DOD has launched three major military operations requiring significant military personnel: Operation Noble Eagle, which covers military operations related to homeland security; Operation Enduring Freedom, which includes ongoing military operations in Afghanistan and certain other countries; and Operation Iraqi Freedom, which includes ongoing military operations in Iraq. These military operations have greatly increased the operations and personnel tempo of the military services, especially the Army and Marine Corps, which have borne the bulk of the personnel burden associated with operations in Iraq. Additionally, a significant number of military personnel have been killed or wounded in Iraq. Many congressional and military observers have expressed concern that the current operations tempo, combined with the level of casualties in Iraq, might lead to lower recruiting and retention rates, thereby raising questions about DOD’s ability to sustain long-term force requirements. In addition, there are growing concerns that a number of stress factors, such as back-to-back and/or lengthy overseas deployments and heavier reliance on the reserve components in the Army and Marine Corps, may significantly hinder DOD’s overall ability to effectively recruit and retain forces. According to DOD officials, recruiting is the military services’ ability to bring new members into the military to carry out mission essential tasks in the near term and to begin creating a sufficient pool of entry-level personnel to develop into future mid-level and upper-level military leaders. To accomplish this task, active, reserve, and Guard components set goals for accessions, or new recruits, who will enter basic training each year. 
To assist in recruiting, the military services advertise on television, on radio, and in print and participate in promotional activities, such as sports car racing events. In response to some of the services missing their overall recruiting goals in the late 1990s, DOD increased its advertising, number of recruiters, and financial incentives. Our September 2003 report assessed DOD’s recruiting advertising programs, and concluded that DOD did not have clear program objectives and adequate outcome measures to evaluate the effectiveness of its advertising. We recommended, and DOD agreed, that measurable advertising objectives should be established and outcome measures should be developed to evaluate advertising programs’ performance. The term retention used by DOD refers to the military services’ ability to keep personnel with the necessary skills and experience. Servicemembers have the opportunity to either leave the military or reenlist when their contracts expire. A common retention concern is that too few people with the needed skills and experience will stay in the military, thereby creating a shortage of experienced personnel, decreased military efficiency, and lower job satisfaction. Although the services have each created their own unique means of tracking retention, they all measure retention in a career path at key points that are delineated by various combinations of years of service and number of enlistments. The Army and Marine Corps set numerical retention goals; the Air Force and Navy state their retention goals in terms of percentages of those able to reenlist. Military Components Generally Met Overall Recruiting and Retention Goals for the Past 5 Fiscal Years (2000-2004), but Some Components Have Missed Early 2005 Goals The military components generally met their overall recruiting and retention goals over the past 5 fiscal years. 
However, some are beginning to experience difficulties in meeting their overall recruiting and retention goals for fiscal year 2005. Most Overall Recruitment Goals Were Met for Past 5 Years, but Army and Marine Corps Experienced Recruiting Shortages Early This Year According to DOD data, the active and reserve components generally met their enlisted aggregate recruiting goals for fiscal years 2000 to 2004. However, it should be noted that the “stop loss” policy implemented by several components shortly after September 11, 2001, might have helped these components meet their overall recruiting goals for fiscal year 2002 and beyond. A “stop loss” policy requires some servicemembers to remain in the military beyond their contract separation or retirement date. Keeping servicemembers on active duty longer can reduce the number of new people the services need to recruit to maintain endstrength. For example, the Army, which has implemented some form of “stop loss” since December 4, 2001, has required several thousand servicemembers to remain on active duty beyond their contractual separation or retirement date. The recruiting data presented in table 1 show that in fiscal year 2004, the Army, Navy, and Air Force actually exceeded their goals, each achieving a 101 percent rate. More recently, however, the Marine Corps and Army failed to meet recent monthly recruiting goals. The Marine Corps missed its January goal of 3,270 new recruits by 84 people, or 2.6 percent, and narrowly missed its goal again in February. This is the first time that the Marine Corps has missed a monthly recruiting goal since 1995. The Army is also beginning to experience difficulties and, in February 2005, missed its goal of 7,050 new recruits by 27.5 percent, or 1,936 recruits. 
This is significant, given that the Army has also called members of the Individual Ready Reserve into active duty and moved thousands of recruits from its delayed entry program into basic training ahead of schedule. Air Force and Navy overall recruiting goals, on the other hand, do not appear to be in jeopardy at this time, as both services intend to reduce their endstrengths. Over the next year the Air Force plans to downsize by about 20,000 personnel, and the Navy is looking to trim more than 7,300 sailors. Table 2 shows that four of the six DOD reserve components generally met their enlisted aggregate recruiting goals for fiscal years 2000 through 2004 but that the Army National Guard achieved only 82 percent of its recruiting objective in fiscal year 2003 and 87 percent in fiscal year 2004, and that the Air National Guard achieved 94 percent of its recruiting objective in fiscal year 2004. First quarter 2005 reserve and Guard recruiting data suggest that the reserve components may experience difficulties in meeting their early 2005 overall recruiting goals. The Marine Corps Reserve, which achieved 106 percent of its overall first quarter 2005 recruiting goals, is the only reserve component that has met or surpassed its goal so far this year. The Army Reserve and Army National Guard achieved 87 and 80 percent of their overall recruiting goals, respectively. The Air Force Reserve achieved 91 percent of its overall recruiting goal; the Air National Guard, 71 percent; and the Navy Reserve, 77 percent. DOD has noted that the Army Reserve components will be particularly challenged, since more active Army soldiers are staying in the active force, and of those leaving, fewer are joining the reserve components. Most Overall Retention Goals Met for Past 5 Years According to DOD data, the four active components generally met their enlisted aggregate retention goals from fiscal year 2000 through fiscal year 2004. 
However, as I stated in the discussion on recruiting, it should also be noted here that the services’ “stop loss” policies implemented shortly after September 11, 2001, might have helped the services meet their aggregate retention goals since fiscal year 2002. In addition, the Army generally reduced its overall retention goals from fiscal year 2000 through fiscal year 2003. Table 3 shows that the Army is the only active component that met all of its retention goals for fiscal years 2000 through 2004. Table 3 also shows that, in fiscal year 2004, the Navy missed its retention goal for initial reenlistments by just less than 2 percentage points and the Air Force missed its goal for midcareer term reenlistments by 5 percentage points. In fact, the Air Force missed this goal in 4 of the past 5 fiscal years and missed its goal for career third term or subsequent reenlistments in 2000 and 2001. The Navy missed its goal for reenlistments among enlisted personnel who have served from 10 to 14 years in 2 of the past 5 fiscal years, and the Marine Corps missed its goal for second and subsequent reenlistments in fiscal year 2003 only. For the first quarter of fiscal year 2005, data show that the Army missed its initial reenlistment goal for active duty enlisted personnel by 6 percent and its midcareer reenlistment goal by 4 percent. The Air Force also missed two of its reenlistment goals for active duty enlisted personnel in the first quarter of fiscal year 2005. The Air Force achieved a reenlistment rate of 50 percent for second-term reenlistments, compared with its goal of 75 percent, and a reenlistment rate of 92 percent for career reenlistments, compared with its goal of 95 percent. The Air Force also established a goal for 55 percent of all personnel eligible for a first-term reenlistment to reenlist and missed this goal by just 1 percentage point. 
We are continuing to collect, analyze, and assess the reliability of retention data for both the active and reserve components, which we will incorporate into our final report. Aggregate Recruitment and Retention Data Do Not Identify Over- or Under-staffing within Certain Military Occupations Recruitment and retention rates, when shown in the aggregate, do not provide a complete representation of occupations that are either over- or under-filled. For example, our analysis of fiscal year 2005 Army data on its 185 active component enlisted occupations shows that 116 occupations, or 63 percent, are currently overfilled and that 60 occupations, or 32 percent, are underfilled. Also, the Marine Corps told us that, of its 255 active component enlisted occupations, 52 occupations, or 20 percent, are overfilled and 37 occupations, or 15 percent, are underfilled. Data provided by the Navy show that 32 enlisted occupations are overfilled and 55 occupations are underfilled. According to the Congressional Budget Office, about 30 percent of the occupations for enlisted personnel experienced shortages and about 40 percent experienced overages, on average, from fiscal year 1999 through fiscal year 2004. We requested that the active, reserve, and Guard components provide us with their lists of hard-to-fill occupations. On the basis of data for 7 of 10 components, we identified several hundred occupations that have been designated as hard-to-fill because the components had not been able to successfully recruit and retain sufficient numbers of personnel in these areas to meet current or projected needs. Of these, we identified 73 occupations as being consistently hard to fill. Table 4 shows these 73 hard-to-fill occupations, by component. More specifically, we asked DOD to provide us with the current hard-to-fill occupations for active duty components, and we received data for the Army, Navy, and Air Force. 
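The over- and under-fill shares above are simple proportions of each service's total enlisted occupations; a sketch of that calculation with the counts reported here:

```python
def fill_share(count: int, total: int) -> int:
    """Share of a service's enlisted occupations, as a whole percentage."""
    return round(100 * count / total)

# Army active component, fiscal year 2005: 185 enlisted occupations
army_over = fill_share(116, 185)     # ~63 percent overfilled
army_under = fill_share(60, 185)     # ~32 percent underfilled

# Marine Corps active component: 255 enlisted occupations
usmc_over = fill_share(52, 255)      # ~20 percent overfilled
usmc_under = fill_share(37, 255)     # ~15 percent underfilled
```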
The Marine Corps currently does not report hard-to-fill occupation information to DOD. Table 5 shows the extent to which these occupations were over- or under-filled as of November 2004. Further analysis of the data shows that 7 of the Army’s occupations (infantry, fire support specialist, cavalry scout, chemical operations specialist, motor transport operator, petroleum supply specialist, and food service specialist) and 6 of the Air Force’s occupations (airborne linguist; combat control; imagery analysis; linguist; SERE [survival, evasion, resistance, escape operations]; pararescue; and explosive ordnance disposal) are on both the services’ “hard-to-recruit” and “hard-to-retain” lists. DOD’s Components Are Taking Steps to Address Recruiting and Retention Challenges DOD has made enhancements to existing programs and introduced new programs in recent years to improve its ability to recruit and retain servicemembers. These programs include increasing the eligibility for and size of enlistment and reenlistment bonuses and educational benefits, and increasing the number of recruiters. DOD, for example, expanded the pool of servicemembers who are eligible to receive a selective reenlistment bonus. Selective reenlistment bonuses are designed to provide an incentive for an adequate number of qualified midcareer enlisted members to reenlist in designated critical occupations where retention levels are insufficient to sustain current or projected levels necessary for a service to accomplish its mission. The statutory authority for this bonus was amended in the National Defense Authorization Act for Fiscal Year 2004 to allow the Secretary of Defense to waive the “critical skill” requirement for members who reenlist or extend an enlistment while serving in Afghanistan, Iraq, or Kuwait in support of Operations Enduring Freedom and Iraqi Freedom. 
In addition, in February 2005, DOD announced a new retention bonus for Special Operations Forces personnel (Army Special Forces; Navy SEALs; and Air Force pararescue, plus a few other specialties) who decide to remain in the military beyond 19 years of service. The largest bonus, $150,000, will go to senior sergeants, petty officers, and warrant officers who sign up for an additional 6 years of service. Personnel who sign up for shorter extensions will receive smaller bonuses; personnel who extend for 1 additional year, for example, will receive $8,000. Individual components have also implemented changes. The Army, for instance, increased the cash bonuses it offers to new recruits in hard-to-fill military occupations to as much as $20,000. In December 2004, the National Guard announced that it is increasing its initial enlistment bonuses from $8,000 to $10,000 for individuals without prior service who sign up for one of the National Guard’s top-priority military occupations, such as infantry, military police, and transportation. DOD officials also said the Army and the National Guard are increasing the amount of their college scholarship funds for new enlistees. The Army increased its maximum college scholarship from $50,000 to $70,000, while the Army National Guard doubled the amount it will provide to repay a recruit’s student loans, to $20,000. Finally, the Army and Marine Corps are increasing their recruiting forces to meet their additional recruiting challenges. The Army plans to add 965 recruiters to its recruiter force in fiscal year 2005, for a total force of 6,030 recruiters, and the Marine Corps plans to add 425 recruiters to its recruiter force by fiscal year 2007, bringing its total to 3,025 recruiters. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have at this time. 
Contacts and Staff Acknowledgments For questions about this statement, please contact Derek B. Stewart at (202) 512-5559 (e-mail address: Stewartd@gao.gov) or David E. Moser at (202) 512-7611 (e-mail address: Moserd@gao.gov). Individuals making key contributions to this testimony included Alissa H. Czyz, Joseph J. Faley, Brian D. Pegram, and John S. Townes. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To meet its human capital needs, the Department of Defense (DOD) must convince several hundred thousand people to join the military each year while, at the same time, retaining thousands of personnel to sustain its active duty, reserve, and National Guard forces. Since September 11, 2001, DOD has launched three major military operations requiring significant military personnel--Operation Noble Eagle, Operation Enduring Freedom, and Operation Iraqi Freedom. The high pace of military operations, combined with the level of casualties in Iraq and other factors such as lengthy overseas deployments, has raised concerns about DOD's ability to recruit and retain sufficient numbers of personnel who possess the skills and experience needed. This testimony presents GAO's preliminary findings on (1) the extent to which the active duty, reserve, and Guard components have met their overall recruiting and retention goals, (2) the degree to which the components have met their recruiting and retention goals for selected hard-to-fill critical occupations, and (3) steps the components have taken to enhance their recruiting and retention efforts. This testimony focuses on enlisted personnel. In continuing its work, GAO will assess the reliability of DOD-provided data and plans to issue a report on these issues this fall. DOD's 10 military components generally met their overall recruitment and retention goals for each of the past 5 fiscal years (FY), but some of the components experienced difficulties in meeting their overall goals in early FY 2005. However, it should be noted that several components introduced a "stop loss" policy shortly after September 11, 2001. The "stop loss" policy requires some servicemembers to remain in the military beyond their contract separation date, which may reduce the number of personnel the components must recruit. During FY 2000-2004, each of the active components met or exceeded its overall recruiting goals. 
However, for January 2005, the Marine Corps missed its overall active duty recruiting goal by 84 recruits and narrowly missed its goal again for February 2005. The Army also missed its overall recruiting goal for February 2005 by almost 2,000 recruits. This is significant, given that the Army has already called up members from the Individual Ready Reserve and moved new recruits from its delayed entry program into basic training earlier than scheduled. Four of the six reserve components mostly met their overall recruiting goals for FYs 2000 through 2004, but many experienced difficulties in early FY 2005. DOD has noted that the Army Reserve components will be particularly challenged, since fewer active Army soldiers leaving active duty are joining the reserves. In terms of retention, the active components generally met their overall retention goals for the past 5 FYs. The Army, for example, met or exceeded overall retention goals from FY 2000 through FY 2004. The Army and the Air Force, however, missed retention goals in the first quarter of FY 2005. Overall recruitment and retention data do not provide a complete representation of military occupations that are either over- or under-staffed. For example, GAO's analysis of early FY 2005 data shows that 63 percent of the Army's active component specialties are overfilled and 32 percent are underfilled. Also, several hundred hard-to-fill occupations exist within the 10 DOD components. GAO identified 73 occupations that have been consistently designated as hard-to-fill occupations. GAO's analysis also shows that 7 of the Army's current occupations (e.g., infantry and cavalry scout) and 6 of the Air Force's current occupations (e.g., combat control and linguist) are on both their "hard-to-recruit" and "hard-to-retain" lists. DOD's components have been taking a number of steps to enhance their recruiting and retention efforts. 
For example, DOD has expanded eligibility for selective reenlistment bonuses and has also begun offering reenlistment bonuses of as much as $150,000 to special operation forces personnel with 19 or more years of experience who reenlist for an additional 6 years. The Army increased the amount of cash bonuses it offers to new recruits in hard-to-fill military occupations to as much as $20,000. The Army also increased its maximum college scholarship from $50,000 to $70,000. In addition, the Army plans to add 965 recruiters in FY 2005, and the Marine Corps plans to add 425 recruiters by FY 2007.
Background The U.S. district courts are the trial courts of the federal court system. There are 94 federal judicial districts—at least one for each state, the District of Columbia, and four U.S. territories—organized into 12 regional circuits. Each circuit has a court of appeals whose jurisdiction includes appeals from the district and bankruptcy courts located within the circuit, as well as appeals from decisions of federal administrative agencies. The Administrative Office of the United States Courts (AOUSC) within the judicial branch carries out a wide range of services for the federal judiciary, including capital-planning. The Judicial Conference of the United States (Judicial Conference), which supervises the Director of the AOUSC, is the principal policy-making body for the federal judiciary and recommends national policies and legislation on all aspects of federal judicial administration. Federal courthouses can house a variety of appellate, district, senior district, magistrate, or bankruptcy judges as well as other court and non-court-related tenants. Prior to 2008, the judiciary did not require judges to share courtrooms, except in situations where the courthouse was out of space. The Judicial Conference subsequently adopted policies requiring courtroom-sharing (1) between senior district judges and (2) between magistrate judges. In 2011, the Judicial Conference adopted a courtroom-sharing policy for bankruptcy judges. These policies apply to new courthouse projects and existing courthouses when there is a new space need that cannot otherwise be accommodated. (See app. II for more information on the judiciary’s courtroom-sharing policies.) The judiciary has also been studying the feasibility of an appropriate sharing policy for district judges in courthouses with more than 10 district judges, but has not yet finalized a policy and could not tell us when or if it expected to do so. 
Our 2010 report examined judiciary data on courtroom usage and found that there are additional opportunities for significant cost savings through courtroom-sharing, particularly for district judges. Appellate judges, however, have always shared courtrooms because they sit in panels of three or more. Under the judiciary’s previous capital-planning process, a project’s urgency was based on factors that included security and operational deficiencies in the existing courthouse and the number of judges who do not have a permanent courtroom and chambers in the existing courthouse, both currently and as projected over the 10-year planning period. From fiscal years 2005 to 2006, as a cost containment initiative, the judiciary imposed a moratorium on new courthouse construction while it reevaluated its capital-planning process. In 2008, the judiciary began using a new capital-planning process, called the Asset Management Planning (AMP) process, to assess, identify, and rank its space needs. According to judiciary officials, the AMP process addresses concerns about growing costs and incorporates best practices related to capital-planning. The AMP process includes several steps, beginning with the completion of a district-wide Long Range Facilities Plan (LRFP). Collectively, the AMP process documents courthouse space conditions and district space needs based, in part, on the judiciary’s AMP process rules and building standards as specified in the U.S. Courts Design Guide; identifies space needs on a building-specific and citywide basis; and develops housing strategies that can include construction of a new courthouse or annex and renovation projects. The AMP process results in an urgency score for construction or renovation based primarily on the current and future need for courtrooms and chambers and the condition assessment of the existing building (see app. III). 
The AMP process establishes criteria for qualifying for new courthouse construction, such as requiring that an existing courthouse have a chamber for each judge and need two or more additional courtrooms. Judiciary officials told us that unlike the previous capital-planning process, a new courthouse could no longer be justified as part of the AMP process based solely on security or operational deficiencies. The judiciary has chosen to improve security within existing courthouses rather than replace them with new courthouses. After the Judicial Conference identifies courthouse projects, GSA conducts feasibility studies to assess alternatives for meeting the judiciary’s space needs and recommends a preferred alternative. The judiciary adopts the GSA-recommended alternative, which may differ from the alternative recommended in the AMP process. For example, a project may not qualify for new courthouse construction under the AMP process, but GSA may determine through its feasibility study that new construction is the most cost-efficient, viable solution. See figure 1 for the judiciary’s current process for selecting and approving new courthouse construction projects. Part of the judiciary’s capital-planning—both the previous and current processes—has been to periodically communicate its facility decisions for construction projects via a document known as the Five Year Courthouse Project Plan (5-year plan). The 5-year plan is a one-page document that lists proposed projects by fiscal year and the estimated costs for various project phases (site acquisition, design, or construction) as approved by the Judicial Conference. The judiciary uses the plan to communicate its most urgent projects to Congress and other decision makers. Previously, we found that the judiciary’s 5-year plans did not reflect the most urgently needed projects and lacked key information about the projects selected—such as a justification for the project’s priority level. 
GSA reviews its courthouse studies with the judiciary and forwards approved projects for new courthouses to the Office of Management and Budget (OMB) for review. If approved by OMB, GSA then submits requests to congressional authorizing committees for new courthouse projects in the form of detailed descriptions, or prospectuses, authorizing acquisition of a building site, building design, and construction. Following congressional authorization and the appropriation of funds for the projects, GSA manages the site, design, and construction phases. After occupancy, GSA charges federal tenants, such as the judiciary, rent for the space they occupy and for their respective share of common areas, including mechanical spaces. In fiscal year 2012, the judiciary’s rent payments to GSA totaled over $1 billion for approximately 42.4 million square feet of space in 779 buildings that include 446 federal courthouses. Before Congress makes an appropriation for a proposed project, GSA submits detailed project descriptions called prospectuses to the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure, for authorization by these committees when the proposed construction, alteration, or acquisition of a building to be used as a public building exceeds a specified dollar threshold. For purposes of this report, we refer to these committees as “authorizing committees” when discussing the submission of the prospectuses and providing additional information relating to prospectuses to these committees. Furthermore, for purposes of this report, we refer to approval of these projects by these committees as “congressional authorization.” See 40 U.S.C. § 3307. Judiciary officials stated that the judiciary would not finish evaluating all of the courthouses until October 2015 and would take another 18 to 24 months to complete the LRFPs, dependent upon the availability of funding. 
AMP Process Partially Aligns with Several Leading Practices but Does Not Provide Needed Information to Decision Makers AMP Process Partially Aligns with Several Leading Practices in Capital Planning The AMP process, which the judiciary has applied to about 67 percent of its courthouses, represents progress by the judiciary in aligning its capital-planning process with leading capital-planning practices, but the document the judiciary uses to request courthouse construction projects lacks transparency and key information. We have previously reported that prudent capital-planning can help agencies maximize limited resources and keep capital acquisitions on budget, on schedule, and aligned with mission needs and goals. Figure 2 summarizes leading capital-planning practices and our assessment of the extent to which the AMP process aligns with those practices. For our analysis of the judiciary’s planning practices, we focused on the judiciary’s implementation of the concepts that underlie the planning phase of OMB and GAO guidance, including linking capital-planning to an agency’s strategic goals and objectives and developing a long-term capital investment plan. Several aspects of the AMP process partially align with leading capital-planning practices, but none fully aligns, and the 5-year plan aligns only to a limited extent—which we discuss further in this report. Here are some examples to illustrate partial alignment: Strategic Linkage. The judiciary’s strategic plan links to its management of capital assets, but the AMP process does not link to the strategic plan. 
For example, the AMP process documents we reviewed did not explain how the process helps achieve the goals and objectives in the judiciary’s current strategic plan, which are organized around seven issues: providing justice; the effective and efficient management of public resources; the judiciary workforce of the future; harnessing technology’s potential; enhancing access to the judicial process; the judiciary’s relationships with the other branches of government; and enhancing public understanding, trust, and confidence. However, after our review, a judiciary official told us that the Long Range Facilities Plans (LRFPs) currently under development would include a reference to the strategic plan. Needs Assessment and Gap Identification. The AMP process has improved the judiciary’s needs assessment and gap analysis by establishing a comprehensive, nationwide 328-factor study for every courthouse, whereas the previous process was not as comprehensive and only assessed courthouses when requested by a local judicial district. The AMP process evaluates the degree to which existing facilities support court operations by applying space functionality standards, security, and building condition factors. However, cost estimates supporting the judiciary’s needs are incomplete, as discussed later in this report. Alternatives Evaluation. The AMP process establishes a review and approval framework with criteria for justifying new construction, whereas none existed in the previous process. The AMP process evaluates some alternatives, such as renovating existing courthouses to meet needs, but it is unclear if the judiciary considered other options, such as courtroom-sharing in the existing courthouse. Assessing a wide range of alternatives would help the judiciary ensure that it evaluated other, less costly, approaches to bridging the performance gap before recommending new construction. Review and Approval Framework with Established Criteria for Selecting Capital Investments. 
The AMP process includes a review and approval framework with criteria, such as courthouses needing two or more courtrooms to qualify for a new courthouse project. However, courtroom deficits are not apparent in most projects reported in the 5-year plan. Long-Term Capital Investment Plan. Judiciary officials with whom we spoke agreed that the 5-year plan is not a long-term capital investment plan, but it is what the judiciary uses to document its request for new courthouse construction to decision makers. The one-page 5-year plan document does not reflect the depth of the AMP process, describe all other projects that the judiciary considered, or indicate how the projects chosen will help fulfill the judiciary’s mission, goals, and objectives. Two courthouse projects illustrate how the AMP process has changed the way the judiciary evaluates its need for new courthouses. Specifically, two projects listed on a previous 5-year plan (covering fiscal years 2012 through 2016) were re-evaluated under AMP—San Jose, California, and Greenbelt, Maryland. Both had ranked among the top 15 most urgent projects nationwide under the previous capital-planning process, and as such, the judiciary prioritized them for new construction in 2010. However, after the judiciary evaluated the San Jose and Greenbelt projects under the AMP process, their nationwide rankings fell to 117 and 139, respectively. Judiciary officials explained that this drop was largely because of the completion of additional AMP assessments, coupled with the reduced space needs because of courtroom-sharing. Following the change in rankings, GSA and the judiciary determined that judiciary’s needs could alternatively be addressed through repair and alteration projects that reconfigure existing space. The judiciary added that its decision saved taxpayer money. As a result, at the request of the judiciary, the Judicial Conference of the United States removed the two projects from the 5-year plan. 
Current 5-Year Plan Lacks Transparency, and $1-Billion Cost Estimate Is Not Comprehensive The judiciary’s current 5-year plan—the end product of the judiciary’s capital-planning process—does not align with leading practices for a long-term capital investment plan in a number of ways. The plan does not provide decision makers with detailed information about proposed construction projects or how they were selected. The one-page document lists each project by city name, year, and dollar estimate for the next phase of the project’s development as shown in figure 3. The one-page plan also provides the project’s urgency score from the judiciary’s capital-planning process. However, the document does not specify whether the scores were developed under the old process or the AMP process. Unlike a long-term capital investment plan—usually the end product under leading capital-planning practices—the 5-year plan lacks complete cost and funding information, linkage to the judiciary’s strategic plan, and information on why projects were selected. Specifically, while courthouses provide facilities for the judiciary to accomplish goals set out in its strategic plan, such as enhancing access to the judicial process, the 5-year plan contains no mention of the strategic plan. In addition, the 5-year plan does not include a discussion of the AMP process and criteria; a schedule of when the AMP process will be completed; and details on the alternatives considered during the process, such as whether the judiciary’s courtroom-sharing policy was applied prior to requesting a new courthouse project. The 5-year plan is not transparent and does not provide key funding information, such as total estimated project costs. Specifically, it lists about $1.1 billion in estimated costs, which are the funds needed for that specific 5-year period. However, these costs only include part of the project phases. 
The estimated cost of all project phases—site acquisition, building design, and construction—comes to $1.6 billion in 2013 dollars. In addition, while no longer included in the 5-year plan, the judiciary estimated that it would need to pay GSA $87 million annually in rent, or $1.6 billion over the next 20 years, to occupy these courthouses if constructed. Table 1 describes our analysis of the judiciary’s data for the estimated cost of all phases and projected rent costs that total almost $3.2 billion. However, even though the $3.2-billion estimate provides a more complete presentation of the project costs, that estimate could change based on GSA’s redesign of projects because of changes in the judiciary’s needs. In addition, the $3.2-billion estimate does not include life-cycle costs, such as furniture and GSA disposal of existing facilities, which would also have to be included for the cost estimate to be comprehensive. GAO and OMB have established that estimates of life-cycle costs are necessary for accurate capital-planning. In addition, the 5-year plan does not show the amount of funding already provided for all of the projects. Since fiscal year 1995, Congress has appropriated about $177 million of the estimated $1.6 billion needed for 10 of these projects’ phases, mostly for site acquisitions and designs. None of the projects has begun construction, and only the Mobile project has received any construction funding (see fig. 4). We found that the 5-year plan does not align with the leading practice of considering the risks involved in acquiring new courthouses. Specifically, the plan does not inform stakeholders that 11 of the 12 projects require further design before construction can begin. According to GSA officials, the agency has not received funding for the design of two projects (Chattanooga and Des Moines). Of the remaining 10 projects that have design funding, 1 is in the design process and 9 are on hold. 
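The roughly $3.2-billion figure above is a simple aggregation of the reported cost components; a quick sketch using only the dollar figures stated in this section (the 20-year rent total is taken as reported by the judiciary, not recomputed from the annual figure):

```python
# Figures in billions of 2013 dollars, as reported in this section
all_phases_cost = 1.6       # site acquisition, building design, and construction
projected_rent_20yr = 1.6   # judiciary's estimate of 20 years of GSA rent

total = all_phases_cost + projected_rent_20yr    # ~3.2 billion (table 1)

# Appropriations to date: about $177 million of the $1.6 billion needed
appropriated = 0.177
remaining_for_phases = all_phases_cost - appropriated    # ~1.4 billion still needed
```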
According to GSA officials, some of the projects on hold must be re-designed to accommodate policy and other requirements relating to changes such as courtroom-sharing and energy management. For example, the design of the Savannah courthouse project was completed in 1998 and now needs extensive re-design to accommodate changes mandated by policy shifts, including improved security and a reduction in the number of courtrooms needed. GSA officials said that only the design of the Nashville project—though oversized by one floor—is likely to remain largely intact because it would be more cost-effective to rent the additional space to other tenants than to completely re-design the project. In February 2012, the judiciary submitted its 5-year plan to Congress and other decision makers. As a result, there is a risk that funding decisions could be made without complete and accurate information. Congress would benefit from having information based upon a long-term capital investment plan for several reasons. Specifically, transparency about future priorities could allow decision makers to weigh current-year budget decisions within the context of projects’ expected future costs. In the case of the judiciary, which has identified a number of future courthouse projects estimated to cost several billion dollars, full transparency regarding these future priorities may spur discussion and debate about actions Congress can take to address them. Additionally, transparency regarding future capital costs would put the judiciary’s priorities in context with federal spending. There is widespread agreement that the federal government faces formidable near- and long-term fiscal challenges. GAO has long stated that more transparent information and better incentives for budget decisions, involving both existing and proposed programs, could facilitate consideration of competing demands and help put U.S. finances on a more sustainable footing. 
Most Courthouse Projects Were Not Evaluated under AMP Process and Do Not Meet AMP Criterion for New Construction Judiciary Has Not Evaluated Most 5-Year Plan Projects under the AMP Process The judiciary has not applied the AMP process to 10 of the 12 construction projects on the current 5-year plan dated September 2012. These 10 projects were evaluated under the judiciary’s prior capital-planning process and approved based on their urgency levels as determined under that process. Judiciary officials said that they did not want to delay the projects or force them to undergo a second capital-planning process review because the judiciary had already approved the projects. Only 2 projects on the current 5-year plan (2014 to 2018) were assessed under the AMP process—Chattanooga, Tennessee, and Des Moines, Iowa. Judiciary officials said these projects were added to the 5-year plan in September 2010 because they had the highest priority rankings of the projects that had undergone an AMP review at that time. Judiciary officials explained that these projects also had GSA feasibility studies that recommended new construction. However, the Chattanooga and Des Moines projects have not retained their top rankings as the judiciary has continued to apply the AMP process to additional courthouses. Specifically, judiciary documents show that more than a dozen other projects not included on the 5-year plan now rank above the Chattanooga and Des Moines projects, six of which recommend new construction. For example, we visited the federal courthouse in Macon, Georgia, which now ranks higher than either the Chattanooga or Des Moines projects. The Macon courthouse suffers from numerous operational and security issues typical of historic courthouses, but it is not included on the 5-year plan. 
As we previously noted, the judiciary also applied the AMP process to 2 other projects that were included on an older 5-year plan (2012 to 2016)—San Jose and Greenbelt—and subsequently removed them after the projects received substantially lower priority rankings, as shown in appendix IV. The change in the rankings of the 4 projects calls into question the extent to which the projects remaining on the 5-year plan represent the judiciary's most urgent projects and whether proceeding with these projects while hundreds of AMP reviews remain to be done represents the most fiscally responsible path. We recognize that conducting AMP reviews of the 10 projects on the 5-year plan would involve additional costs; however, not conducting AMP reviews on these projects could involve spending billions of dollars over the next 20 years on courthouses that may not be the most urgent projects. While the AMP process only partially aligns with leading practices in capital-planning, it is a significant improvement over the capital-planning process the judiciary used to choose 10 of the 12 projects on the 5-year plan. Assessing the 10 projects with the AMP process could help ensure that projects on the 5-year plan do, in fact, represent the judiciary's most urgent projects.

Most Projects Do Not Qualify for a New Courthouse under the AMP Courtroom Criterion

We found that 5 of the projects on the list currently need additional courtrooms, and of those, only the Charlotte and Greenville projects would qualify under the AMP criterion because both need three additional courtrooms (see table 2). We did not assess whether the shortage of courtrooms alone is the most appropriate criterion for requesting new construction from GSA, but the establishment of a clear criterion adds an element of transparency that was lacking in the judiciary's previous capital-planning process. 
We visited two courthouses on the current 5-year plan that were selected as new construction projects under the prior capital-planning process—Savannah and Anniston, built in 1899 and 1906, respectively. These historic courthouses qualified for new construction under the previous process because of space needs and because of security and operational deficiencies stemming from their age, condition, and building configuration. According to judiciary and GSA officials, neither courthouse meets Design Guide standards for (1) the secure circulation of prisoners, the public, and courthouse staff and (2) the adjacency of courtrooms and judges' chambers. However, neither of these courthouses would qualify for new construction under the AMP criterion, as both have a sufficient number of existing courtrooms for all the judges. Specifically, the Savannah and Anniston courthouses each have enough courtrooms for all assigned judges to have exclusive access to their own courtroom. Savannah currently houses one district judge, one senior district judge, one magistrate judge, and one bankruptcy judge. Figure 5 shows two courtrooms in the Anniston courthouse, which currently houses one senior district judge and one bankruptcy judge. As discussed, the judiciary's courtroom-sharing policies for senior district, magistrate, and bankruptcy judges allow it to reduce the scope of its courthouse projects and contributed to the cancelation of other courthouse projects. The judiciary has also been studying a courtroom-sharing policy for district judges but has not yet finalized a policy and could not provide a date for when, or whether, it planned to do so. Our 2010 report, based on judiciary data on courtroom scheduling and use, showed that judges of all kinds, including district judges, could share courtrooms without delaying any scheduled events, and we recommended that the judiciary expand courtroom-sharing to more fully reflect the actual scheduling and use of district courtrooms. 
Specifically, judiciary data showed that three district judges could share two courtrooms or that a district judge and a senior district judge could share one courtroom. If district judges shared courtrooms in this way, the judiciary would have a sufficient number of courtrooms in all of the 12 proposed projects in the 5-year plan, based on the AMP criterion. In responding to our recommendation, the judiciary stated that our 2010 report oversimplified the complex task of courtroom-sharing by assuming that judicial proceedings were more certain and predictable than they are. We addressed the uncertainty of courtroom scheduling by (1) accounting for unused scheduled time as if the courtroom were actually used and (2) providing additional unscheduled time in courtrooms. Since potential courtroom-sharing among district judges could reduce both the need for additional courtroom space and the likelihood that projects meet the AMP criterion for qualifying for new courthouse construction, it is important for the judiciary to finalize its position and policy on courtroom-sharing, as we previously recommended.

Conclusion

With the development and implementation of the AMP process, the judiciary's capital-planning efforts partially align with several leading practices. The AMP process has the potential to provide a wealth of information on the judiciary's existing facilities and to assess and rank the need for new construction based on measurable criteria. However, the 5-year plan submitted for approval of several billion dollars' worth of projects—a one-page list of projects with limited and incomplete information—does not support the judiciary's request for courthouse construction projects. For example, the AMP process introduces a criterion for when new construction is warranted—when two or more additional courtrooms are needed—but the 5-year plan does not show how this criterion applies to the recommended projects. 
Furthermore, the 5-year plan has underestimated the total costs of these projects by about $2 billion because it does not include all project phases and because the judiciary no longer includes its rent costs on the 5-year plan. Additionally, construction has not begun on any of the 12 courthouse projects on the 5-year plan, and most need to be redesigned to meet current standards. Given the fiscal environment, the judiciary and the Congress would benefit from more detailed information about courthouse projects and their estimated costs than the judiciary currently provides. Such information would enable the judiciary and Congress to better evaluate the full range of real property priorities over the next few years and, should fiscal constraints so dictate, identify which should take precedence over the others. In short, greater transparency would allow for more informed decision making among competing priorities. Current fiscal challenges also require that the federal government focus on essential projects. While the judiciary has made significant strides in improving its capital-planning process, most of the 12 projects listed on the 5-year plan are products of its former process. It is possible that some of the 12 projects do not reflect the most urgent capital investment needs of the judiciary under its current criteria. Two projects on a previous 5-year plan that were assessed under the AMP process were removed from the list and now rank well down on the judiciary's list of priorities, but the judiciary has not applied the AMP process to 10 courthouses on the current 5-year plan dated September 2012. Furthermore, 10 of the 12 projects on the current 5-year plan do not need a sufficient number of additional courtrooms to qualify for new construction under the AMP courtroom criterion. In addition, there is no evidence that the judiciary considered how it could meet the need for courtrooms without new construction if district judges shared courtrooms. 
Although there would be some incremental costs involved with an additional 10 AMP reviews, those costs appear justified given the billions involved in moving forward with the construction of those 10 courthouses. Similar to the 2-year moratorium the judiciary placed on courthouse construction while it developed the AMP process, it is not too late to apply the AMP process to the 5-year plan projects and possibly save taxpayers from funding construction of projects that might not represent the judiciary's highest priorities under current criteria. It is critical that the judiciary accurately determine its most urgent projects because of the taxpayer cost and the years of work involved in designing and constructing new courthouses.

Recommendations

To further improve the judiciary's capital-planning process, enhance transparency of that process, and allow for more informed decision making related to the federal judiciary's real property priorities, we recommend that the Director of the Administrative Office of the U.S. Courts, on behalf of the Judicial Conference of the United States, take the following two actions:

1. Better align the AMP process with leading practices for capital-planning. This should include linking the AMP process to the judiciary's strategic plan and developing and sharing with decision makers a long-term capital investment plan. In the meantime, future 5-year plans should provide comprehensive information on new courthouse projects, including: a) a summary of why each project qualifies for new construction and is more urgent than other projects, including information about how the AMP process and other judiciary criteria for new courthouse construction were applied to the project; b) complete cost estimates of each project; and c) the alternatives to a new project that were considered, including courtroom-sharing, and why alternatives were deemed insufficient.

2. 
Impose a moratorium on projects on the current 5-year plan until AMP evaluations are completed for them, and then request feasibility studies for the courthouse projects with the highest urgency scores that qualify for new construction under the AMP process.

Agency Comments and Our Evaluation

We provided copies of a draft of this report to GSA and AOUSC for review and comment. GSA and AOUSC provided technical comments that we incorporated as appropriate. Additionally, AOUSC provided written comments in which it agreed with our recommendation to link the AMP process to the judiciary's strategic plan. However, AOUSC raised a number of concerns that the subpoints of our first recommendation on improving capital planning would duplicate other judiciary or GSA documents. Furthermore, AOUSC disagreed with our recommendation to place a moratorium on the projects in the 5-year plan until it could perform AMP evaluations of those projects because doing so would take years and not change the result. We continue to believe that our recommendation is sound because the projects included on the 5-year plan were evaluated under the judiciary's previous capital-planning process and evidence suggests that they may no longer represent the judiciary's highest priorities. Specifically, two projects on a previous 5-year plan that were assessed under the AMP process were removed from the list and now rank well down the judiciary's list of priorities. In addition, 10 of the 12 projects on the current 5-year plan do not qualify for new construction under the AMP process. In response to AOUSC's comments, we made some technical clarifications where noted, none of which materially affected our findings, conclusions, or recommendations. AOUSC's complete comments are contained in appendix V, along with our response to specific issues raised. 
In commenting on a draft of our report, AOUSC said it would take steps to address our first recommendation to link the AMP process to the judiciary's strategic plan, but it cited concerns about our presentation of information, the accuracy of our data, and the subpoints of the first recommendation. Specifically, AOUSC disputed our characterization of the judiciary's role in the capital-planning process for new courthouses and of the information provided to Congress to justify new courthouses. According to AOUSC, Congress receives extensive, detailed information on new courthouse projects from GSA, and our recommendation for the judiciary to provide more comprehensive information on courthouse projects in 5-year plans would duplicate GSA's work. AOUSC also disputed our presentation of the AMP process, stating that GAO did not consider all relevant documents in reaching our conclusions. AOUSC disagreed with our recommendation for a moratorium on all projects currently on the 5-year plan because completing AMP evaluations for those projects would unnecessarily delay the projects and exacerbate security and structural issues at the existing courthouses. In AOUSC's view, AMP evaluations for these courthouses would take years and not alter the justification for new construction projects. AOUSC further disputed the data we used to support our conclusions about the projects on the 5-year plan and our explanation of the data's source. AOUSC also questioned our characterization of the judiciary's actions in response to recommendations in a prior GAO report. We believe our findings, analysis, conclusions, and recommendations are well supported. GAO adheres to generally accepted government auditing standards, which ensure the accuracy and relevance of the facts within this report. 
These standards include a layered approach to fact validation that includes supervisory review of all work papers, independent verification of the facts within the report, and the judiciary's review of the facts prior to our release of the draft report for agency comment. To the extent that the judiciary is questioning any facts, the judiciary had multiple opportunities to provide supporting documentation to substantiate its view. We believe that our description of the roles and responsibilities of the judiciary and GSA in the capital-planning process for new courthouses is correct and appropriate. In reaching our conclusions about the information provided to Congress, we relied on documents we received from the judiciary and GSA. We continue to believe that by implementing our recommendation about providing additional information to Congress, the judiciary would improve the completeness and transparency of the information that Congress needs to justify and authorize funding of new courthouse projects. We will review AOUSC's steps, once finalized, to address our recommendation that the AMP process be linked to the judiciary's strategic plan. We continue to believe that any steps that AOUSC takes should be aligned with leading practices, including presentation of total project cost estimates and alternatives considered, such as greater courtroom-sharing in existing courthouses. With regard to our recommended moratorium on projects on the current 5-year plan, we note that the AMP process represents progress by the judiciary in better aligning its capital-planning process with leading practices. Consequently, we believe that it would be worthwhile to use this improved process to ensure that all courthouse construction proposals remain the judiciary's top priorities and qualify for new construction under the AMP process. 
The San Jose and Greenbelt projects were approved as among the highest priorities for new construction under the old process but, after being evaluated under the AMP process, now rank far lower on the judiciary's list of priorities—117th and 139th, respectively. We also noted that, regardless of whether a project is on the 5-year plan, GSA is responsible for ensuring that courthouses are adequately maintained. We relied on data provided by the judiciary and GSA to support our analysis of whether the projects on the 5-year plan would qualify under the AMP process, and we stand by our conclusions. We used the most current and complete data provided by the judiciary to evaluate the cost of these projects. We will review information provided by the judiciary and determine whether to close the recommendation from our 2010 report at the appropriate time. In response to AOUSC's comments, we clarified the report and added detail to our methodology in appendix I as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Director of the Administrative Office of the U.S. Courts, the Administrator of GSA, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions on this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to the report are listed in appendix VI. 
Appendix I: Objectives, Scope and Methodology

This report addresses the following objectives: (1) To what extent does the judiciary's capital-planning process align with leading practices and provide the information needed for informed decision making? (2) To what extent were the courthouse projects recommended for funding in fiscal years 2014 to 2018 assessed under the judiciary's current capital-planning process? To evaluate the judiciary's capital-planning process, we collected information on leading capital-planning practices from the Office of Management and Budget's (OMB) Capital Programming Guide and GAO's Executive Guide and compared this information with the AMP process contained in the judiciary's Long Range Facility Plans, Facility Benefit Assessments, Citywide Benefit Assessments, Urgency Evaluations, 5-year plans, and Strategic Plan. We did not review the appropriateness of the criteria used by the judiciary in its AMP process. We reviewed documentation on the status of courthouse construction projects and information about other federal buildings occupied by the judiciary. We reviewed GSA data on actual costs of construction and tenant improvements at two courthouse projects (Las Cruces, NM, and Ft. Pierce, FL), one completed in 2010 and one completed in 2011, and GSA and judiciary estimated costs of construction for the courthouse projects on the most recent 5-year plan, covering fiscal years 2014 to 2018. To determine if life-cycle cost estimates were provided in the 5-year plan, we assessed the judiciary data against GAO's Cost Estimating and Assessment Guide. To determine the current dollar value of the judiciary's estimate of courthouse projects' rents, we calculated the present value of the estimated project costs based upon averages of monthly indexes from the U.S. Department of Labor, Bureau of Labor Statistics, and discounted rent based upon the 20-year OMB-published discount rate for such analyses. 
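The discounting step described above can be sketched in code. The following is a minimal illustration only, not GAO's actual calculation: the $3 million annual rent, the 2 percent rate, and the 20-year horizon are invented figures, and the adjustment based on Bureau of Labor Statistics index averages is omitted.

```python
def present_value(annual_payment: float, discount_rate: float, years: int) -> float:
    """Discount a stream of equal annual payments back to today's dollars."""
    return sum(annual_payment / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Invented example figures: $3 million per year in rent over a 20-year
# horizon, discounted at a hypothetical 2 percent annual rate.
pv = present_value(3_000_000, 0.02, 20)
print(f"Present value of rent stream: ${pv:,.0f}")
```

The result can be cross-checked against the closed-form annuity formula, P(1 - (1 + r)^-n)/r, which gives the same total for a level payment stream.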
In addition, we interviewed judiciary officials on the AMP process and its alignment with leading capital-planning practices. To analyze the judiciary's capital-planning process, we reviewed our previous reports on capital planning across the federal government, including the efforts by the judiciary and the Department of Veterans Affairs to communicate urgent housing needs to Congress. To assess recent courthouse projects recommended for funding under the judiciary's current capital-planning process, we reviewed the judiciary's documents detailing the projects recommended for funding for fiscal years 2009 through 2018, called 5-year plans, and other documents on: congressional authorizations and funding appropriations for courthouse projects; judiciary information on courts and courthouses; and GSA information on federal buildings, existing and planned federal courthouses, courthouse design, and federal historic property. We interviewed judiciary and GSA officials in Washington, D.C., and at federal courthouses we selected in Anniston, AL; Macon, GA; and Savannah, GA. To observe existing courthouses, we selected Anniston and Savannah because they were evaluated under the judiciary's old capital-planning process and are on the most recent 5-year plan, covering fiscal years 2014 to 2018. We selected Macon because it was highly ranked under the judiciary's new capital-planning process and is in close proximity to Anniston and Savannah. While our observations cannot be generalized to all federal courthouses, they provide useful insights into physical conditions at old historic courthouses. We reviewed documentation provided by the judiciary on strategic planning, capital-planning, existing courthouse evaluations, the rating and ranking of existing courthouse deficiencies, existing and future judgeships, and courtroom-sharing by judges. 
To determine the extent to which courthouse projects on the 5-year plan reflect future judges needed and courtroom-sharing, we compared the judiciary's planned occupancy information to the judiciary's own guidance, our previous work on the judiciary's courtroom-sharing, and a recently proposed bill from the 112th Congress that would have required GSA to design courthouses with more courtroom-sharing. We determined the number of courtrooms in the existing courthouses and compared them to the number of courtrooms needed in the new courthouses using the judiciary's courtroom-sharing policy. We also applied the judiciary's courtroom-sharing policy for new courthouses to existing courthouses. We reviewed documentation provided by GSA on the status of courthouse construction; the status of courthouse projects on the two most recent 5-year plans; and federal buildings and courthouses occupied by the judiciary. We reviewed the judiciary's and GSA's data for completeness and determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from March 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Judiciary's Courtroom-Sharing Policy for New Construction

Senior District Judges

Bankruptcy Judges

In court facilities with one or two bankruptcy judges, one courtroom will be provided for each bankruptcy judge. In court facilities with three or more bankruptcy judges, one courtroom will be provided for every two bankruptcy judges, rounding down when there is an odd number of judges. 
In addition, one courtroom will be provided for emergency matters, such as Chapter 11 first-day hearings.

Appendix III: Judiciary's Asset-Management Planning Process Urgency-Evaluation Matrix for New Construction Projects

Categories (weight) and descriptions:

- Current additional courtrooms needed (15%): Courtrooms needed today. Data separated by judge type and weights assigned (district judges 100%, senior district judges 75%, magistrate judges 50%, or bankruptcy judges 50%). Courtroom-sharing per Judicial Conference policy.

- Future additional courtrooms needed (5%): Courtrooms needed within 15 years. Data separated by judge type and weights assigned (district judges 100%, senior district judges 75%, magistrate judges 50%, or bankruptcy judges 50%). Courtroom-sharing per Judicial Conference policy.

- Current additional chambers needed (22.5%): Chambers needed today. Data separated by judge type and weights assigned (district judges 100%, senior district judges 75%, magistrate judges 50%, or bankruptcy judges 50%). Courtroom-sharing per Judicial Conference policy.

- Future additional chambers needed (7.5%): Chambers needed within 15 years. Data separated by judge type and weights assigned (district judges 100%, senior district judges 75%, magistrate judges 50%, or bankruptcy judges 50%). Courtroom-sharing per Judicial Conference policy.

- Citywide benefit assessment result (40%): In cities where courtrooms and chambers are located in multiple facilities, a citywide benefit assessment is produced. This incorporates the individual Facility Benefit Assessment for each facility; the type and mix of facility ownership; and the fragmentation of court operations on a citywide basis. In cities with a single courthouse, the Facility Benefit Assessment is the same as the citywide assessment and covers 328 items in four main categories: building conditions (30%); space functionality (30%); security (25%); and space standards (15%). 
- Civil filings historic (3%): Average annual change in the number of civil filings (1997-2011).

- Civil filings projected (1%): Projected average annual change in the number of civil filings (2012-2026).

- Criminal defendants historic (4.5%): Average annual change in the number of criminal defendants (1997-2011).

- Criminal defendants projected (1.5%): Projected average annual change in the number of criminal defendants (2012-2026).

Appendix IV: Judiciary's New Courthouse Projects for Fiscal Years 2012 to 2016 and Fiscal Years 2014 to 2018

[Table of projects and their scores under the new asset-management planning process; the table itself is not reproduced here.] The higher the "score," the greater the space need urgency. More than one building assessed.

Appendix V: Comments from the Federal Judiciary

GAO Comments

1. AOUSC stated that we failed to understand the purpose of the 5-year plan, indicating that it is not a long-term capital investment plan. The draft report that we provided to AOUSC for comment indicates that the 5-year plan is not a long-term capital investment plan. However, the 5-year plan represents the only document that communicates the judiciary's recommendations related to new courthouse projects to Congress and other stakeholders. Since it is important for stakeholders to understand the context for new courthouse projects, we continue to believe that the judiciary should improve the completeness and transparency of the information the judiciary uses to justify these projects.

2. AOUSC stated that funding for the projects totaled $188.29 million but did not provide any supporting information for this amount. We used General Services Administration (GSA) data to determine the amount of funding appropriated for the projects on the 5-year plan, which we state to be $177 million in our report.

3. AOUSC stated that GSA already provides sufficient information to Congress on the judiciary's behalf for courthouse projects. 
While GSA provides information to congressional committees when seeking authorization for new courthouse projects, by that time the judiciary has already recommended the projects for new construction. The 5-year plan represents the only document that communicates the judiciary's recommendations for new construction, and it is incomplete and lacks transparency. For example, the 5-year plan underestimates the total costs of these projects by about $2 billion because it does not include all project phases and because the judiciary no longer includes its rent costs on the 5-year plan.

4. AOUSC was critical of our conclusion that the AMP process does not link to the judiciary's strategic plan. According to AOUSC, the template for future Long-Range Facilities Plans will clearly illustrate how the AMP process supports and links to the judiciary's strategic plan. We continue to welcome improvements to the judiciary's approach to strategic planning for courthouse construction. We will assess these changes when they are implemented, as part of our recommendation follow-up process.

5. AOUSC noted that, with respect to our recommendation, imposing a moratorium and reviewing the projects on the 5-year plan under the AMP process would create a delay of up to 6 years and that 10 of the 12 projects have been on the 5-year plan since 1999 or earlier. AOUSC states in its response that the previous capital-planning process was "stringent" and, as a result, should be respected for its policy and budgetary implications. We have previously found deficiencies in the judiciary's previous capital-planning process, including that the judiciary tends to overstate the number of judges that will be located in a courthouse after 10 years. Our draft report noted that the AMP process represents progress by the judiciary in better aligning its capital-planning process with leading practices. 
When the judiciary applied the AMP process to two projects on a previous 5-year plan—San Jose, California, and Greenbelt, Maryland—neither project ranked among the judiciary's revised priorities for new construction; indeed, they ranked 117th and 139th, respectively. In addition, only two projects in the current 5-year plan qualify for new construction under the judiciary's AMP process. Shifting courthouse priorities demonstrate a process that is not yet finalized. Given the federal government's current budgetary condition, the judiciary should assure the Congress through its planning process that the courthouses prioritized for construction funding truly represent its most urgent needs. Otherwise, the government stands to potentially spend billions of dollars on courthouse construction that does not meet the judiciary's most urgent needs. Assessing all courthouses under the AMP process, given the problems of the previous process, would help assure the judiciary and the Congress that the highest-priority courthouses are selected and that the government is effectively spending construction funds.

6. AOUSC stated that the declining conditions in existing courthouses on the 5-year plan place judges, staff, and the public in harm's way. Our work over a number of years has shown that many federal buildings face deteriorated conditions, a reason that federal property was included on GAO's High Risk List. The courts are not alone in this regard. Our draft report noted that GSA is responsible for ensuring that courthouses are adequately maintained. As a result, GSA addresses building maintenance issues regardless of the status of the courthouse construction program. In addition, we note that the criteria the judiciary uses to select new courthouse construction projects are its own. The AMP process established that space shortages, not facility condition, are the only criteria for requesting new courthouse construction.

7. 
AOUSC noted security concerns at the existing courthouses we visited, which we did not independently evaluate. For additional context, we added to the report references to the judiciary's approach of improving security within existing courthouses rather than replacing them with new courthouses. The judiciary's AMP process criteria are consistent with this approach, as facility security deficiencies under the AMP process are no longer a justification for new courthouse construction.

8. AOUSC attached a letter from Chief Judge Lisa Godbey Wood of the Southern District of Georgia, which we have printed in this report on pages 47 to 54. We address Judge Wood's comments separately (see comments 21-27).

9. AOUSC stated that table 2 included incorrect information and provided revisions to the table. We stand by the information provided in our report, which was provided by GSA and the judiciary and was reviewed consistent with our internal controls under generally accepted government auditing standards. AOUSC's most recent numbers relate only to one courthouse in each city, but our numbers represent all the judiciary's courtrooms in each city, for which we used judiciary and GSA data. We revised our final report to clarify that the numbers of courtrooms in table 2 are for cities, some of which have more than one existing courthouse. For example, in Chattanooga, Tennessee, AOUSC revised our number of courtrooms from six to four, possibly because there are only four courtrooms in the Joel W. Solomon Federal Building and United States Courthouse, from which the judiciary is seeking to relocate district and magistrate judges' chambers and courtrooms. However, there are six courtrooms in Chattanooga because the bankruptcy judges' chambers and courtrooms are located in a leased former post office/customs house.

10. 
AOUSC stated that the criterion of needing two or more courtrooms in order to recommend constructing a new courthouse pertains to the housing strategy recommendations contained in a district’s Long Range Facilities Plan, and that the next step is the completion of a GSA feasibility study. However, AOUSC is describing the new AMP process. The fact remains that most projects on the current 5-year plan were selected based on their evaluation under the judiciary’s previous capital-planning process, which did not include the courtroom shortage criterion. As a result, those courthouses slated for new construction under the old process and those selected under the new process are not comparable and do not represent the judiciary’s highest priorities. 11. AOUSC noted that when projects on the 5-year plan have a shortfall of one courtroom as opposed to two, the GSA feasibility study concluded that new courthouse construction was recommended. Our draft report observed that although a project may not qualify for new courthouse construction under the AMP process, GSA may determine through a feasibility study that new construction is the most cost-efficient, viable solution despite the fact that the courthouse in question did not rise to the top in the selection process. 12. According to AOUSC, two projects were removed from the 5-year plan because their space needs had changed, not because their rankings dropped. Our draft report correctly stated that reduced space needs contributed to the removal of these projects from the 5-year plan. 13. AOUSC questioned whether we reviewed any of the Long Range Facility Plans produced as part of the AMP process and the previous capital-planning process. We reviewed these judiciary documents and have revised the description of our methodology discussed in appendix I to include the names of the documents related to the judiciary’s capital-planning process that we reviewed while developing this report. 
Specifically, we added the Long Range Facility Plans, Facility Benefit Assessments, Citywide Benefit Assessments, Urgency Evaluations, and the 5-year plan. 14. AOUSC stated that our assessment that the AMP process partially aligns with the leading capital practice related to “needs assessment and gap identification” was a gross error. According to AOUSC, it is not the judiciary’s role to generate cost estimates, and it believes that our “partially aligns” assessment is too low. While GSA is responsible for estimating the costs of courthouse projects, we continue to believe that the judiciary’s capital-planning process partially—not fully—aligns with this leading practice. GAO and Office of Management and Budget (OMB) guidance have established that estimates of life-cycle costs are necessary for accurate capital planning. The judiciary’s 5-year plan lists GSA estimated costs, but they are incomplete. Specifically, the cost estimates do not include all project phases—site acquisition, building design, and construction. In addition, the judiciary no longer includes the estimated cost of rent in its 5-year plan even though estimates for all project phases and rent are available. We believe this omission denies stakeholders and congressional decision makers complete information on judiciary construction-program costs. In addition, our draft report notes that these estimates are not life-cycle costs, which would also have to be included for the cost estimate to be comprehensive. 15. AOUSC disagreed with our assessment that the AMP process partially aligns with the leading capital practice related to “alternatives evaluation” because the judiciary does evaluate options with an emphasis on the least costly option. AOUSC also indicated that we did not consider Long Range Facility Plans in making this determination. We did consider Long Range Facility Plans and continue to believe that the judiciary’s capital-planning process partially aligns with this leading practice. 
GAO and OMB guidance established that leading organizations carefully consider a wide range of alternatives. Our draft report noted that the AMP process evaluates some alternatives, such as renovating existing courthouses to meet needs, but the judiciary provided no evidence that it considered other viable options, such as courtroom sharing in existing courthouses, even though courtroom sharing is required in new courthouses. 16. AOUSC disagreed with our assessment that the AMP process partially aligns with the leading capital practice related to establishing a “review and approval framework with established criteria for selecting capital investments” because our draft report indicated that the judiciary has established such a framework. We continue to believe that the judiciary’s capital-planning process partially aligns with this leading practice because, while we were able to discern that there are review and approval criteria in the AMP process, we found no evidence that the judiciary’s current 5-year plan applies those criteria. Specifically, the judiciary established the criterion that courthouses need to have a shortage of two or more courtrooms to qualify for a new courthouse construction project. However, 10 of the 12 projects recommended for new construction on the 5-year plan do not qualify under this criterion. 17. AOUSC stated that we used incorrect and inflated estimates for project costs. We sought to provide total project cost estimates for each project on the 5-year plan. Our draft report uses estimates that the judiciary provided for total project costs and rent, which we adjusted for inflation to the current fiscal year. In response to our statement of facts, AOUSC provided a revised table (reprinted on p. 51 of this report). However, the data in the table that AOUSC provided were incomplete and lacked supporting documentation. 
Consequently, we continue to use the most current, complete estimates of the total project costs and rent available. 18. AOUSC stated that the estimates of total project costs were provided to it by GSA. We added GSA to the source line of table 1. 19. AOUSC stated that the judiciary has implemented changes to address recommendations from our 2010 report (GAO-10-417). GAO has a process for following up on and closing previous recommendations. We have not yet assessed the extent to which the judiciary’s actions have fulfilled the recommendations from our 2010 report. We will, however, consider this and all other information from the judiciary when we determine whether to close the recommendations from our 2010 report. We plan to examine this recommendation in the summer of 2013. 20. According to AOUSC, the projects on the 5-year plan are fully justified under its previous “stringent” process that preceded the AMP process. However, as we have previously noted, the former process had shortcomings and, in our opinion, does not represent a process that the Congress should rely upon for making capital budget decisions. The new AMP process will, when complete, likely provide Congress with greater assurance that the judiciary’s construction priorities represent the highest-priority needs. We addressed the difference in funding for the projects on the current 5-year plan in comment 2. 21. Judge Wood stated that the number of judges in Savannah may change. For each project, we used data provided by AOUSC. However, in our 2010 report (GAO-10-417), we found that the judiciary often overestimated the number of future judges it would have in planning for new courthouses. 22. According to Judge Wood, it is inappropriate to subject the Savannah courthouse to the AMP process when over $6 million has already been spent on design services. We found, and AOUSC agreed in its comments on our draft report, that the Savannah courthouse has four courtrooms and four judges. 
Consequently, it does not qualify for new construction under the AMP criterion. In addition, according to GSA, the original courthouse design from 1998, to which Judge Wood refers, is outdated. As a result, if the project moves forward, the government would need to spend additional money to design a new courthouse for Savannah. 23. Judge Wood noted the poor condition of the existing Savannah courthouse and the need for a repair and alterations project to address deferred maintenance issues. We toured this courthouse and noted many of the same deficiencies. Our draft report noted that regardless of whether a project is on the 5-year plan, GSA is responsible for ensuring that courthouses are adequately maintained. In addition, as the current plan for the Savannah project is to continue to use the existing courthouse and build an annex, deferred maintenance in the existing courthouse would still need to be addressed if the plan moved forward. 24. Judge Wood noted that the existing Savannah Courthouse was built in 1899 and has several deficiencies relative to Design Guide standards. Our draft report noted that some existing courtrooms may not meet Design Guide standards for size. However, as we also note, according to AMP guidance, a disparity between space in an existing facility and the Design Guide standards is not justification for facility alteration and expansion. 25. Judge Wood noted several security concerns in the existing Savannah Courthouse. See comment 7. 26. Judge Wood noted that the Savannah Courthouse project preceded the AMP process and that the courthouse needs an additional courtroom and judge’s chambers. We address the judiciary’s previous capital-planning process and the judge and courtroom counts in Savannah in comments 20 and 21, respectively. 27. Judge Wood attached photos documenting some of the building condition problems at the Savannah Courthouse; those are reprinted on pages 49 and 50. See comment 23. 28. 
AOUSC provided changes to the courtroom numbers in table 2 from our draft report. As we explained in comment 9, we changed the table to make clear that the courtroom count refers to the number of courtrooms citywide, not just in one courthouse.

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Keith Cunningham, Assistant Director; George Depaoli; Colin Fallon; Geoffrey Hamilton; James Leonard; Faye Morrison; and Sara Ann Moessbauer made key contributions to this report.
Rising costs and fiscal challenges have slowed the multibillion-dollar courthouse construction program of the judiciary and the General Services Administration (GSA). In 2006, the judiciary developed AMP to address increasing costs and incorporate best practices and has evaluated about 67 percent of its courthouses under the new system. As requested, GAO assessed changes introduced with AMP. GAO examined: (1) the extent to which the AMP process aligns with leading practices and provides information needed for informed decision making and (2) the extent to which courthouse projects recommended for funding in fiscal years 2014 to 2018 were assessed under the AMP process. GAO compared the judiciary's capital-planning practices with leading practices, analyzed courthouse-planning documents, and interviewed officials from the judiciary and GSA. GAO visited three courthouses selected because they were highly ranked by the judiciary for replacement, although observations from these site visits cannot be generalized. The Asset Management Planning (AMP) process represents progress by the federal judiciary (judiciary) in better aligning its capital-planning process with leading capital-planning practices, but its 5-year plan for fiscal years 2014 to 2018--the document the judiciary uses to request courthouse construction projects--lacks transparency and key information on how projects qualify for new construction, alternatives the judiciary considered, and their cost. For example, the plan lists costs for the next phase of the 12 recommended courthouse projects, which have several phases, but does not list previous funding or ongoing annual costs for the projects. As a result, the plan lists about $1 billion in costs for the 12 projects, but the projects would actually cost the federal government an estimated $3.2 billion over the next 20 years. 
Congress has appropriated a small share of the money needed for the projects, and most will need design changes before construction can begin. As a result, there is a risk that congressional funding decisions could be made without complete and accurate information. However, with this information, decision makers could weigh current-year budget decisions within the context of projects' expected future costs, spur discussion and debate about actions to address them, and put the judiciary's requests in the context of other federal spending. Ten of the 12 recommended projects were not evaluated under the AMP process. Judiciary officials said that they did not want to delay the current projects or force them to undergo a second capital-planning process after they had already been approved. Two courthouse projects from a previous 5-year plan that were assessed under AMP were removed from the list and are now ranked behind more than 100 other courthouse construction projects. Furthermore, 10 of the 12 recommended construction projects do not qualify for a new courthouse under the AMP criterion, which requires a shortage of two or more courtrooms to qualify for new construction. These conditions call into question the extent to which the projects remaining on the 5-year plan represent the judiciary's most urgent projects and whether proceeding with these projects represents the most fiscally responsible proposal. While 10 additional AMP evaluations would involve some additional costs, not conducting those evaluations could mean spending $3.2 billion over the next 20 years on courthouses that may not be the most urgent projects.
Background Medicaid is the third largest social program in the federal budget and one of the largest components of state budgets. States and CMS share responsibility for instituting financial practices for the Medicaid program that are in compliance with applicable rules, laws, and regulations. In general, the federal government matches state Medicaid spending for medical assistance according to a formula based on each state’s per capita income. The federal contribution ranged from 50 to 77 cents of every state dollar spent on medical assistance in fiscal year 2004. For most state Medicaid administrative costs, the federal match rate is 50 percent. For skilled professional medical personnel, 75 percent federal matching is available. States are responsible for providing the state share of Medicaid funding and submitting plans, budgets, and expenditure reports to CMS that accurately report on the administration of their Medicaid programs and how they expend Medicaid funds. CMS is responsible for reviewing the states’ plans, budgets, expenditures, and operations to ensure compliance with all applicable laws and regulations. Each state develops its own administrative structure and establishes its own eligibility standards, scope of covered services, and payment rates in accordance with Medicaid statute and within broad federal guidelines. States are required to describe the nature and scope of their programs in a comprehensive plan submitted to CMS, with federal funding depending on CMS’s approval of the plan. State Medicaid plans specify the services to be provided and how the state will establish the amount it will pay for those covered services. Amendments to states’ plans are also subject to approval by CMS. Table 1 shows the amount of state and federal expenditures for Medicaid for fiscal years 2003 and 2004, the most recent years for which data are available. 
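The matching arithmetic described above can be sketched in a few lines of code. This is a hypothetical illustration, not a CMS formula or tool; the only figures taken from the report are the 50 percent match for most administrative costs and the 75 percent match for skilled professional medical personnel.

```python
# Hypothetical sketch of the federal/state split of Medicaid spending at a
# given federal match rate (illustrative only; not an actual CMS tool).
def split_spending(total: float, federal_rate: float) -> tuple[float, float]:
    """Return (federal_share, state_share) of total spending."""
    if not 0.0 <= federal_rate <= 1.0:
        raise ValueError("federal_rate must be a fraction between 0 and 1")
    federal = total * federal_rate
    return federal, total - federal

# Most state administrative costs: 50 percent federal match.
print(split_spending(10_000_000, 0.50))   # (5000000.0, 5000000.0)

# Skilled professional medical personnel: 75 percent federal match.
print(split_spending(10_000_000, 0.75))   # (7500000.0, 2500000.0)
```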
CMS’s Center for Medicaid and State Operations (CMSO) shares Medicaid program administration and financial management responsibilities with the 10 CMS regional offices. Two divisions in CMSO’s Finance, Systems, and Budget Group—the Division of Financial Management (DFM) and DRSF—have primary responsibility for Medicaid financial management. Figure 1 outlines CMS’s organizational structure related to Medicaid. DFM’s mission includes effectively administering the Medicaid program budget and grants, financial management policy, and administrative cost policy processes. Among other things, DFM staff in the central office are responsible for (1) determining and issuing state grant awards based on regional decision reports resulting from reviews of budget and expenditure reports, (2) reconciling state expenditure and budget reports, (3) reviewing and approving draft focused financial review reports, and (4) preparing annual financial management workplans based on input from regional offices. DRSF’s responsibilities include, but are not limited to (1) reviewing state plan amendments that involve reimbursement, (2) providing training to and coordinating the work of the funding specialists, (3) providing technical assistance to states on institutional and noninstitutional reimbursement, and (4) identifying and addressing state financing practices that could inappropriately increase federal Medicaid costs. 
CMS has approximately 65 regional financial analysts who are responsible for performing activities such as (1) reviewing state quarterly budget estimates and expenditure reports, (2) preparing decision reports that document approvals for federal reimbursement or deferrals or disallowances of claims for federal reimbursement, (3) assisting in assessing issues that put federal Medicaid dollars at risk and determining which issues to review in a fiscal year, (4) performing focused financial reviews, (5) providing technical assistance to the states on financial matters, and (6) serving as liaison to the states and audit entities. CMS has about 90 funding specialists who are responsible for, among other things, (1) gaining an understanding of their assigned state’s organizational structure, program structure, and budget process related to the state’s Medicaid program; (2) assisting in reviews of state plan amendments; (3) conducting reviews of state financing practices; and (4) providing technical assistance to the states. States submit quarterly budget and expenditure reports to CMS. The financial analysts in the 10 regional offices have traditionally reviewed these reports and prepared a Regional Office Decision memorandum which they submit to DFM in the central office. In some regions, the new funding specialists now have responsibility for reviews of state budget reports. Also, in some cases, the funding specialists assist financial analysts with reviews of state expenditure reports. Steps Taken to Improve Oversight Activities but Some Previously Identified Weaknesses Remain CMS has undertaken several steps to improve its Medicaid financial management activities including its efforts to oversee state claims for federal reimbursement and to identify payment errors. CMS hired about 90 funding specialists who are examining high-risk state funding practices and working with states to eliminate those practices that inappropriately increase federal costs. 
CMS also created a new unit, DRSF, which reviews state plan amendments for reimbursement to identify and work with states to eliminate payment methodologies that could result in higher federal costs. CMS has continued to use focused financial reviews and OIG audits to identify inappropriate state claims for federal reimbursement and recommend changes to states’ internal control practices. In addition, CMS recently established a new performance goal for its Medicaid financial management staff to reduce cumulative questionable federal reimbursement by 10 percent in fiscal year 2006. These and other recent efforts represent improvement in CMS’s oversight activities and address weaknesses and recommendations we identified in our 2002 report. However, it is too soon to assess the impact they will have on improving overall financial management and addressing emerging issues that put federal Medicaid dollars at risk because some have just recently been initiated, and results are not known yet. Further, there are other previously identified weaknesses that the agency has not addressed. CMS has not instituted mechanisms to measure how the risk of inappropriate federal reimbursement has changed as a result of corrective actions taken. CMS also has not incorporated the use of the MSIS in its oversight of state claims or other systems projects intended to help improve its analysis capabilities. Further, CMS has not developed profiles to document information on state fraud and abuse controls to use in its oversight of state claims. Finally, CMS has not developed a strategic plan to guide its financial management activities. Because these issues are important to further improving and sustaining CMS’s oversight activities, we reiterate our prior recommendations in these areas. Additional Staff and Creation of New Division Have Improved Oversight Activities In late 2004, CMS began hiring for 100 new funding specialist positions. 
These new staff have enabled CMS to perform more in-depth reviews of high-risk issues. The funding specialists’ positions were established to help CMS gain a better understanding of how states budget for and finance their portion of Medicaid expenditures and help CMS proactively identify state payment and funding practices that could result in inappropriate claims for federal reimbursement or increased federal costs. These new funding specialists augment the approximately 65 financial analysts in 10 regional offices who had previously performed many of the state financial oversight activities; among other duties, the funding specialists assist the financial analysts with reviews of state budget and expenditure reports. In addition, the funding specialists have performed activities that enable CMS to collect and summarize more information on states’ Medicaid programs, helping the agency target its oversight efforts to high-risk issues such as certain payment arrangements that have been problematic in the past. A major activity of the funding specialists during their first year was the completion of state funding profiles. These profiles document each state Medicaid program’s organizational structure, programmatic structure, and budget process. For many years, states only needed to provide general information on their payment methodologies, so these newly created profiles provide more detail to help CMS in its review and oversight of states’ financial issues. For example, the profiles describe the sources of each state’s nonfederal share of Medicaid funds and state payment methodologies, and they include a “watch list” section where the funding specialists can highlight significant funding-related concerns that may need to be addressed in the future. For instance, one state profile identifies a concern about the state’s lack of oversight of the certified public expenditures certification process for hospitals. 
This type of information can be helpful in ensuring proper review of future state plan amendments, among other things. CMS officials told us the state funding profiles have been made available to all CMS staff through CMS’s intranet and said the profiles will be updated annually to account for changes in state programs, thus allowing CMS to have current information. In addition to completing state funding profiles and reviewing state budgets and expenditures, the funding specialists carry out other oversight activities, including the following: meeting with state Medicaid officials and monitoring state legislative activity (including hearings, budget sessions, and committee meetings related to states’ Medicaid programs and proposed bills) to proactively identify issues that need CMS attention; reviewing state payment arrangements that CMS previously deemed problematic and that the states agreed to end, to determine whether the arrangements have in fact ended; assisting in the resolution of OIG audit findings; providing technical assistance to the states concerning funding; and attending training and workshops to learn about and stay abreast of CMS policy and operations. Directing the activities of the new funding specialists is one of the efforts of the central office’s DRSF, which was created in early 2005. CMS established DRSF to consolidate responsibility for all state Medicaid payment policy and funding issues. A role of DRSF is to ensure that state plan amendments for reimbursement of noninstitutional and institutional services are consistently reviewed and that CMS policy is consistently applied across the nation. The activities of DRSF have improved CMS’s ability to effectively deploy its resources to carry out more targeted oversight activities. 
DRSF’s National Institutional Reimbursement Team and the Non-Institutional Payment Team are part of CMS’s effort to collect information on states’ funding methodologies before approving state plan amendments, including high-risk payment methodologies that have been troublesome in the past. DRSF reviews all institutional reimbursement state plan amendments before they are approved by the Director of CMSO, thus eliminating the decentralized approval process that had been in place at all 10 regional offices. This has helped to clarify the lines of authority and responsibility for the state plan amendment process—states still submit amendments to their respective region for review but they are approved by CMS’s central office. DRSF also helped clarify responsibilities between central and regional office staff by using the 10 central office funding specialists as liaisons to each of the 10 regional offices. The DRSF funding specialists help to ensure that regional funding specialists are informed and kept up to date on funding policies and matters. The funding specialists also help in conducting a series of monthly calls that DRSF has instituted between the regions and central office financial management staff to improve communication and coordination. These calls help to ensure that all staff stay informed and up to date on matters that impact state claiming and the approval of state plan amendments. These activities, which we consider significantly underway, help improve CMS’s ability to better target its oversight activities and specifically address the recommendation in our 2002 report to increase in-depth oversight of areas of higher risk as identified from the risk assessment efforts and apply fewer resources to lower risk areas. See appendix II for a complete listing of our prior recommendations and our assessment of whether or not each has been fully addressed by CMS’s actions to improve its oversight activities. 
These activities also help address our overarching concerns that CMS’s organizational structure created challenges to effective oversight because of unclear lines of authority and responsibility between the regions and the central office. Focused Financial Reviews and OIG Audits Continued to Identify Problems and Needed Corrective Actions In 2001, CMS began a risk analysis process to identify Medicaid issues that put federal dollars at risk and address those issues by conducting focused financial reviews or referring the issues to the OIG for its review. Since then, at the beginning of each fiscal year, central office and regional office financial management staff work together to identify risks and plan focused financial reviews of the issues identified. CMS’s financial management staff consider factors such as the amount of dollars involved, the involvement of consultants, and the time elapsed since the last audit to identify risk areas. CMS’s analyses provide insight into what some of the continuing problematic Medicaid issues and potential emerging issues are. Table 2 shows which areas have consistently been identified as needing in-depth review in fiscal years 2003 through 2006. The focused financial reviews of the issues identified from the risk analyses have helped CMS identify billions of dollars in unallowable costs outside of those detected through the review of quarterly expenditure reports, as well as deficiencies in states’ financial management practices. In fiscal years 2003 and 2004, focused financial reviews resulted in CMS questioning or disallowing about $1.3 billion and about $1 billion, respectively, of state claims for federal reimbursement, according to CMS. The value of these reviews lies not just in identifying disallowances but also in providing feedback on policy issues and programmatic vulnerabilities, and in elevating the attention of both state and federal staff. 
CMS conducted about 57 focused financial reviews each year from fiscal years 2003 through 2005. Starting in fiscal year 2006, the number of planned focused financial reviews almost doubled from fiscal year 2005 due to the inclusion of planned reviews to be done by the funding specialists. We reviewed 35 of the 113 focused financial reviews performed by regional office financial analysts in fiscal years 2003 and 2004 to assess (1) the consistency with which the reviews were performed and reported on and (2) the extent to which states took actions to address the issues identified by CMS. We concluded that the 35 review reports were generally consistent across the regions. CMS also provided information to support that states are taking the recommended actions to address the issues identified. CMS issued reports to the states that contained recommendations requesting the states to (1) return federal reimbursement that CMS determined was not allowable (disallowances), (2) provide additional documents for CMS to determine the allowability of questionable claims (deferrals), or (3) improve certain state controls or processes. CMS gets additional coverage of risk areas from the reviews conducted by HHS’s OIG. During fiscal years 2003 through 2005, CMS contracted with OIG using funds from the HCFAC account to conduct 20 or more audits each year of issues identified from the risk assessment process. We reviewed interagency agreements between CMS and OIG for fiscal years 2003 through 2005 that provided over $3 million of HCFAC funds each year for OIG to do 20 or more audits each year relating to Medicaid issues. The interagency agreements supplemented OIG’s overall efforts to monitor Medicaid. Table 3 shows the issues that OIG agreed to audit in selected states pursuant to the interagency agreements for fiscal years 2003 through 2005. 
We reviewed 21 audits done by OIG in fiscal year 2004 pursuant to the interagency agreement to assess (1) the extent of the additional coverage given to issues identified by CMS as high risk and (2) the extent to which states took actions to address the issues identified by OIG. OIG identified about $13.6 million that it believed was inappropriate federal reimbursement to the states in 15 of the 21 audits. States returned about $4.5 million of disallowed claims identified in 10 of the 15 audits; CMS was still pursuing the remaining $9.1 million as of the end of our field work. OIG also made numerous other recommendations to states to improve their internal controls, such as implementing controls to identify and prevent duplicate payments and completing reconciliation procedures for overpayments in a timely manner. Goal for Reducing Questionable Federal Reimbursement Helps Promote Accountability CMS has recently developed a specific goal aimed at reducing questionable federal reimbursement and evaluating its oversight activities. CMS has established a goal to reduce by 10 percent in fiscal year 2006 the amount of federal reimbursement that has been questioned by CMS or OIG. CMS is collecting data on questionable claims for federal reimbursement identified from sources such as quarterly expenditure reviews, focused financial reviews, and OIG audits. According to a CMS official, as part of this process, CMS has identified a baseline amount of about $8 billion in cumulative questionable federal reimbursement, which represents state claims that (1) CMS has determined may not be allowable or has deferred payment on pending review of additional support from the states, or (2) OIG has questioned as a result of an audit. The goal for fiscal year 2006 is to resolve at least 10 percent of this $8 billion by (1) recovering amounts ultimately determined to be unallowable or (2) determining after further review that the claims are allowable. 
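The arithmetic behind this goal is straightforward and can be sketched as follows. This is a hypothetical illustration, not a CMS tracking tool; only the roughly $8 billion baseline and the 10 percent target come from the report, and the resolved amounts are invented for the example.

```python
# Hypothetical illustration of the fiscal year 2006 goal: resolve at least
# 10 percent of the roughly $8 billion baseline of questionable federal
# reimbursement, where "resolved" means either recovered as unallowable or
# determined allowable after further review. The resolved amounts below
# are invented for the example.
BASELINE = 8_000_000_000
GOAL = 0.10 * BASELINE                     # $800 million

recovered = 450_000_000                    # hypothetical recoveries
determined_allowable = 300_000_000         # hypothetical allowable claims
resolved = recovered + determined_allowable

print(f"goal: ${GOAL:,.0f}; resolved: ${resolved:,.0f}; "
      f"goal met: {resolved >= GOAL}")
```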
CMS officials acknowledge that the goal may not be attainable each year given the varying facts and circumstances of the questionable amounts. However, if properly established and tracked, goals of this nature should help improve the effectiveness of CMS oversight activities. CMS has also included the goal to reduce by 10 percent the amount of questionable federal reimbursement in the fiscal year 2006 performance agreements of CMS senior financial managers in the central office. According to CMS officials, the agency will continue to hold managers accountable for this type of goal each fiscal year. CMS has also included specific goals and performance standards in regional office financial managers' performance agreements. For example, one regional office has a goal for its managers to ensure that the financial analysts and funding specialists complete nine focused financial reviews and five funding source reviews in fiscal year 2006. CMS has improved its processes for tracking its financial management activities and the attainment of the goals it has set. The Financial Management Activities Report (FMAR) tracks the amount of regional office resources (staff time, personnel costs, and travel costs) spent on the various categories of activities in the financial management workplans. The Financial Issues Report tracks all questionable state claims for reimbursement identified by regional financial analysts and funding specialists in focused financial reviews, quarterly expenditure reviews, and any other activities that could result in a disallowance or deferral of state claims, including findings from OIG reports. The Financial Performance Spreadsheet is the tool CMS uses to track the fiscal year 2006 goal to resolve 10 percent of the cumulative questioned claims for federal reimbursement.
These actions, which we consider significantly underway, help improve CMS's ability to monitor, measure, and evaluate its financial oversight activities and specifically address the following recommendations from our 2002 report: (1) include specific Medicaid financial oversight performance standards in senior managers' performance agreements; (2) collect, analyze, and compare trend information on the results of oversight control activities, particularly deferral and disallowance determinations, focused financial reviews, and technical assistance; and (3) use the information collected to assess the overall quality of financial management oversight.

Other Efforts Help CMS's Oversight of Medicaid Finances

CMS has initiated two other programs to help carry out its responsibility at the federal level for helping ensure the propriety of Medicaid finances and comply with the Improper Payments Information Act of 2002: the Payment Accuracy Measurement pilot project, which was initiated in July 2001 and is now called the Payment Error Rate Measurement (PERM) project, and the Medicare-Medicaid data match project. Under the PERM program, states use a CMS-developed methodology to measure state Medicaid payment errors. By fiscal year 2007, CMS plans to have a national Medicaid payment error rate based on a sample of states and claims within those states. Under PERM, states will be expected to reduce their payment error rates over time by better targeting their activities to prevent and detect improper payments made to providers. Under the Medicare-Medicaid data match project, CMS facilitates the sharing of information between the Medicare and Medicaid programs by matching Medicare and Medicaid claims information on providers and beneficiaries to identify improper billing and utilization patterns that could indicate fraudulent schemes.
These two projects, which we consider significantly underway, have helped CMS's efforts to oversee state Medicaid finances and specifically address the following two recommendations from our 2002 report: (1) complete efforts to develop an approach to payment accuracy reviews at the state and national levels, and (2) incorporate advanced control techniques, such as data mining, data sharing, and neural networking, where practical to detect potential improper payments.

Some Previously Identified Weaknesses in Oversight Activities Have Yet to Be Addressed

While CMS has taken a number of actions that improve its oversight and address several weaknesses we identified in our prior report, there are previously identified weaknesses that the agency has not yet addressed. Specifically, CMS has not instituted mechanisms to measure how the risk of inappropriate federal reimbursement has changed as a result of corrective actions taken. In addition, CMS has not incorporated the use of the MSIS database into its oversight of states' claims or completed other systems projects intended to improve its analysis capabilities. CMS also has not developed profiles to document information on state fraud and abuse controls to use in its oversight of state claims. Finally, CMS has not developed a strategic plan specific to its Medicaid financial management activities.

Measuring how risks have changed—In our 2002 report, we recommended that CMS develop and institute mechanisms to make risk assessment a continuous process and to measure whether risks have changed as a result of corrective actions taken to address them. CMS has processes in place to identify risks, and management has established procedures to mitigate important risks, such as detailed reviews of certain high-risk issues.
However, CMS's processes still do not have the elements of risk management that are key to assessing whether actions to mitigate risks need to be adjusted, either because (1) they are not effective, (2) they are effective but need to be expanded, or (3) they are no longer needed because the risks have been resolved or reduced to a tolerable level. For example, CMS identified several Medicaid issues as part of its current risk assessment process that have been the subject of focused financial reviews across several states for several years—issues such as those related to claims for skilled professional medical personnel, family planning, and school-based administrative services. As discussed earlier, CMS has issued reports to the states on these issues that contained recommendations requesting the states to (1) return federal reimbursement that CMS determined was not allowable (disallowances), (2) provide additional documents for CMS to determine the allowability of questionable claims (deferrals), or (3) improve certain state controls or processes. However, CMS's current risk assessment process does not indicate how the corrective actions taken to address these issues have changed its assessment of risk or its future strategies for mitigating the risk that these issues pose. To CMS's credit, it has recently taken steps to change policies related to state claims for targeted case management services, an issue that has been the subject of multiple focused financial reviews. While it is not clear from CMS's risk assessment why this issue was given a higher priority than other issues identified, CMS officials explained that their process for determining what might be a high-risk issue comes from continuous coordination between financial management staff and Medicaid program staff who have in-depth knowledge of Medicaid policy and procedures.
The officials further explained that the results of their coordination and the fact that an issue is a high priority may not be noticeable to others until policy changes are included, for example, in HHS's budget submission or other legislation signed by the President. Documenting how the outcomes of detailed reviews are used to determine whether additional or fewer corrective actions are needed is an important step in risk management. For fiscal year 2006, CMS is planning to conduct additional detailed reviews intended to ensure that states have stopped certain intergovernmental transfers and other funding practices that have resulted in billions of dollars in inappropriate federal reimbursement. It will be important for CMS to use the results of these follow-up reviews as a basis for determining whether its prevention and mitigation steps are adequate and effective and then to adjust them accordingly. Fully documenting the results of these types of activities will help inform planning for future mitigation efforts. Because CMS has not fully implemented mechanisms to measure how risks have changed as a result of actions to address them, we are reiterating our prior recommendation in this area.

Improving analysis capabilities—In our 2002 report, we recommended that CMS use the comprehensive Medicaid payment data that states must provide to the national MSIS database. Use of these data could improve CMS's analysis capabilities. MSIS contains Medicaid program information, including data on billions of claims. This database could be used to identify trends in certain Medicaid services from prior-year claims that could be useful in analyzing current-year state claims. According to a CMS official, CMS has not yet developed the ability to make these data available for use by the financial analysts and funding specialists in their oversight activities. Further, only a few CMS staff with the requisite systems capabilities are currently able to access and analyze the data.
CMS officials said they plan to make these data more accessible in the future. Because CMS has not yet incorporated the use of MSIS in its oversight activities, we are reiterating our prior recommendation. CMS also has not yet completed two other systems projects intended to help improve its analysis capabilities. CMS started to develop (1) the Transactions, Information Inquiry, and Program Performance System project, an integrated financial management tool intended to link existing Medicaid data systems and tools, and (2) the Automated Medicaid State Plans Project, a project to explore electronic submission of state plans that would provide timely access to critical program information. CMS officials told us that due to funding constraints, these two projects have yet to be completed. Determining the systems projects needed to enhance CMS's analysis capabilities is important given the challenges of evaluating state Medicaid expenditures and funding practices.

Collecting and using information on state fraud and abuse control activities—In our 2002 report, we recommended that CMS enhance the information that it uses in its oversight of state claims by creating profiles that document each state's activities to oversee its Medicaid program and prevent fraud and abuse. For example, we recommended that the profiles include information on provider screening procedures and payment accuracy studies. CMS currently collects some information on these and other state program integrity efforts as part of compliance reviews that are conducted by program integrity staff in DFM and the 10 regional offices. These compliance reviews are to assess whether state Medicaid program integrity efforts comply with federal requirements such as those governing provider enrollment, claims review, and coordination with each state's Medicaid Fraud Control Unit.
However, CMS officials told us that there is limited coordination between the staff who conduct the compliance reviews and the financial management staff who oversee state claims. Further, the compliance reviews have focused on state compliance and have not evaluated the effectiveness of the states' fraud and abuse prevention and detection activities. CMS is starting to develop strategies, as part of the recently created Medicaid Integrity Program, that could address the weaknesses we have identified. The Deficit Reduction Act of 2005, enacted in February 2006, provided for the creation of a Medicaid Integrity Program and required CMS to develop a comprehensive plan for how it would implement the program. CMS officials have recently begun to develop the plan and have included proposals for hiring contractors to assess states' program integrity activities. Information on states' activities to oversee their Medicaid programs and prevent fraud and abuse is important in determining the appropriate level of federal oversight that should be applied to each state's claims. Because CMS is just starting to develop its plan and results are not yet known, we are reiterating our prior recommendations in this area.

Developing a strategic plan to guide Medicaid financial management activities—In our 2002 report, we reported that CMS was starting several initiatives, similar to those discussed in this report, to bring about improvements in its financial management activities and oversight. At the time of our 2002 review, CMS did not have a written strategic plan that described its many oversight activities and initiatives and the staff responsible for implementing them. Therefore, we recommended that CMS develop a written plan and strategy for Medicaid financial oversight.
However, CMS still has not published a comprehensive plan that describes the many aspects of its Medicaid financial management strategy and its plans for continuing and sustaining its recent improvement efforts. A strategic plan is a key management tool that can help clarify organizational priorities and unify agency staff in the pursuit of shared goals. Strategic plans are the starting point and basic underpinning for a system of program goal-setting and performance measurement. In accordance with the Government Performance and Results Act of 1993 (GPRA), a multiyear strategic plan articulates the fundamental mission (or missions) of an organization and lays out its long-term general goals for accomplishing that mission, including the resources needed to reach those goals. The clearer and more precise these goals are, the better able the organization will be to maintain a consistent sense of direction, regardless of leadership changes. HHS prepares a strategic plan as required by GPRA. The HHS strategic plan contains eight broad program performance goals related to the missions and programs of its operating divisions. However, only one goal relates to Medicaid financial management—an overall goal for all HHS programs to “achieve excellence in management practices.” Unlike the Medicare program, which in fiscal year 2001 began publishing a separate comprehensive financial management plan outlining problems and plans to address weaknesses in the program's internal controls, oversight, and financial systems, the Medicaid program has not developed its own financial management plan with the level of detail needed to be useful as a tool to guide its financial managers. Medicaid officials told us that they have several planning documents—such as the annual financial management work plans, the FMAR, and the Financial Issues Report discussed previously—that they use in managing financial management activities.
While these documents provide information on aspects of CMS's financial management activities, they do not clearly define the mission of Medicaid financial management, lay out the goals for continuously implementing that mission, or provide a complete description of the operational processes, skills, technology, and other resources required to meet CMS's financial management goals and objectives. Without a strategic plan, CMS lacks an appropriate “roadmap” to guide activities for ensuring sound financial management of the Medicaid program. Therefore, we are reiterating our recommendation in this area.

Use of HCFAC Funds to Enhance Medicaid Oversight Initiatives

During fiscal years 2003 through 2005, CMS received almost $46 million from the HCFAC account, which it has used to help fund programs related to its oversight of Medicaid. Congress enacted the HCFAC program as part of the Health Insurance Portability and Accountability Act of 1996 to consolidate and strengthen ongoing efforts to combat fraud and abuse in health care programs, including the Medicare and Medicaid programs. The legislation required the establishment of the national HCFAC program and created the HCFAC account within the Medicare Federal Hospital Insurance Trust Fund, which is funded by appropriations out of the Trust Fund. The HCFAC program is administered by HHS and the Department of Justice and is designed to coordinate federal, state, and local law enforcement activities with respect to health care fraud and abuse. HHS's OIG, the Federal Bureau of Investigation, and the Medicare Integrity Program receive direct appropriations from the HCFAC account, while the Medicaid program must request funds from the HCFAC account and compete with other HHS programs, such as the Administration on Aging and the Office of General Counsel, for allocations from the discretionary part of the HCFAC account.
Table 4 shows the discretionary HCFAC funds available to CMS in fiscal years 2003 through 2005 and the portion allocated to the Medicaid program run by CMSO for Medicaid financial management projects. CMSO used this money to help fund projects related to its oversight of Medicaid. Table 5 shows the various projects for the 3 fiscal years and the amounts allocated to those projects. The HCFAC account provided about $12 million to CMS for the funding specialists for fiscal years 2004 and 2005. The funding specialists have been funded on an annual basis with appropriations from the HCFAC account. There is a chance that adequate funding might not be provided through the HCFAC process in any given year for the funding specialists; thus, CMS officials have told us they would like to pursue ways of making the funding specialist positions permanent. CMS officials told us that a provision to make the positions permanent was included in the agency's fiscal year 2007 budget submission, but the provision was rejected during department-level discussions, so the funding specialists will continue to be funded on an annual basis with HCFAC funds. CMS officials also told us that some of the turnover of funding specialist staff was due to the uncertainty of funding and whether the positions would become permanent. Creating permanent funding specialist positions is important, given how CMS has been using these specialists to perform reviews of high-risk issues.
Other Medicaid projects included in table 5 for which CMS used HCFAC funds include:

interagency agreements between CMS and OIG for OIG audits of high-risk issues such as family planning services in managed care, skilled professional medical personnel, upper payment limits, school-based claims, home- and community-based services, and Medicaid administrative costs reported by state agencies other than the Medicaid single state agency;

the Medicare-Medicaid data match project, developed to identify improper billing and utilization patterns by matching Medicare and Medicaid claims information on providers and beneficiaries;

the Payment Accuracy Measurement, PERM, and SCHIP Error Rate Pilot projects, which allow states to test a methodology to determine improper payment error rates in their SCHIP and/or Medicaid programs;

the Transactions, Information Inquiry, and Program Performance System, intended to develop and enhance an integrated financial management tool linking existing CMSO data systems and tools containing critical financial, statistical, administrative, and other data;

an organizational study of Medicaid financial processes within CMS, done by OIG under an interagency agreement;

the Annuities Project, which used both qualitative and quantitative research methods to develop a comprehensive picture of states' experience with the use of annuities as an asset-sheltering device by Medicaid applicants and their spouses;

a Waiver Management System Database project, which updated the current Waiver Management System Database; and

a project to research options for automating the Medicaid state plan process, from the creation and submission of state plan amendments at the state level through approval at the central and regional offices.

We obtained documentation to support the use of HCFAC funds for the above projects.
Conclusions

Since we last reported in 2002, CMS has made improvements to the processes it uses to oversee states and identify payment errors. Efforts undertaken, such as hiring the funding specialists, consolidating the review of reimbursement state plan amendments, and the Medicare-Medicaid data match project, have enhanced CMS's ability to identify issues that put federal Medicaid dollars at risk. While CMS's actions address previously identified weaknesses and recommendations from our 2002 report related to (1) targeting resources to higher risk areas, (2) monitoring performance, (3) establishing mechanisms for ensuring accountability, (4) developing an approach to payment accuracy reviews, and (5) incorporating advanced control techniques, it is too soon to assess the impact they will have on improving overall financial management and addressing emerging issues that put federal Medicaid dollars at risk because the results of some efforts are not yet known. In addition, several weaknesses remain in CMS's oversight that could be addressed by implementing our prior recommendations that remain open. Specifically, CMS still lacks processes to adjust oversight activities for changes in risk; therefore, we reiterate our prior recommendation related to measuring whether risks have changed as a result of corrective actions to address them. Also, because CMS has not yet addressed weaknesses we identified in its analysis capabilities, we reiterate our prior recommendation for CMS to incorporate the use of MSIS data in its analysis of state claims. We also reiterate our prior recommendations for collecting and using information on state fraud and abuse control activities because this information is important to determining the appropriate level of federal oversight of state claims. The absence of a strategic plan could hinder CMS in sustaining its current efforts and addressing the weaknesses that we have identified.
Therefore, we reiterate our prior recommendation that CMS develop a strategic plan specific to Medicaid financial management. Also, CMS may not have the staff and systems needed to continuously identify and target high-risk issues. Therefore, we stress the importance of creating permanent funding specialist positions and determining what systems projects are needed to improve CMS's analysis capabilities.

Recommendations for Executive Action

To further improve and sustain CMS's oversight of state claims, including its ability to identify and address emerging issues, we recommend that the Administrator of CMS take the following two additional actions: (1) create permanent funding specialist positions, and (2) determine what systems projects are needed to further enhance data analysis capabilities.

Agency Comments and Our Evaluation

In written comments on a draft of this report, which are reprinted in appendix III, CMS agreed with our findings and recommendations and stated that it will continue examining issues raised in this report, including prior recommendations from our 2002 report that are still outstanding. CMS also stated that it will work to implement the two recommendations made in this report. CMS expressed its support for our recommendation to create permanent funding specialist positions, which are currently funded with HCFAC dollars, and stated it will consider alternative approaches to provide adequate resources. CMS further stated it will follow our second recommendation and begin the process of determining the systems projects that are needed to further enhance data capabilities. CMS also provided additional information on several of the activities we reported on, including additional activities of the funding specialists and actions being taken on our prior recommendations. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter.
We will then send copies to the Secretary of Health and Human Services, the Administrator of CMS, the Inspector General of HHS, and other interested parties. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8341 or calboml@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors are acknowledged in appendix IV.

Appendix I: Scope and Methodology

To identify the extent to which the Centers for Medicare & Medicaid Services (CMS) has improved its oversight, including its ability to identify and address emerging issues that put federal Medicaid dollars at risk, we performed work at CMS headquarters and two regional offices. We reviewed and assessed aspects of CMS's financial oversight processes, which include identifying high-risk areas in order to develop an annual regional office financial management workplan and conducting focused financial reviews of high-risk areas. We reviewed 35 of the 113 focused financial reviews conducted by CMS regional offices for fiscal years 2003 and 2004. We selected reviews of specific issues that were examined across regions and fiscal years, such as disproportionate share hospital payments and school-based administrative services. We did not select certain issues, such as upper payment limits and intergovernmental transfers, because these issues have been well covered in other reports and by CMS's actions. We looked for consistency of the reviews among regions and fiscal years and the extent to which states implemented CMS's recommendations.
We obtained and reviewed documentation showing the activities and work performed by the new funding specialists hired by CMS during 2004 and 2005 as part of its efforts to improve its financial management of the Medicaid program. We reviewed our prior reports and reports by the Department of Health and Human Services' Office of Inspector General (OIG) and others. We also reviewed interagency agreements between CMS and OIG. We interviewed OIG staff and CMS officials and staff at the CMS central office in Baltimore, Maryland, and two regional offices—New York and Chicago. We selected the New York and Chicago regional offices to visit based on the number of focused financial reviews we selected that were performed by these regions. Sixteen of the 35 focused financial reviews we selected were performed by these two regions; the remaining 19 were done by seven other regional offices. We also considered the Comptroller General's Standards for Internal Control in the Federal Government. To determine how CMS used funds from the Health Care Fraud and Abuse Control (HCFAC) account for fiscal years 2003 through 2005, we obtained from CMS a list of Medicaid projects that were funded from the HCFAC account in those years. We obtained and examined documentation from CMS, such as invoices; grant awards; interagency agreements; and accounting, budget, and payroll records, that supports the information provided by CMS on how it spent HCFAC funds for fiscal years 2003 through 2005. We also reviewed the HCFAC program and funding legislation, 42 U.S.C. §§ 1320a-7c, 1395i(k). We requested written comments on a draft of this report from the Administrator of CMS or his designee. His written comments are reprinted in appendix III. We conducted our review from February 2005 to May 2006 in accordance with generally accepted government auditing standards.
Appendix II: Status of Prior Recommendations

Appendix III: Comments from the Centers for Medicare & Medicaid Services

Appendix IV: GAO Contact and Staff Acknowledgments

Acknowledgments

Staff members who made key contributions to this report include Kimberly Brooks (Assistant Director), Theresa Bowman, Lisa Crye, Abe Dymond, Diane Morris, Michelle Smith, and Edward Tanaka.

Related GAO Products

Medicaid Integrity: Implementation of New Program Provides Opportunities for Federal Leadership to Combat Fraud, Waste, and Abuse. GAO-06-578T. Washington, D.C.: March 28, 2006.

Medicaid Fraud and Abuse: CMS's Commitment to Helping States Safeguard Program Dollars Is Limited. GAO-05-855T. Washington, D.C.: June 28, 2005.

Medicaid: States' Efforts to Maximize Federal Reimbursements Highlight Need for Improved Federal Oversight. GAO-05-836T. Washington, D.C.: June 28, 2005.

Medicaid Financing: States' Use of Contingency-Fee Consultants to Maximize Federal Reimbursements Highlights Need for Improved Federal Oversight. GAO-05-748. Washington, D.C.: June 28, 2005.

Health Care Fraud and Abuse Control Program: Results of Review of Annual Reports for Fiscal Years 2002 and 2003. GAO-05-134. Washington, D.C.: April 29, 2005.

High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.

Medicaid Program Integrity: State and Federal Efforts to Prevent and Detect Improper Payments. GAO-04-707. Washington, D.C.: July 16, 2004.

Medicaid: Intergovernmental Transfers Have Facilitated State Financing Schemes. GAO-04-574T. Washington, D.C.: March 18, 2004.

Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004.

Major Management Challenges and Program Risks: Department of Health and Human Services. GAO-03-101. Washington, D.C.: January 2003.

Medicaid Financial Management: Better Oversight of State Claims for Federal Reimbursement Needed. GAO-02-300. Washington, D.C.: February 28, 2002.
Medicaid--the federal-state health care financing program--covered over 56 million people at a cost of $295 billion in fiscal year 2004, the latest fiscal year for which complete data are available. The Centers for Medicare & Medicaid Services (CMS) is the federal agency responsible for overseeing states' Medicaid programs and ensuring the propriety of expenditures reported by states for federal reimbursement. In 2002, GAO reported on weaknesses in CMS's oversight of Medicaid financial management and made recommendations to CMS to strengthen its oversight process. In fiscal year 2003, CMS started receiving funds from the Health Care Fraud and Abuse Control (HCFAC) program to help improve Medicaid financial management. GAO was asked to evaluate CMS's financial management activities, including following up on prior recommendations. In this report, GAO examined (1) the extent to which CMS has improved its ability to identify and address emerging issues that put federal Medicaid dollars at risk and (2) how CMS used funds for Medicaid from the HCFAC account. CMS has undertaken several steps to improve its Medicaid financial management activities, including its efforts to oversee state claims for federal reimbursement and to identify payment errors. CMS hired about 90 funding specialists, thus enhancing its ability to address high-risk state funding practices that inappropriately increase federal costs. CMS also created a new unit that centralized responsibility for approving state plan amendments related to reimbursement. CMS continued to identify billions of dollars in questionable federal reimbursement through focused financial reviews. CMS also set goals aimed at reducing questionable federal reimbursement and holding financial managers accountable and enhanced its internal processes for tracking results of its financial management activities. 
These and other efforts, such as CMS's approach for measuring payment errors under the Improper Payments Information Act, represent improvements in the processes that CMS uses in its oversight of states. While these actions also address previously identified weaknesses and recommendations from our 2002 report, it is too soon to assess the impact they will have on improving overall financial management and addressing emerging issues that put federal Medicaid dollars at risk because some have just recently been initiated and results are not known yet. Further, there are a number of previously identified weaknesses that the agency has not yet addressed. Specifically, CMS has not instituted mechanisms to measure how the risk of inappropriate federal reimbursement has changed as a result of corrective actions taken. In addition, CMS has not incorporated the use of the Medicaid Statistical Information System database into its oversight of states' claims or other systems projects intended to improve its analysis capabilities. Further, CMS has not developed profiles to document information on state fraud and abuse controls to use in its oversight of state claims. Finally, CMS has not developed a strategic plan specific to its Medicaid financial management activities. Because these issues are important to further improving and sustaining CMS's oversight activities, we reiterate and build on our prior recommendations in these areas. During fiscal years 2003 through 2005, CMS received almost $46 million from the HCFAC account that it used to help fund programs related to its oversight of the Medicaid program, including about $12 million for the funding specialists for fiscal years 2004 and 2005. The funding specialist positions have been funded on an annual basis with appropriations from the HCFAC account. 
There is the chance that adequate funding might not be provided through the HCFAC process in any given year for the funding specialists; therefore, creating permanent funding specialist positions is important. CMS used the other $34 million for other projects such as researching options for automating the Medicaid state plan process, and interagency agreements with the OIG to conduct audits of high-risk areas. GAO obtained documentation to support the use of HCFAC funds for these projects.
Introduction The U.S. Department of Agriculture (USDA) manages a wide array of programs that affect the lives of all Americans and millions of people around the world. USDA relies on a multitude of financial management systems to help operate its complex organization which, in fiscal year 1994, managed $146 billion in assets and accounted for $75 billion in expenses. To more efficiently manage these programs, the Department of Agriculture Reorganization Act of 1994 authorized USDA to regroup complementary programs from 43 component agencies into 29 agencies under seven overall mission areas. The seven mission areas are: (1) farm and foreign agricultural services, (2) rural economic and community development, (3) food, nutrition, and consumer services, (4) natural resources and environment, (5) research, education, and economics, (6) food safety, and (7) marketing and regulatory programs. According to USDA, its reorganization will also consolidate or eliminate 1,100 of its more than 14,000 field offices. USDA’s Financial Management System Responsibilities The Chief Financial Officers (CFO) Act of 1990 vested agency CFOs with the responsibility for overseeing all financial management activities relating to the programs and operations of the agency. This includes the responsibility for developing and maintaining an integrated agency accounting and financial management system that provides for (1) complete, reliable, consistent, and timely information that is prepared on a uniform basis, (2) the development and reporting of cost information, (3) the integration of accounting and budgeting information, and (4) the systematic measurement of performance. In March 1993, the former USDA Secretary decided to establish the Office of the CFO to oversee all financial management activities relating to the programs and operations of the Department, including USDA’s departmentwide financial management systems. The Office of the CFO also manages the National Finance Center (NFC). 
NFC develops, manages, and operates the financial management systems that support the budgeting and accounting functions for most of USDA’s salaries and administrative expenses, and performs most of USDA’s administrative systems functions such as payroll and property. Although the Office of the CFO manages NFC, it shares responsibility for many of the NFC systems with the Assistant Secretary for Administration and various user groups. Most of USDA’s large component agencies develop and manage their program accounting and budgeting systems independently. These component agency systems account for most of USDA’s annual expenses. USDA’s Long-standing Financial Management System Weaknesses We and USDA’s Office of Inspector General (OIG) have previously reported on USDA’s numerous component agency and NFC financial management system weaknesses. As a result of these audit findings, USDA, over the past several years, has reported many financial management system material weaknesses and nonconformances. For example, in its fiscal year 1994 Federal Managers’ Financial Integrity Act report, USDA cited 22 financial management system material nonconformances, some dating back to 1988. These nonconformances were related to both NFC’s financial management systems and those managed by component agencies. As a result of these weaknesses, in 1994 and 1995, the Office of Management and Budget (OMB) reported as a high-risk area USDA’s aged and outmoded financial systems, inadequate financial system controls, ineffective central system planning and installation, and inaccurate financial reports. In addition, the vast majority of USDA’s financial management systems do not meet financial management system standards set by OMB, the Department of the Treasury, and the Joint Financial Management Improvement Program (JFMIP). 
Most of USDA’s financial management systems were developed in isolation without common guidelines, definitions, or oversight and using incompatible accounting and data standards. In addition, these systems are not integrated; do not provide policy, program management, and operating staff with necessary financial data in a timely manner; and do not provide USDA with a common language for financial management. As a result, USDA can draw similar information from different systems and obtain different results. Also, the lack of integration and standardization makes sharing or merging information across systems and organizations very difficult. As a result of these weaknesses, many USDA component agencies also maintain duplicative, costly, and time-consuming unofficial or “cuff” records and systems. In addition, as we mentioned in our recent letter to the Secretary of Agriculture, these weaknesses have also resulted in delayed financial statement preparation and audits. FISVIS Goals and Strategy To address USDA’s pervasive financial management system weaknesses and carry out its financial management system responsibilities under the CFO Act, in March 1993, the Department initiated the Financial Information Systems Vision and Strategy (FISVIS) project. The FISVIS team is composed of officials from the Office of the CFO, component agencies, and NFC. FISVIS’ ultimate goal is to have a single, integrated, and seamless financial management system implemented by 1998. USDA established several vision statements for the FISVIS project, including the following: Policy, management, program, and operating personnel will have access to timely, accurate, reliable, consistent, and complete financial information when and in the form they need it. Agencies will retain the flexibility to develop and maintain financial and mixed systems to support their mission. Implementation of the FISVIS effort will result in streamlined operations and, therefore, in increased efficiency. 
Budget, program, and financial data will be integrated. The Department and agencies will work cooperatively to meet agency financial information needs and departmental requirements. The Office of the CFO intends to accomplish its FISVIS vision by using an incremental approach, based on a foundation that would achieve, and then build on, early successes. To achieve its FISVIS vision statements, USDA identified five major strategies: (1) provide communication, oversight, and project management, (2) develop and implement departmentwide financial standards and definitions, (3) develop and implement a foundation system, (4) assist owners of feeder systems to integrate their systems into the Foundation system, and (5) support interim improvement efforts. Objectives, Scope, and Methodology The objectives of our review were to assess whether the FISVIS project will (1) resolve USDA’s major financial management system weaknesses and (2) consolidate USDA’s separate financial and mixed systems that perform similar functions, as well as reengineer USDA’s financial processes. To assess whether FISVIS will resolve USDA’s current financial management system weaknesses, we first identified these weaknesses by reviewing our and the OIG’s consolidated USDA and component agency financial statement audit reports and other audit reports. We also reviewed USDA’s Federal Managers’ Financial Integrity Act report, its 5-year Financial Management Plan prepared pursuant to the CFO Act, and other pertinent documents. After identifying USDA’s financial management system weaknesses, we assessed the FISVIS October 1993 strategy document and implementation plan and interviewed the former and current FISVIS project manager, CFO, and Deputy CFO to determine USDA’s strategy to address these weaknesses. We also interviewed OIG officials to determine whether they believed that the FISVIS strategy would address their audit findings. 
In addition, we interviewed an OMB budget examiner for USDA to discuss USDA’s financial management system problems and FISVIS’ strategy for addressing them. We also reviewed the USDA Financial and Accounting Standards Manual and the USDA Financial Management Information Architecture Document, and assessed the Foundation system’s procurement by reviewing the General Service Administration’s Financial Management System Software Multiple Award Schedule contract and other relevant procurement documents. In addition, we interviewed the General Service Administration’s Contracting Officer, a Department of the Treasury Financial Management Service official, the Contracting Officer and Contracting Officer’s Technical Representative for the Foundation system, and USDA Office of Information Resource Management Acquisition Review Team officials. Because USDA had not completed acceptance testing of the Foundation system software by the end of our review, we did not assess the implementation of the Foundation system. We also reviewed the June 1995 proposed rule on the Office of the CFO’s delegation of authority. We interviewed the CFO, Deputy CFO, and other officials about the Office of the CFO’s role involving component agency and NFC financial management systems. We also interviewed the senior financial officials of USDA’s largest component agencies and reviewed their financial management system plans. In addition, we assessed the Consolidated Farm Service Agency’s plans to use the Foundation system contract to procure the same software and interviewed the manager of this project and other pertinent officials. To assess whether the FISVIS project will address consolidating financial and mixed systems that perform similar functions and reengineering USDA’s financial processes, as prescribed by our draft Federal Financial Management Systems Review Methodology, we developed an inventory of USDA’s component agencies and NFC financial management systems. 
We asked the component agencies and NFC to characterize the functions that these financial management systems perform by the functions and definitions listed in the JFMIP’s Framework for Federal Financial Management Systems. We also identified USDA’s planned financial management system improvement and business process reengineering efforts by interviewing component agency and NFC officials as well as reviewing budgeting, planning, and other pertinent documents. We performed our work at the Department of Agriculture in Washington, D.C.; FISVIS project team headquarters in Alexandria, Virginia; and the National Finance Center in New Orleans, Louisiana. We also visited the Consolidated Farm Service Agency in Kansas City, Missouri; the Rural Economic and Community Development mission area in Washington, D.C., and St. Louis, Missouri; the Forest Service in Rosslyn, Virginia; the Natural Resources Conservation Service in Washington, D.C.; the Animal and Plant Health Inspection Service in Hyattsville, Maryland; the Agricultural Research Service in Greenbelt, Maryland; and the Food and Consumer Services in Alexandria, Virginia. Our work was performed between September 1994 and July 1995, in accordance with generally accepted government auditing standards. We requested written comments from the Secretary of Agriculture on a draft of this report. In response, we received written comments from USDA’s Chief Financial Officer. These comments are discussed, along with our evaluation, in chapter 4 and are reprinted in appendix I. USDA Has Made Progress but FISVIS Will Likely Not Resolve USDA’s Current Financial System Problems While USDA has made laudable progress toward implementing the initial phase of the FISVIS project, its ability to achieve the ultimate goal of a single, integrated financial management system is doubtful. The Office of the CFO has initiated several actions to begin to correct USDA’s many financial management system problems. 
However, because the Office of the CFO has not set up a mechanism to enforce its financial system standards, FISVIS’ ultimate success is highly dependent on the voluntary compliance of USDA’s component agencies and NFC. In addition, the Office of the CFO has not implemented a configuration management policy or version control process for the Foundation system software. Without these policies and processes, efforts to implement common data and common transaction processing, a necessary step to achieve a single, integrated financial management system, will be significantly hampered and future system maintenance costs increased. Office of the CFO Is Making Progress in Accomplishing Initial Phase of FISVIS The Office of the CFO has demonstrated strong leadership by moving forward with the initial phase of the FISVIS project. This phase includes developing departmental financial system standards and purchasing a commercial off-the-shelf Foundation Financial Information System. Completing this phase will be a major step in addressing USDA’s problem with nonintegrated financial management systems because it will establish common data definitions and transaction processing. This is a necessary step towards the development of a single, integrated financial management system—FISVIS’ major goal. OMB Circular A-127 requires agencies to establish and maintain a single, integrated financial management system that includes common data element definitions and common transaction processing. 
Furthermore, the JFMIP’s Framework for Federal Financial Management Systems states that without a single, integrated financial management system, poor policy decisions are more likely to occur due to inaccurate or untimely information; managers are less likely to be able to report accurately to the President, the Congress, and the public on government operations in a timely manner; scarce resources are more likely to be directed towards the collection of information, rather than to the delivery of the intended programs; and modifications to financial management systems, necessary to keep pace with rapidly changing user requirements, cannot be coordinated and managed properly. USDA’s current financial management systems do not contain common data element definitions and common transaction processing. To address this problem, in April 1994, the Office of the CFO and the FISVIS team released USDA’s first departmental financial standards—the USDA Financial and Accounting Standards Manual and the USDA Financial Management Information Architecture Document (each of these documents has been subsequently updated). These documents contain governmentwide and USDA-specific financial accounting requirements and are intended to establish a structure for satisfying USDA’s financial management business needs. USDA based its standards documents on federal financial management requirements and USDA-specific requirements developed at joint requirement planning meetings attended by representatives from the Office of the CFO, NFC, and the component agencies. In December 1994, USDA took another step towards implementing a single, integrated financial management system by awarding a contract to American Management Systems, Inc. (AMS) for the Foundation system. The requirements for the Foundation system were based on federal financial management and USDA-specific requirements. 
The Office of the CFO estimates that it will cost $90 million over 8 years (including the cost of the contract and USDA’s internal and support service costs) to install, implement, and maintain the Foundation system. The AMS software is a commercial off-the-shelf system procured through the General Services Administration’s Financial Management System Software Multiple Award Schedule. The Foundation system will perform general ledger management, cost management, receipt management, payment management, funds management, and financial reporting. USDA is now in the process of implementing the Foundation system. In January 1995, AMS installed its software at NFC. The Department is evaluating and testing this software as well as developing individual component agency implementation strategies. The Office of the CFO plans a phased-in implementation approach starting with five of its organizations (four component agencies and the Office of the CFO), with the rest of the Department to follow. Each of the five organizations will work with the FISVIS team to tailor an implementation plan to its unique operation. This includes making decisions on such items as which system functions the organization will use and defining its account classification structure. When fully implemented, the Foundation system will receive data from USDA’s component agency and NFC feeder systems. In July 1995, the CFO stated that his goal is to implement the Foundation system departmentwide by the end of fiscal year 1997. The Office of the CFO Cannot Ensure FISVIS’ Success Because the full implementation of FISVIS hinges on the voluntary compliance of component agencies with the financial management system standards, the Office of the CFO cannot ensure the project’s success. 
Historically, USDA’s departmental oversight of component agencies’ financial management systems has been weak; however, USDA plans to increase the CFO’s authority and responsibilities over component agency financial management systems. In June 1995, USDA published a proposed rule in the Federal Register that gives the CFO overall responsibility for the Department’s financial management systems and includes new responsibilities consistent with the CFO Act and OMB’s implementing guidance, such as approving component agency financial management systems design and enhancement projects, as well as overseeing and recommending approval of component agency financial management budgets. According to the CFO’s July 25, 1995, testimony before the House Subcommittee on Government Management, Information and Technology, the new authorities will give the CFO, for the first time, real responsibility for component agencies financial systems. While it is too early to evaluate the ultimate effect that the CFO’s proposed new authority and responsibilities may have, the Office of the CFO currently continues to have a limited role regarding component agency financial management systems compliance with the new financial standards. For example, although the CFO instructed the component agencies in November 1994 to ensure that their financial management systems conform with USDA Financial and Accounting Standards Manual requirements, the Office of the CFO has not established a structure or process to enable it to enforce compliance. OMB’s guidance on implementing the CFO Act states that the CFO’s authority should include ensuring compliance throughout the Department and its component parts with (1) applicable accounting standards and principles and (2) financial information and systems functional standards. 
In addition, USDA needs a process to ensure compliance because USDA’s component agencies and NFC stated that they had not reviewed their financial and mixed systems to determine whether they comply with the USDA Financial and Accounting Standards Manual. Further, the component agencies and NFC had scheduled only 15 percent of their systems to be reviewed, although many stated that they would perform such reviews in the future and the Office of the CFO stated that NFC is now beginning to review its financial systems as part of the implementation of the Foundation system. The Office of the CFO also plans to implement a financial and accounting standards administration function to help component agencies implement the financial standards. For example, according to the CFO, one of the tasks of the financial and accounting standards administration function will be to assist component agencies and staff offices with incorporating USDA’s financial standards into their new or reengineered financial and mixed system development projects. The Office of the CFO tasked a contractor with drafting a financial and accounting standards directive addressing compliance with USDA’s financial standards. The Office of the CFO also has a limited role over component agency financial management system development efforts. In November 1994, the CFO instructed the component agencies to implement the USDA Financial and Accounting Standards Manual and the USDA Financial Management Information Architecture Document during any financial management system development efforts, but the Office of the CFO’s ability to ensure that these standards are built into these system development efforts is limited. For example, three component agencies told us that the CFO did not have a role in their system development efforts. Four component agencies stated that the Office of the CFO has a role in their system development efforts through its membership in their Acquisition Review Teams. 
Each member of these review teams must approve a component agency’s financial management system acquisition plan before the acquisition can proceed. However, the CFO and Deputy CFO stated that the Acquisition Review Team process is not an effective tool to review component agency financial management system development efforts because many important decisions are made prior to the Acquisition Review Team’s involvement. For example, the requesting agency develops an alternatives and benefit/cost analysis, which is then presented at an Acquisition Review Team meeting. Moreover, not all system development efforts go through the Acquisition Review Team process. For example, Forest Service has an on-going personnel system development effort, with an estimated 5-year cost of about $2.5 million, that did not undergo the Acquisition Review Team process. According to officials who administer the review process, the major component agencies and NFC do not always seek approval through the Acquisition Review Team process because they often perform their system development efforts in-house and do not procure systems. In addition, in some cases, the Office of the CFO did not participate in a financial management system Acquisition Review Team case. For example, the Office of the CFO did not participate in the Acquisition Review Team for the Natural Resources Conservation Service’s Financial Management System, which is estimated to cost $96 million over 10 years. According to the former USDA official who set up the team, the Office of the CFO was invited to join the Acquisition Review Team but chose not to participate. However, USDA’s Deputy CFO stated that he was unaware of the Natural Resources Conservation Service’s effort until we brought it to his attention. 
As a result of our bringing this development effort to the Deputy CFO’s attention, the Natural Resources Conservation Service must now obtain the Office of the CFO’s written approval before any system, or part of a system, is developed under this effort. According to the Deputy CFO, if provided sufficient resources, the Office of the CFO would work with component agencies’ to evaluate their current financial management systems and system development efforts. As of May 24, 1995, the Office of the CFO had designated five and a half full-time equivalents for departmentwide financial systems, policy, and procedures coordination, with three full-time positions vacant (which the CFO is trying to fill). In its fiscal year 1996 budget request, USDA asked for eight additional staff years for the Office of the CFO to implement financial systems oversight, correct deficiencies in the Department’s financial management systems, and provide better stewardship over USDA’s resources. While the Office of the CFO has a limited number of positions designated for financial management system reviews, NFC and the component agencies employ several thousand financial management personnel. The Office of the CFO’s fiscal year 1995 authorized staffing level was 1,425, of which 1,340 were stationed at NFC. However, according to the CFO and Deputy CFO, most of the NFC personnel are (1) generally computer programmers, operations accountants, and clerks who do not have the types of skills necessary to perform financial management system reviews and (2) needed for on-going NFC work. In addition to the Office of the CFO’s staff, as of March 1995, the component agencies employed about 2,900 accounting and budget personnel. 
While the Office of the CFO has not performed a review of USDA’s financial management staffing needs for USDA as a whole, the CFO and Deputy CFO agreed that there may be opportunities for USDA to redistribute or use temporary assignments of some of the Department’s financial management personnel to perform financial management system reviews. Until the CFO can ensure that USDA’s component agencies and NFC have implemented the departmentwide financial standards, USDA will continue to have nonintegrated financial management systems that contain incompatible and inconsistent financial data. As a result, the new Foundation system will merely summarize unreliable component agency and NFC financial data and USDA’s financial management systems will continue to be high risk. The Office of the CFO Has Not Established a Configuration Management Policy and Version Control Process The Office of the CFO has not established a configuration management policy or version control process to help manage and control the Foundation system software modifications and version updates. Configuration management policies and version control can lower USDA’s future costs by minimizing changes to the contractor’s original software version and ensuring software development efforts are not duplicated at multiple sites. Moreover, the JFMIP Framework for Federal Financial Management Systems calls for agencies to place common software under version control. The Office of the CFO’s contract for the Foundation system software allows component agencies to procure the same AMS software. As of July 1995, only one component agency, the Consolidated Farm Service Agency (CFSA), had decided to procure this software through a task order to the contract. CFSA plans to spend about $174.5 million over 11 years to procure, implement, modify, and operate this software at the National Computer Center where CFSA’s other financial management systems are housed. 
Most of the $174 million will be nonacquisition related, such as accounting and clerical staff operating costs. Although both the Office of the CFO and CFSA intend to modify their respective copies of the Foundation system software, the Office of the CFO does not have a configuration management policy. Such a policy would address procedures for (1) ensuring that a proposed software modification is necessary, (2) determining whether a modification should result in a change to the baseline software or be implemented in a separate module, and (3) ensuring that software at multiple locations remain synchronized. The Office of the CFO also does not have a version control process. An effective version control process would ensure that either the same software releases are used or that different releases are managed effectively. This is particularly important in cases where more than one organization is managing and operating copies of the same software. The Office of the CFO and CFSA have recognized the importance of configuration management and version controls. In July 1995, they agreed to prepare a configuration management plan that, according to the Deputy CFO, will include a version control process. FISVIS Strategy Does Not Address Consolidating Financial Systems and Reengineering Processes The FISVIS strategy does not attempt to consolidate or eliminate overlapping financial management systems or reengineer existing financial processes across component agencies. USDA has over 100 financial management systems that perform many similar functions. This environment will continue to exist even after FISVIS because USDA is planning to spend hundreds of millions of dollars to replace or redesign these systems without a financial management systems plan to consolidate these systems on a departmentwide basis. 
In addition, although many component agencies plan to reengineer their own financial processes, USDA has not placed the CFO in a leadership role that would help ensure that financial processes are reengineered from a departmentwide perspective. USDA Supports Many Overlapping Financial Management Systems As table 3.1 illustrates, USDA has 115 financial management systems—62 financial systems and 53 mixed systems. Most of these systems are independently managed by USDA’s component agencies and NFC. In fiscal year 1994, USDA spent over $187 million to operate and maintain these systems. As table 3.2 illustrates, these 115 systems perform many similar and overlapping functions. As discussed in chapter 1, because many of these systems are inadequate, many component agency field offices also maintain informal or “cuff” systems that perform the same functions as the “official” systems. For example, Forest Service told us that they had six “national” financial management systems. However, these six did not include the more than 100 systems that Forest Service regions and stations maintain. For example, Forest Service regional offices use a Project Work Planning System that includes a budget allocation function. According to Forest Service, this system is not a “national” system and was developed by an individual regional office because of the inadequacies of Forest Service’s and NFC’s financial management systems. In other cases, regional offices and stations maintained systems for their specific office. USDA Does Not Plan to Consolidate or Eliminate Overlapping Systems USDA’s overlapping systems are likely to continue, even after FISVIS is fully implemented, because the FISVIS strategy does not address consolidating or eliminating overlapping systems on a departmentwide basis. Instead, the FISVIS strategy provides for component agencies to meet their specific needs by developing and managing their own financial management systems. 
However, this decision could be costly because component agencies plan to spend hundreds of millions of dollars over the next several years redesigning or replacing the current financial management systems without the guidance of an overall departmentwide financial management systems architecture. The following are examples of on-going or planned financial management system development efforts by various component agencies: The Natural Resources Conservation Service estimated that its Financial Management System effort will cost about $96 million over its 10-year life cycle. CFSA has several development efforts planned, including the Core Accounting System, estimated to cost $174.5 million over its 11-year life cycle. The Rural Housing and Community Development Service has several on-going development efforts, including (1) the New Guaranteed Loan System, estimated to cost $62 million over its 17-year life cycle and (2) the Dedicated Loan Origination/Servicing System, estimated to cost $285 million over its 15-year life cycle. Because the FISVIS strategy does not include a financial management system architecture, once USDA’s many financial management system development efforts are completed, USDA will continue to have a multitude of financial management systems that perform similar functions but that may not be integrated or tied together. In order to implement a single, integrated financial management system—required by the CFO Act and OMB and a major goal of FISVIS—OMB specifies that agencies should plan and manage their financial management systems in a unified manner. A critical step in accomplishing this is the development of a financial management systems architecture. According to JFMIP, a financial management systems architecture serves as a blueprint for the logical combination of financial and mixed systems to provide the budgetary and financial management support for program and financial managers. 
Through the process of developing this architecture, USDA would determine where savings could be achieved by consolidating systems that perform the same or similar functions. As early as December 1993, the Office of the CFO recognized the need for an overall plan to guide the modernization of USDA’s financial information systems. However, according to the Deputy CFO, such a plan was not developed because of a lack of resources. We believe that USDA could find areas where it could reduce costs by consolidating or eliminating overlapping systems through a mechanism such as cross-servicing. For example, USDA supports 91 financial and mixed systems that perform the same functions as the AMS software packages being installed at NFC (the Foundation system) and the National Computer Center (CFSA’s Core Accounting System). However, only 16 of these 91 systems will be either fully or partially replaced by the AMS software. In fiscal year 1994, USDA spent over $160 million to operate and maintain these 91 systems, of which about $13 million pertained to the 16 systems that the AMS software will fully or partially replace. Through cross-servicing, some of the operating costs of the remaining 75 systems could be eliminated or reduced. USDA currently performs successful cross-servicing for some administrative systems. For example, NFC cross-services payroll, personnel, and other administrative services for a diverse group of USDA and non-USDA agencies. Agencies serviced through NFC have achieved significant savings by avoiding redundant systems development and design initiatives and by reducing annual maintenance and processing costs. In one case, the Department of Commerce estimated that it avoided system development expenditures totaling $11 million for a payroll/personnel system and a personal property system, as well as achieving processing cost savings of $2 million per year. USDA could also reduce its systems development costs through joint development efforts.
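The scale of this cross-servicing opportunity can be illustrated with a rough calculation based on the fiscal year 1994 figures cited above. The 25-percent savings share used at the end is purely a hypothetical assumption; the report does not estimate what portion of the remaining systems' operating costs cross-servicing could actually eliminate.

```python
# Rough arithmetic behind the cross-servicing opportunity described above,
# using the report's fiscal year 1994 figures. The savings share at the end
# is a hypothetical assumption for illustration only.
total_systems = 91            # systems overlapping the AMS software's functions
replaced_by_ams = 16          # systems AMS will fully or partially replace
total_operating_cost = 160_000_000   # FY 1994 cost to run all 91 systems
replaced_cost = 13_000_000           # portion attributable to the 16 systems

remaining_systems = total_systems - replaced_by_ams      # 75 systems
remaining_cost = total_operating_cost - replaced_cost    # $147 million

print(f"{remaining_systems} systems remain, costing about ${remaining_cost:,} per year")

# If cross-servicing eliminated even a quarter of this cost (assumption):
hypothetical_share = 0.25
print(f"Illustrative annual savings: ${int(remaining_cost * hypothetical_share):,}")
```

Even under more conservative assumptions, the remaining $147 million in annual operating costs is large enough that consolidating a fraction of these systems would recoup substantial funds.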
In this regard, the National Performance Review report on financial management noted that federal agencies not in compliance with OMB Circular A-127, such as USDA, should consider other alternatives, including joint agency development efforts, before investing in new systems. FISVIS Generally Does Not Include a Departmentwide Strategy for Reengineering Existing Financial Processes The CFO Act directs agency CFOs to “oversee all financial management activities relating to the programs and operations of the agency.” However, USDA’s CFO does not have primary responsibility for reengineering USDA’s financial processes, and the FISVIS strategy generally does not address reengineering USDA’s financial processes from a departmentwide perspective. Process reengineering is a management technique for achieving dramatic improvements in cost, quality, and/or customer service by rethinking and redesigning major business processes. The Department of Agriculture Reorganization Act of 1994 offers USDA an excellent opportunity to eliminate or simplify inefficient processes and consolidate those that affect multiple mission areas or component agencies. Under the Reorganization Act, USDA is authorized to make substantive organizational changes and is required to reduce staff and consolidate component agencies’ financial organizations. Further, according to USDA’s National Performance Review report, USDA’s existing financial processes discourage efficient use of resources, indicating that savings are possible through reengineered processes. Many of USDA’s component agencies and mission areas have realized that their existing financial processes should be reengineered and have independently initiated financial process reengineering projects from their own perspectives.
For example, according to agency representatives, (1) CFSA is planning to reengineer its financial processes as part of an overall project to modernize its financial management systems, (2) the Natural Resources Conservation Service is planning a complete reorganization of the agency and expects to reengineer financial processes as part of the reorganization, and (3) the Forest Service is considering reengineering selected financial processes such as outyear budget planning and small purchasing. Although the CFO Act requires agency CFOs to direct and oversee agency financial management operations, USDA did not assign the CFO the responsibility for developing a departmentwide financial management reengineering plan or for reviewing and approving component agency and mission area reengineering efforts. Instead, USDA assigned the Assistant Secretary for Administration to be responsible for overseeing the reengineering of both administrative and financial processes. In a January 1995 letter to USDA’s Deputy Secretary, we expressed concern about this arrangement and suggested that the Assistant Secretary for Administration’s responsibilities not include reengineering financial processes. Although USDA recognized the overlapping responsibilities between the CFO and Assistant Secretary for Administration, the Deputy Secretary (in the capacity of Acting Secretary) decided that the Assistant Secretary should continue to be responsible for reengineering financial processes in consultation with the CFO. In addition, to date, the Assistant Secretary for Administration has concentrated on reengineering some of the Department’s administrative functions and has not developed departmental plans for reengineering financial processes. Recent experiences at the Department of the Interior illustrate the importance of CFO leadership in financial process reengineering. Interior is composed of several component agencies with different missions and programs, much like USDA.
According to Interior officials, when Interior acquired and implemented a foundation financial management system to integrate its disparate component agencies’ financial management systems, as USDA is currently doing, it did not reengineer its financial processes at the same time. However, Interior found that in order to achieve the full benefits of implementing a foundation financial management system and integrating its component agency systems, it needed to reengineer its financial processes from a departmentwide perspective. To overcome this hurdle, in 1994, Interior’s CFO led an effort to establish a partnership with Interior’s component agency senior financial managers to begin defining existing financial processes in preparation to reengineer, standardize, and consolidate financial processes from a departmentwide perspective. Although we have not evaluated Interior’s efforts in this area, an official there stated that factors important to the success of this effort included (1) placing the CFO in a leadership role for overseeing the reengineering of financial processes and (2) requiring component agencies’ senior financial managers to work cooperatively with each other and the CFO to plan and manage reengineering efforts from a departmentwide perspective. By not having a departmentwide strategy for reengineering its financial processes, USDA risks losing the savings and other benefits that are available through reengineering those financial processes that are departmentwide or that cross multiple mission areas or agencies. An example of the potential benefits that could be derived from reengineering on a departmentwide basis is the process for transferring funds among component agencies. The Office of the CFO sponsored a business process reengineering analysis that estimated that if USDA reengineers this labor-intensive process on a departmentwide basis, it could save about half of the $8 million it costs per year to transfer funds. 
There may be other excellent opportunities to reengineer financial processes departmentwide. USDA’s fiscal year 1994 Federal Managers’ Financial Integrity Act report highlights material weaknesses in financial processes, such as debt collection and funds control, that cut across component agency and mission area lines. Further, reengineering financial processes from a departmentwide perspective could ease the burden created by USDA’s planned downsizing of its financial organizations. Reengineering would also help streamline and consolidate financial processes across agencies and mission areas, enabling fewer personnel to perform the processes without losses in effectiveness. Process reengineering experts caution that if an organization reorganizes and reduces staff levels without also rethinking and reengineering the underlying processes or functions the staff perform, it risks reducing operational efficiency and service delivery. In addition, in our recent report on downsizing strategies, officials from a private company stated that, while downsizing, organizations have to address their work processes. Another company’s official observed that if an organization simply reduces the number of employees without changing its work processes, staffing growth will recur eventually. Additionally, by not planning and managing financial process reengineering from a departmental perspective, USDA runs the risk that the new financial processes developed independently by the component agencies will not be adequately supported by the financial management systems acquired or developed under FISVIS. JFMIP’s Framework for Federal Financial Management Systems cautions that financial management systems planning efforts, such as FISVIS, should consider the implications of reengineering related financial processes. 
Significant changes in existing financial processes, such as those that can be brought about by the component agencies’ planned reengineering efforts, can require commensurate changes in the supporting financial management systems. It is therefore critical that the Department’s financial process reengineering efforts be closely coordinated with any financial management systems development efforts planned under FISVIS, or USDA may find that the newly deployed software is working at cross purposes with the reengineered processes. Should this occur, USDA would have to incur additional costs to modify the new software or develop new systems in order to support the reengineered processes. Conclusions and Recommendations Conclusions USDA’s CFO and Deputy CFO have provided strong leadership in identifying and attempting to correct the multitude of financial management system problems at the Department. However, many of these problems are still not likely to be resolved because the CFO’s ability to enforce the new financial system standards is limited in that the CFO has not yet been given the authority mandated by the CFO Act and OMB’s implementing guidance. In addition, the CFO has not developed a configuration management policy and version control process for the Foundation system software to help manage copies of this software and reduce future maintenance and development costs. Until a single USDA organization is given the requisite authority, no assurance exists that the transition to a fully modernized and integrated financial management system will be effective, expeditious, and economical. The USDA Reorganization Act also provides the Department with an historic opportunity to evaluate its financial management system needs departmentwide, revise the FISVIS strategy to consolidate overlapping financial and mixed systems, and reengineer its financial processes where it is economical to do so.
However, because component agencies plan to spend hundreds of millions of dollars to replace and redesign their existing financial and mixed systems without considering such consolidations, USDA will not likely solve its financial management problems in a cost-effective manner and could be needlessly spending millions of dollars on new systems. In addition, USDA has not provided the CFO a leadership role in reengineering the Department’s financial processes. Without such a role, the CFO’s ability to establish partnerships with component agencies to develop cost-effective departmentwide financial process reengineering projects will be hampered. Recommendations We recommend that the Secretary of Agriculture: Expeditiously implement the proposed delegation of authority to provide the CFO with the authority to oversee all financial management activities relating to the programs and operations of the agency, including approving component agency financial management system design and enhancement projects. Require that the CFO (1) establish review teams to determine whether USDA’s current and future component agency and NFC financial and mixed systems are in compliance with the USDA Financial and Accounting Standards Manual and the USDA Financial Management Information Architecture Document and (2) take action to bring component agencies into compliance with the standards. One way USDA could undertake this task with existing resources is to create temporary teams of Office of the CFO, NFC, and component agency personnel. Direct the CFO to develop and implement a configuration management policy and version control process to ensure that the Foundation system baseline software is effectively managed. Direct the CFO to update the FISVIS strategy to include a financial management systems architecture that sets forth the financial management needs of USDA’s new organizational structure, and establish a detailed strategy to meet these needs.
This plan should also identify opportunities to streamline and/or consolidate financial management systems across agencies and mission areas. Direct the CFO to review each of the component agencies’ ongoing and planned financial management system development efforts and report to the Secretary whether each of these efforts is necessary, consistent with the FISVIS initiative, and cost-effective from a departmentwide perspective, or whether they should be consolidated with other financial management systems or development efforts. This would include, but not be limited to, determining that the component agencies’ needs cannot be met by the Foundation system. If the CFO determines that any individual system development effort is not needed, the Secretary should suspend it. Delegate to the CFO authority and responsibility for (1) developing a departmentwide financial management reengineering strategy that would include identifying the technical assistance and training necessary to successfully carry out reengineering activities and (2) reviewing and approving the reengineering of all departmental and component agency financial processes and require component agencies’ senior financial managers to work with the CFO to ensure that their reengineering efforts are planned and managed from a departmentwide perspective. Agency Comments and Our Evaluation In providing written comments on a draft of this report, USDA emphasized that the modernization of its financial management systems is one of the Secretary’s top priorities and is an integral part of USDA’s overall reorganization. USDA further stated that our recommendations, when implemented, will strengthen USDA’s capabilities to modernize and upgrade its financial systems. Specifically, USDA agreed to implement all but one of our recommendations, although the Department was concerned about finding the resources to implement two of our recommendations.
USDA did not agree to implement our recommendation that the CFO be provided the authority and responsibility for developing a departmentwide financial management reengineering strategy and for reviewing and approving all departmental and component agency financial process reengineering efforts. USDA agreed with our recommendations to (1) expeditiously implement the proposed CFO’s delegation of authority, (2) develop and implement a configuration management policy and version control process, and (3) update the FISVIS strategy to include a financial management systems architecture that would identify opportunities to streamline and/or consolidate financial management systems across agencies and mission areas. USDA also agreed with the need to address two other recommendations; however, USDA expressed concern about the lack of resources available to implement these recommendations. For example, in discussing the Department’s written comments, the CFO stated that the Office of the CFO’s ability to implement our recommendation on forming review teams to determine whether USDA’s current and future financial and mixed systems conform with the Department’s financial standards would be contingent on available resources. Similarly, although USDA agreed with our recommendation to review component agency financial management system development efforts, it stated that the CFO would perform such reviews as resources are available. We believe that the Secretary’s designation of financial management systems as a top departmental priority and recognition that their modernization is an integral part of the Department’s overall reorganization warrant the resources—either through permanent or temporary staff reallocations—necessary to review USDA’s financial systems. As we discuss in chapter 2, the Department employs over 4,000 accounting and budget personnel within the Office of the CFO and the component agencies. 
We believe USDA may be able to redistribute or temporarily reassign some of these staff to implement these recommendations. In addition, as we discuss in chapter 3, USDA has many overlapping financial management systems. Therefore, our recommendation to review each of the component agencies’ ongoing and planned financial management system development efforts could result in significant monetary savings. The time to perform such a review is now, before USDA spends a significant amount of money implementing financial management systems that may not be needed. These savings could, in turn, be used to fund other needed USDA financial management improvement efforts. In its written comments, USDA stated that most of its current financial and mixed systems do not comply with the Department’s financial standards and agreed that USDA needed to bring them into compliance. Although USDA stated it would consider our recommendation to establish review teams to determine financial management system noncompliance, it also planned to evaluate other options, such as agency self-certifications, to address this issue because it believed staff resources may not be available. However, we believe that the breadth of USDA’s current noncompliance with these standards and the lack of specific component agency plans to evaluate their systems for such compliance attest to the need for the CFO to establish review teams to independently identify areas of noncompliance and recommend actions to correct these deficiencies. USDA did not agree to grant the CFO the authority and responsibility for developing a departmentwide financial management reengineering strategy and for reviewing and approving all departmental and component agency financial process reengineering efforts to ensure that the efforts are planned and managed from a departmentwide perspective.
USDA stated that the Secretary delegated responsibility to the Assistant Secretary for Administration for reengineering USDA’s administrative systems—which encompass financial processes—under the Modernization of Administrative Processes program. USDA’s comments also noted that the Office of the CFO and the Assistant Secretary for Administration established a Board of Directors (with the Assistant Secretary for Administration as the Chairperson and the CFO as the Vice-Chairperson) to provide policy guidance and direction for the Modernization of Administrative Processes program. We agree that reengineering USDA’s administrative processes should remain the primary responsibility of the Assistant Secretary for Administration. We also agree that administrative and financial processes and systems are often related. Therefore, we applaud the Secretary for establishing a Board of Directors for the Modernization of Administrative Processes program. Nevertheless, we continue to believe that implementing our recommendation on reengineering is necessary because, even under the Board of Directors’ process, the responsibility for reengineering USDA’s financial processes does not rest with the CFO, who is tasked by the CFO Act with overseeing all financial management activities relating to the programs and operations of the agency. We expressed this concern in a January 1995 letter to the Deputy Secretary. In addition, because USDA’s financial processes and financial systems are inextricably linked, it is imperative that changes to either be managed and planned in an integrated manner. Therefore, we believe that the CFO should have the primary departmental leadership role in reengineering USDA’s financial processes. If the CFO is given this leadership role, USDA will strengthen both its financial management systems development and financial process reengineering activities since a single person could be held accountable and responsible for both areas.
In addition, USDA could help ensure that financial process reengineering efforts are consistently managed and controlled departmentwide.
GAO reviewed whether the Department of Agriculture's (USDA) Financial Information Systems Vision and Strategy (FISVIS) project will: (1) resolve major USDA financial system weaknesses; and (2) consolidate the separate USDA financial and mixed systems that perform similar functions, as well as reengineer USDA financial processes. GAO found that: (1) the USDA Chief Financial Officer (CFO) has demonstrated strong leadership and progress in implementing the initial phase of the FISVIS project; (2) CFO has issued departmentwide financial management standards and has signed a contract for the central Foundation Financial Information System; (3) it is unclear when USDA financial and mixed systems will be brought into compliance with the new financial standards, since CFO does not have the authority or a mechanism to enforce compliance and will have to rely on the voluntary cooperation of the USDA component agencies; (4) although USDA is planning to give CFO greater responsibility and authority over its financial systems, the major component agencies will manage their own financial systems with limited CFO oversight; (5) CFO lacks a configuration management policy and version control process for the Foundation system software to lower costs and prevent duplication; (6) FISVIS does not address eliminating or consolidating the many USDA systems that perform similar functions and has not been revised to consider reengineering financial management processes on a departmentwide basis in light of the ongoing USDA reorganization; and (7) although USDA has not addressed all aspects of its financial management systems, it plans to spend hundreds of millions of dollars to redesign or replace many of its existing financial and mixed systems.
Background The Bureau’s mission is to provide comprehensive data about the nation’s people and economy. The 2010 census enumerates the number and location of people on Census Day, which is April 1, 2010. However, census operations begin long before Census Day and continue afterward. For example, address canvassing for the 2010 census will begin in April 2009, while the Secretary of Commerce must report tabulated census data to the President by December 31, 2010, and to state governors and legislatures by March 31, 2011. The decennial census is a major undertaking for the Bureau that includes the following key activities: Establishing where to count. This includes identifying and correcting addresses for all known living quarters in the United States (address canvassing) and validating addresses identified as potential group quarters, such as college residence halls and group homes (group quarters validation). Collecting and integrating respondent information. This includes delivering questionnaires to housing units by mail and other methods, processing the returned questionnaires, and following up with nonrespondents through personal interviews (nonresponse follow-up). It also includes enumerating residents of group quarters (group quarters enumeration) and occupied transitional living quarters (enumeration of transitory locations), such as recreational vehicle parks, campgrounds, and hotels. It also includes a final check of housing unit status (field verification), in which Bureau workers verify potential duplicate housing units identified during response processing. Providing census results. This includes tabulating and summarizing census data and disseminating the results to the public. Role of IT in the Decennial Census Automation and IT are to play a critical role in the success of the 2010 census by supporting data collection, analysis, and dissemination. Several systems will play a key role in the 2010 census.
For example, enumeration “universes,” which serve as the basis for enumeration operations and response data collection, are organized by the Universe Control and Management (UC&M) system, and response data are received and edited to help eliminate duplicate responses using the Response Processing System (RPS). Both UC&M and RPS are legacy systems that are collectively called the Headquarters Processing System. Geographic information and support to aid the Bureau in establishing where to count U.S. citizens are provided by the Master Address File/Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) system. The Decennial Response Integration System (DRIS) is to provide a system for collecting and integrating census responses from all sources, including forms and telephone interviews. The Field Data Collection Automation (FDCA) program includes the development of handheld computers for the address canvassing operation and the systems, equipment, and infrastructure that field staff will use to collect data. Paper-Based Operations (PBO) was established in August 2008 primarily to handle certain operations that were originally part of FDCA. PBO includes IT systems and infrastructure needed to support the use of paper forms for operations such as group quarters enumeration activities, nonresponse follow-up activities, enumeration at transitory locations activities, and field verification activities. These activities were originally to be conducted using IT systems and infrastructure developed by the FDCA program. Finally, the Data Access and Dissemination System II (DADS II) is to replace legacy systems for tabulating and publicly disseminating data. 
Comprehensive Testing Improves Chances of a Successful Decennial Census As stated in our testing guide and the Institute of Electrical and Electronics Engineers (IEEE) standards, complete and thorough testing is essential for providing reasonable assurance that new or modified IT systems will perform as intended. To be effective, testing should be planned and conducted in a structured and disciplined fashion that includes processes to control each incremental level of testing, including testing of individual systems, the integration of those systems, and testing to address all interrelated systems and functionality in an operational environment. Comprehensive testing that is effectively planned and scheduled can provide the basis for identifying key tasks and requirements and better ensure that a system meets these specified requirements and functions as intended in an operational environment. Dress Rehearsal Includes Testing of Certain Systems and Operations In preparation for the 2010 census, the Bureau planned what it refers to as the Dress Rehearsal. The Dress Rehearsal includes systems and integration testing, as well as end-to-end testing of key operations in a census-like environment. During the Dress Rehearsal period, running from February 2006 through June 2009, the Bureau is developing and testing systems and operations, and it held a mock Census Day on May 1, 2008. The Dress Rehearsal activities, which are still under way, are a subset of the activities planned for the actual 2010 census and include testing of both IT and non-IT related functions, such as opening offices and hiring staff. The Dress Rehearsal identified significant technical problems during the address canvassing and group quarters validation operations.
For example, during the Dress Rehearsal address canvassing operation, the Bureau encountered problems with the handheld computers, including slow and inconsistent data transmissions, the devices freezing up, and difficulties collecting mapping coordinates. As a result of the problems observed during the Dress Rehearsal, cost overruns and schedule slippage in the FDCA program, and other issues, the Bureau removed the planned testing of several key operations from the Dress Rehearsal and switched key operations, such as nonresponse follow-up, to paper-based processes instead of using the handheld computers as originally planned. Bureau Is Making Progress in Key System Testing, but Lacks Plans and Schedules Through the Dress Rehearsal and other testing activities, the Bureau has completed key system tests, but significant testing has yet to be done, and planning for this is not complete. Table 1 summarizes the status and plans for system testing. Bureau Has Conducted Limited Integration Testing, but Has Not Developed 2010 Test Plans and Schedules for Integration Testing Effective integration testing ensures that external interfaces work correctly and that the integrated systems meet specified requirements. This testing should be planned and scheduled in a disciplined fashion according to defined priorities. For the 2010 census, each program office is responsible for and has made progress in defining system interfaces and conducting integration testing, which includes testing of these interfaces. However, significant activities remain to be completed. For example, for systems such as PBO, interfaces have not been fully defined, and other interfaces have been defined but have not been tested. In addition, the Bureau has not established a master list of interfaces between key systems, or plans and schedules for integration testing of these interfaces. 
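At its simplest, such a master list is an inventory that records, for each interface, the systems on either side, a test priority, and a test status, from which a prioritized integration test schedule can be generated. The sketch below illustrates the idea; the specific interface entries, priorities, and statuses are hypothetical examples, not actual Bureau data.

```python
# Minimal sketch of a master interface list used to plan and prioritize
# integration testing. The entries are hypothetical illustrations, not
# actual Census Bureau interfaces or test statuses.
interfaces = [
    {"source": "FDCA", "target": "DRIS", "priority": 1, "tested": True},
    {"source": "DRIS", "target": "RPS",  "priority": 1, "tested": False},
    {"source": "PBO",  "target": "UC&M", "priority": 2, "tested": False},
]

# Schedule untested interfaces first, in priority order (1 = highest).
to_test = sorted(
    (i for i in interfaces if not i["tested"]),
    key=lambda i: i["priority"],
)
for i in to_test:
    print(f"Test interface {i['source']} -> {i['target']} (priority {i['priority']})")
```

Even a simple inventory of this kind, maintained in one place, would let the Bureau verify that every interface has a planned test and see at a glance which high-priority interfaces remain untested.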
A master list of system interfaces is an important tool for ensuring that all interfaces are tested appropriately and that the priorities for testing are set correctly. As of October 2008, the Bureau had begun efforts to update a master list it had developed in 2007, but it has not provided a date when this list will be completed. Without a completed master list, the Bureau cannot develop comprehensive plans and schedules for conducting systems integration testing that indicate how the testing of these interfaces will be prioritized. With the limited amount of time remaining before systems are needed for 2010 operations, the lack of comprehensive plans and schedules increases the risk that the Bureau may not be able to adequately test system interfaces, and that interfaced systems may not work together as intended. Bureau Has Conducted Limited End-to-End Testing as Part of the Dress Rehearsal, but Has Not Developed Testing Plans for Critical Operations Although several critical operations underwent end-to-end testing in the Dress Rehearsal, others did not. As of December 2008, the Bureau had not established testing plans or schedules for end-to-end testing of the key operations that were removed from the Dress Rehearsal, nor has it determined when these plans will be completed. These operations include enumeration of transitory locations, group quarters enumeration, and field verification. The decreasing time available for completing end-to-end testing increases the risk that testing of key operations will not take place before the required deadline. Bureau officials have acknowledged this risk in briefings to the Office of Management and Budget. However, as of January 2009, the Bureau had not completed mitigation plans for this risk. According to the Bureau, the plans are still being reviewed by senior management. 
Without plans to mitigate the risks associated with limited end-to-end testing, the Bureau may not be able to respond effectively if systems do not perform as intended. Bureau Lacks Sufficient Executive-Level Oversight and Guidance for Testing As stated in our testing guide and IEEE standards, oversight of testing activities includes both planning and ongoing monitoring of testing activities. Ongoing monitoring entails collecting and assessing status and progress reports to determine, for example, whether specific test activities are on schedule. In addition, comprehensive guidance should describe each level of testing and the types of test products expected. In response to prior recommendations, the Bureau took initial steps to enhance its programwide oversight; however, these steps have not been sufficient. For example, in June 2008, the Bureau established an inventory of all testing activities specific to all key decennial operations. However, the inventory has not been updated since May 2008, and officials have no plans for further updates. In another effort to improve executive-level oversight, the Decennial Management Division began producing (as of July 2008) a weekly executive alert report and has established (as of October 2008) a dashboard and monthly reporting indicators. However, these products do not provide comprehensive status information on the progress of testing key systems and interfaces. Further, the assessment of testing progress has not been based on quantitative and specific metrics. The lack of quantitative and specific metrics to track progress limits the Bureau’s ability to accurately assess the status and progress of testing activities. In commenting on our draft report, the Bureau provided selected examples in which it had begun to use more detailed metrics to track the progress of end-to-end testing activities. The Bureau also has weaknesses in its testing guidance. 
According to the Associate Director for the 2010 census, the Bureau did establish a policy strongly encouraging offices responsible for decennial systems to use best practices in software development and testing, as specified in level 2 of Carnegie Mellon’s Capability Maturity Model® Integration. However, beyond this general guidance, there is no mandatory or specific guidance on key testing activities, such as criteria for each level of testing or the types of test products expected. The lack of guidance has led to an ad hoc and, at times, less than desirable approach to testing. Implementation of Recommendations Could Help Ensure Key Testing Activities Are Completed In our report, we are making ten recommendations for improvements to the Bureau’s testing activities. Our recommendations include finalizing system requirements and completing development of test plans and schedules, establishing a master list of system interfaces, prioritizing and developing plans to test these interfaces, and establishing plans to test operations removed from the Dress Rehearsal. In addition, we are recommending that the Bureau improve its monitoring of testing progress and improve executive-level oversight of testing activities. In written comments on the report, the department had no significant disagreements with our recommendations. The department stated that its focus is on testing new software and systems, not legacy systems and operations used in previous censuses. However, the systems in place to conduct these operations have changed substantially and have not yet been fully tested in a census-like environment. Consistent with our recommendations, finalizing test plans and schedules and testing all systems as thoroughly as possible will help to ensure that decennial systems will work as intended. 
In summary, while the Bureau’s program offices have made progress in testing key decennial systems, much work remains to ensure that systems operate as intended for conducting an accurate and timely 2010 census. This work includes system, integration, and end-to-end testing activities. Given the rapidly approaching deadlines of the 2010 census, completing testing and establishing stronger executive-level oversight are critical to ensuring that systems perform as intended when they are needed. Mr. Chairman and members of the subcommittee, this concludes our statement. We would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. Contacts and Staff Acknowledgements If you have any questions about matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or pownerd@gao.gov or Robert Goldenkoff at (202) 512-2757 or goldenkoffr@gao.gov. Other key contributors to this testimony include Sher'rie Bacon, Barbara Collier, Neil Doherty, Vijay D’Souza, Elizabeth Fan, Nancy Glover, Signora May, Lee McCracken, Ty Mitchell, Lisa Pearson, Crystal Robinson, Melissa Schermerhorn, Cynthia Scott, Karl Seifert, Jonathan Ticehurst, Timothy Wexler, and Katherine Wulff. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Decennial Census is mandated by the U.S. Constitution and provides vital data that are used, among other things, to reapportion and redistrict congressional seats and allocate federal financial assistance. In March 2008, GAO designated the 2010 Decennial Census a high-risk area, citing a number of long-standing and emerging challenges, including weaknesses in the U.S. Census Bureau's (Bureau) management of its information technology (IT) systems and operations. In conducting the 2010 census, the Bureau is relying on both the acquisition of new IT systems and the enhancement of existing systems. Thoroughly testing these systems before their actual use is critical to the success of the census. GAO was asked to testify on its report, being released today, on the status and plans of testing of key 2010 decennial IT systems. Although the Bureau has made progress in testing key decennial systems, critical testing activities remain to be performed before systems will be ready to support the 2010 census. Bureau program offices have completed some testing of individual systems, but significant work still remains to be done, and many plans have not yet been developed (see table below). In its testing of system integration, the Bureau has not completed critical activities; it also lacks a master list of interfaces between systems and has not developed testing plans and schedules. Although the Bureau had originally planned what it refers to as a Dress Rehearsal, starting in 2006, to serve as a comprehensive end-to-end test of key operations and systems, significant problems were identified during testing. As a result, several key operations were removed from the Dress Rehearsal and did not undergo end-to-end testing. The Bureau has neither developed testing plans for these key operations, nor has it determined when such plans will be completed. 
Weaknesses in the Bureau's testing progress and plans can be attributed in part to a lack of sufficient executive-level oversight and guidance. Bureau management does provide oversight of system testing activities, but the oversight activities are not sufficient. For example, Bureau reports do not provide comprehensive status information on progress in testing key systems and interfaces, and assessments of the overall status of testing for key operations are not based on quantitative metrics. Further, although the Bureau has issued general testing guidance, it is neither mandatory nor specific enough to ensure consistency in conducting system testing. Without adequate oversight and more comprehensive guidance, the Bureau cannot ensure that it is thoroughly testing its systems and properly prioritizing testing activities before the 2010 Decennial Census, posing the risk that these systems may not perform as planned.
Background OMB plays a key role in helping federal agencies manage their IT investments by working with them to better plan, justify, and determine how much they need to spend on IT projects and how to manage approved projects. In particular, the Clinger-Cohen Act of 1996 requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments. In addition, the Clinger-Cohen Act places responsibility for managing IT investments with the heads of agencies and establishes chief information officers to advise and assist agency heads in carrying out this responsibility. To help carry out its oversight role and assist the agencies in carrying out their responsibilities, OMB developed its Management Watch List in 2003 and its High-Risk List in 2005 to focus executive attention and to ensure better planning and tracking of the major IT investments. The Management Watch List identifies projects at federal agencies that are poorly planned, i.e., projects with weaknesses in their funding justifications, which are known as exhibit 300s. Because of the focus on the funding justifications, projects on the Management Watch List specifically concern the process by which agencies select projects to invest in. OMB places projects on the High-Risk List when they require special attention from oversight authorities and the highest level of agency management. These projects are not necessarily “at risk” of failure, but may be on the list because of one or more of the following four reasons: The agency has not consistently demonstrated the ability to manage complex projects. The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. 
The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. Delay or failure of the project would introduce for the first time unacceptable or inadequate performance or failure of an essential mission function of the agency, a component of the agency, or another organization. The High-Risk List also includes projects that are performing poorly (i.e., high-risk projects with reported performance shortfalls). High-risk projects are identified as having performance shortfalls if one or more of the following performance evaluation criteria are not met: (1) establishing baselines with clear cost, schedule, and performance goals; (2) maintaining the project’s cost and schedule variances within 10 percent; (3) assigning a qualified project manager; and (4) avoiding duplication by leveraging inter-agency and governmentwide investments. Projects on the High-Risk List, therefore, require disciplined and effective oversight to ensure that performance shortfalls, if any, are addressed. In a July 2008 testimony (GAO, Information Technology: OMB and Agencies Need to Improve Planning, Management, and Oversight of Projects Totaling Billions of Dollars, GAO-08-1051T (Washington, D.C.: July 31, 2008)), we reported that a number of projects (totaling about $3 billion) were considered both poorly planned and poorly performing. OMB took several steps to address our recommendations to improve the identification and oversight of Management Watch List and High-Risk List projects; however, further action is needed, including, for example, identifying the deficiencies (i.e., performance shortfalls) associated with the high-risk projects. On April 28, 2009, we testified that the future of the Management Watch List and High-Risk List was uncertain because OMB officials stated that they had not decided whether the agency would continue to use these lists. 
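The cost and schedule variance criterion above (criterion 2) is a simple threshold test. A minimal sketch follows, using hypothetical field names rather than OMB's actual reporting format, and modeling only this one criterion:

```python
def variance_pct(planned: float, actual: float) -> float:
    """Absolute percentage deviation of actual from planned."""
    return abs(actual - planned) / planned * 100.0

def exceeds_variance_threshold(project: dict, threshold: float = 10.0) -> bool:
    """Apply the 10-percent cost/schedule variance test described above.

    `project` is a hypothetical dict; the other three OMB criteria
    (baselines, qualified project manager, avoiding duplication)
    are not represented here.
    """
    return (variance_pct(project["planned_cost"], project["actual_cost"]) > threshold
            or variance_pct(project["planned_months"], project["actual_months"]) > threshold)

# A project 15 percent over budget but on schedule still has a shortfall.
over_budget = {"planned_cost": 100.0, "actual_cost": 115.0,
               "planned_months": 24, "actual_months": 24}
print(exceeds_variance_threshold(over_budget))  # True
```

Because the criteria are disjunctive, exceeding the threshold on either cost or schedule alone is enough to flag a project.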
We noted that OMB needs to decide if it is going to continue to use the Management Watch List and High-Risk List and, if not, that OMB should promptly implement other appropriate mechanisms to help direct and oversee IT investments in the future. In response, the Federal Chief Information Officer testified that OMB would determine how to better oversee poorly planned and performing projects by the end of June 2009. Investment Management Framework Calls for Boards to Select and Oversee IT Investments Federal agencies face significant challenges in planning for and managing their IT systems and networks. These challenges can be addressed, in part, by the use of systematic management processes to select, control, and evaluate the investments. To further support the implementation of such processes, we developed an IT investment management (ITIM) framework for agencies to use. It is based on our research of IT investment management practices of leading private and public sector organizations and can be used to determine both the status of an agency’s current IT investment management capabilities and the additional steps that are needed to establish more effective processes. The framework consists of progressive stages of maturity for any given organization relative to its selection and oversight responsibilities. We have used the framework in many of our reports, and a number of agencies have adopted it. The ITIM maturity framework cites the establishment of “one or more IT investment management boards” as a fundamental step in establishing a mature capital planning process. The framework states that a departmentwide IT investment review board (IRB) composed of senior executives from both IT and business units should be responsible for defining and implementing the department’s IT investment governance process. 
This department-level IRB is to provide selection and oversight of department IT projects to ensure that the department’s portfolio of projects meets mission needs at expected levels of cost and risk. Selecting projects involves identifying and analyzing projects’ risks and returns before committing any significant funds to them and selecting those that will best support the agency’s mission needs; overseeing projects involves reviewing the progress of projects against expectations and taking corrective action when these expectations are not being met. To ensure that agencies’ department-level boards are using a disciplined selection and oversight process, the ITIM framework also states that, among other things, the department-level board should: select new investments and reselect ongoing investments; perform regular reviews of each project’s performance against stated expectations; and receive data associated with a project’s actual performance (including cost, schedule, benefit, and risk performance). Importantly, according to the ITIM framework, while these functions can be performed by subordinate boards, the department-level IRBs must maintain ultimate responsibility for and visibility into the subordinate boards’ activities. Prior Reviews Have Identified Weaknesses in Executive-Level Board Involvement in Selection and Oversight We have previously reported that federal agencies face challenges in effectively managing their IT investments. Specifically, in January 2004, we reported that, although most of the major agencies in our review had IRBs responsible for defining and implementing their investment management processes, the agencies did not always have the mechanisms in place for these boards to effectively control their investments. We made recommendations to the agencies regarding those practices that were not fully in place. 
More recently, in 2008, we reported that the Social Security Administration had not fully developed policies and procedures for management oversight of its IT projects and systems, such as elevating problems to the department-level IRB. We also reported that the Social Security Administration had not tracked corrective actions for underperforming investments and had not reported the actions to the department-level IRB. To address these weaknesses, we recommended that the agency strengthen and expand the board’s oversight responsibilities for underperforming projects and evaluations of projects and establish a mechanism for tracking corrective actions for underperforming investments. Major Federal Agencies Have Guidance for Selection and Oversight of IT Investments, but Two Agency Boards Lack Business Unit Representation The 24 major federal agencies have guidance calling for department-level IRBs to select and oversee IT investments pursuant to OMB guidance required by the Clinger-Cohen Act and consistent with practices laid out in the ITIM framework. However, while all of the agencies had department-level IRBs, the board membership for two agencies did not include business unit (i.e., mission) representation. Agency Guidance Calls for Department-Level IRBs to Select Projects Each of the agencies had documented guidance that called for a department-level IRB to perform selection of the projects to be included in the agency’s IT investments. For example, according to the Department of the Treasury’s guidance, its department-level IRB is to consider investment scoring results and recommendations that are provided to it by the Chief Information Officer Council (a subordinate board) and select which investments will be included in Treasury’s IT investment portfolio. 
The Department of Transportation’s recently issued IT investment management policy delegates responsibility for project selection, as well as project oversight, to its component-level investment review boards, but requires its components to establish and/or document the existence of their boards, specifies the roles and responsibilities these boards are to have, and establishes specific metrics to be used by the department-level IRB to measure the performance of the component boards. Agency Guidance Calls for Department-Level IRBs to Oversee Projects As with project selection, each of the agencies had documented guidance that called for the department-level IRB to conduct oversight reviews of projects; the frequency of these reviews varied (see fig. 1 for a breakdown of the frequency of oversight reviews specified in agencies’ guidance). For 20 of the 24 agencies, the guidance allowed the delegation of oversight reviews to other entities. In these cases, the agencies had guidance in place to help ensure that these other entities were effectively carrying out their responsibilities. At the remaining four agencies—the National Science Foundation, Small Business Administration, Department of State, and the U.S. Agency for International Development—project oversight was to be primarily performed by the department-level IRB. By having guidance specifying department-level IRB selection and oversight of projects, agencies recognize the importance of involving those who have the ultimate responsibility and accountability for the organization’s success in key project decisions. 
Two Agencies’ Department-Level Boards Lack Business Unit Representation It should be noted, however, that while all of the agencies had guidance requiring department-level IRBs to be responsible for selecting and overseeing projects, the boards at the Departments of Commerce and Labor did not include senior executives from business units (e.g., line or mission units) as called for in the ITIM framework. Specifically, these boards consisted of executives from IT and other department mission support units, such as the Chief Financial Officer, Director of Budget, or Controller, as well as administrative officers, but did not have appropriate line or mission representation from the organizations’ business units. We have previously reported that because allocating resources among major IT investments may require fundamental trade-offs among a multitude of business objectives, portfolio management decisions are essentially business decisions and therefore require sufficient business representation on the department-level IRB. The two agencies with boards that did not include senior executives from business units offered the following rationales for this practice. The Department of Commerce reported that it does not include nontechnical program representatives on its department-level IRB because it would be impractical to have fair representation of all 12 of the major agencies and the dozens of major programs comprising the department. In addition, Commerce reported that it is run on a federated basis, putting responsibility on each of the department’s operating units to prioritize its own investments in determining which should be reviewed by the department. Finally, Commerce stated that it does not prioritize among investments from its different operating units; instead, departmental officials work with each operating unit to ensure that the investment and investment strategy being recommended is optimum for meeting that operating unit’s mission. 
We have previously reported that using this approach of giving responsibility to subordinate units should include appropriate department-level involvement, either through review and approval of their investments that meet certain criteria or through awareness of the subordinate unit’s investment management activities. We believe that this corporate visibility should be provided by a board composed of executives from both business and IT units to ensure that decisions made are in the best interest of the entire department. In addition, while Commerce’s practice may not be to prioritize among the investments at the department level, the department has ultimate responsibility for the success of its operating units’ investments and the department-level IRB should therefore include business representation to ensure that decisions made are in the best interest of the agency. The Department of Labor reported that the senior IT and administrative executives who serve on its department-level IRB have in-depth, detailed, and expert knowledge of their units’ missions and business objectives and are capable of representing their units’ interests. However, we have previously reported that IT and administrative executives responsible for mission support functions do not constitute sufficient business representation because, by virtue of their responsibilities, they are not in the best position to make business decisions. Until these agencies adjust their board memberships to include representation from their business units, they will not have assurance that the department-level IRB includes those executives who are in the best position to make the full range of decisions needed to enable the agency to carry out its mission most effectively. 
Many Projects Did Not Receive a Department-Level IRB Selection or Oversight Review Although all the major agencies had guidance calling for a department-level IRB selection or oversight review, many of the projects we examined did not receive one of these reviews. Specifically, 12 of the 24 projects identified by OMB as being poorly planned in 2007 (accounting for about $4.9 billion) did not receive a selection review, and 13 of 28 poorly performing projects in 2007 (amounting to about $4.4 billion) did not receive an oversight review by the department-level IRB. Furthermore, 6 of the 11 projects identified as being both poorly planned and poorly performing, with nearly $3.7 billion in funding in the President’s fiscal year 2008 budget request, received neither a selection review nor an oversight review. Half of the Poorly Planned Projects Did Not Receive a Selection Review by a Department-Level IRB Of the 24 poorly planned projects in 2007 that we reviewed, 12 projects did not receive a selection review, while 12 were reviewed by the department-level IRB. The requested funding level for these 24 poorly planned projects was about $7.3 billion. The 12 projects that were reviewed by a department-level IRB accounted for approximately $2.4 billion, while the 12 projects not reviewed accounted for about $4.9 billion, about two thirds of the total requested funding for the 24 projects (see fig. 2 and table 1). We assessed five projects as not having received department-level IRB selection reviews because the agencies did not provide evidence of such reviews. Agencies offered varying reasons for why selection reviews had not been performed for the remaining seven. Table 1 shows whether projects we reviewed received a selection review from the department-level IRB and lists reported reasons why no review was performed, where applicable. 
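The funding shares above follow directly from the reported dollar amounts; a quick check of the arithmetic (figures in billions of dollars, as reported):

```python
total_requested = 7.3   # all 24 poorly planned projects
reviewed = 2.4          # the 12 projects that received a selection review
not_reviewed = 4.9      # the 12 projects that did not

# The two groups account for the full request...
assert abs((reviewed + not_reviewed) - total_requested) < 0.05
# ...and the unreviewed share is roughly two thirds of the total.
print(f"{not_reviewed / total_requested:.0%}")  # 67%
```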
Following are details on the reasons why the 12 projects did not receive a department-level IRB review: A project belonging to Commerce’s USPTO was not reviewed by the department-level IRB, according to the agency, because the USPTO is a performance-based organization (PBO), and therefore its projects are not required to be reviewed by the department-level IRB. According to the legislation that established the USPTO as a PBO, the office is subject to the policy direction of the Secretary of Commerce, but it otherwise retains responsibility for decisions regarding the management and administration of its operations and exercises independent control of its budget allocations and expenditures, personnel decisions and processes, procurements, and other administrative and management functions. According to the Department of Education, the Common Services for Borrowers project did not receive a selection review by the department-level board because it is under the oversight of the Federal Student Aid Executive Leadership Team. In written comments on a draft of this report, however, the department stated that it plans to bring all of its IT investments under the department-level board’s oversight. The Department of Homeland Security did not provide evidence of a selection review for its two projects but noted that it was reengineering its investment management process to include department-level IRB reviews of projects at key milestone decision points. Although NASA stated that its three projects were governed by oversight bodies, the documentation provided did not show evidence that reviews had been performed by the appropriate department-level review board. At the Nuclear Regulatory Commission, a lower-level board performed the selection reviews. According to the agency’s guidance, the department-level board should have performed the reviews. The agency stated that this board only gets involved when the lower-level board believes issues need to be elevated. 
However, NRC’s guidance does not specify when issues need to be elevated to the department-level IRB. In addition, the agency did not provide any examples of cases when issues had been elevated to the department-level IRB. Officials from the Department of Transportation’s Office of the Chief Information Officer could not provide a reason why a department-level board selection review of its projects had not been performed. In commenting on a draft of this report, the agency stated that it planned to have this project reviewed in detail by its department-level board. The Department of the Treasury’s projects did not receive a department-level IRB selection review because this board was not active during the time frame covered by our review. The department, however, has since then reestablished its department-level IRB. About Half of the Poorly Performing Projects Did Not Receive an Oversight Review by the Department-Level IRB About half of the poorly performing projects in 2007 we reviewed did not receive an oversight review by a department-level IRB. Of the 28 projects, 13 did not receive an oversight review by the department-level IRB, while 15 did. The President’s requested fiscal year 2008 funding for the 28 projects totaled approximately $4.7 billion. The 15 projects that received a review represented approximately $0.3 billion, or 7 percent of the total $4.7 billion funding request, while the 13 poorly performing projects that were not reviewed totaled nearly $4.4 billion, or 93 percent of the total requested funding. (See fig. 3 and table 2.) Table 2 shows whether projects received oversight reviews, as well as reported reasons why no review was performed, where applicable. 
Agencies provided several reasons why the 13 projects did not receive oversight reviews, including some which were not consistent with sound management practices: One Defense project’s funding was below the financial threshold required for a review by the department-level IRB, consistent with the agency’s guidance. However, in May 2007 and May 2009, we reported that DOD’s guidance and practices did not provide for sufficient oversight and visibility into component-level investment management activities, including component reviews of investments such as this project. We made recommendations to DOD to address these weaknesses, which DOD has yet to fully implement. Another Defense project was reportedly being rebaselined (meaning that its cost, schedule, and performance goals were being modified to reflect a change in the scope of the work) and therefore had not received a review by the department-level IRB. This project, however, continues to be funded and therefore could have benefited from a department-level oversight review. According to the Department of Education, the two projects we reviewed did not receive oversight reviews by the department-level IRB because they were under the oversight of the Federal Student Aid Executive Leadership Team. As noted earlier, in written comments on a draft of this report, the department stated it plans to bring all of its IT investments under the department-level board’s oversight. While DHS provided evidence that a lower-level board had agreed to submit the DHS-Infrastructure project to the department-level IRB for review, the agency did not provide evidence that this review had been performed. The department also stated that SBInet and US-VISIT projects had received an oversight review by the department-level IRB, but did not provide sufficient evidence to support this, including information presented to the board for review. 
In March 2009, however, DHS officials told us that they had recently made changes to their investment review process and, as part of these changes, were planning to improve the documentation associated with department-level IRB reviews. A Nuclear Regulatory Commission project should have received a review by the department-level IRB according to the agency’s guidance, but officials told us that, in practice, this board only gets involved when the lower-level board elevates issues. However, agency officials were unable to provide us with any examples in which the lower-level board had elevated issues about the project to the IRB. The Department of the Treasury’s projects did not receive a department-level IRB oversight review because this board was not active during the time frame covered by our review. The department, however, has since then reestablished its department-level IRB. According to the U.S. Agency for International Development, its project did not receive an oversight review because it has not been able to proceed due to lack of funding. We agree that an oversight review was not warranted since there was no activity on the project. A Veterans Affairs project was not reviewed because the IRB is not required to review projects in the operations and maintenance stage. Instead, oversight of projects in this stage is the responsibility of the Office of the Chief Information Officer. However, the IRB does not oversee this office’s review activities. According to the ITIM framework, boards should ensure projects are reviewed throughout their life cycle. In addition, they must maintain ultimate responsibility for and visibility into the activities of groups that carry out their functions. 
About Half of the Projects That Were Both Poorly Planned and Poorly Performing Received Neither a Selection Review Nor an Oversight Review

Six of the 11 projects that were identified as being both poorly planned and poorly performing in 2007 did not receive a selection or an oversight review by the department-level IRB. Funding requests for fiscal year 2008 for these 6 projects accounted for about $3.7 billion (see table 3). Without consistent involvement of department-level IRBs in selecting and overseeing projects that have been identified as poorly planned or poorly performing, agencies incur the risk that these projects will not improve, which could lead to potentially billions of federal taxpayer dollars being wasted.

Conclusions

Department-level investment review boards' involvement in selecting and overseeing their agencies' IT projects is critical to ensuring that these projects meet mission needs and that federal funds are not wasted. To their credit, the 24 major federal agencies have established guidance calling for department-level boards to perform project selection and oversight reviews. However, the department-level boards for two agencies did not include representation from their business units; these agencies therefore lack assurance that their boards include all of the executives who are in the best position to make the full range of decisions needed to enable the agency to carry out its mission most effectively. While having selection and oversight guidance is a good step, it is worthwhile only if effectively implemented. The fact that many poorly planned or poorly performing projects were not reviewed by department-level boards is particularly alarming considering that they represent, in total, about $6 billion in funding and that the Management Watch List and High-Risk List were established specifically to draw management attention to such projects.
Until agencies ensure that their department-level review boards are consistently involved in selecting and overseeing these projects, they will continue to incur the risk that the projects will not improve and that potentially billions of federal taxpayer dollars will be wasted.

Recommendations for Executive Action

To ensure that IT projects are effectively managed, we are making recommendations to the agencies whose practices were not consistent with sound management practices. Specifically, we recommend that

the Secretaries of Commerce and Labor ensure that their department-level review boards include business unit (i.e., mission) representation;

the Chairman of the Nuclear Regulatory Commission direct the Executive Director for Operations to define conditions for elevating issues related to project selection and oversight to its department-level IRB; and

the Secretary of Veterans Affairs define and implement responsibilities for the department-level IRB to oversee projects in operations and maintenance.

In addition, we are recommending that the Secretaries of the Departments of Defense, Education, Homeland Security, Transportation, Treasury, and Veterans Affairs; the Administrator of the National Aeronautics and Space Administration; the Chairman of the Nuclear Regulatory Commission; and the Administrator of the U.S. Agency for International Development ensure that the projects identified in this report as not having received department-level IRB selection or oversight reviews receive these reviews.

Agency Comments and Our Evaluation

We sent a draft of this report to the 24 major agencies and received responses from 20. Of these 20, 15 provided comments, and 5 stated that they had no comments (we had not made any recommendations to these 5 agencies: the Department of Health and Human Services, the Department of State, the Environmental Protection Agency, the National Science Foundation, and the Office of Personnel Management).
Of the 15 agencies that provided comments, 11 generally agreed with our recommendations, and 1 (the Department of Justice) did not. Three agencies (the Department of Housing and Urban Development, the Department of the Interior, and the Social Security Administration) provided views on various aspects of our report. Several agencies also provided technical comments, which we incorporated as appropriate. The agencies' comments and our responses are summarized below:

In written comments on a draft of the report, the Department of Commerce's Chief Information Officer addressed our recommendation that the department ensure that its department-level review board include business unit (i.e., mission) representation. The official stated that the department had modified the membership structure of its investment review board to give operating unit management latitude in identifying the senior managers best able to provide effective representation and, as a result, had broadened the board's membership to include chief financial officers from certain operating units as well as the Deputy Director of the Bureau of the Census. The Department of Commerce's comments are reprinted in appendix II.

In written comments on a draft of the report, the Department of Defense's Deputy Chief Information Officer concurred with our recommendation to ensure that the Defense Information System for Security receive an oversight review, stating that, going forward, it will ensure that the project receives all required IRB reviews. The department partially concurred with our recommendation to ensure that its Integrated Acquisition Environment Shared Services Provider-Past Performance Information Retrieval System receive an oversight review, stating, as indicated in the report, that the project is below the threshold required for department-level IRB oversight.
The department stated, however, that the project will be brought before the appropriate department-level IRB for compliance review if and when it meets the financial threshold. The department also provided technical comments, which we have incorporated as appropriate. The Department of Defense's comments are reprinted in appendix III.

In written comments on a draft of the report, the Department of Education's Chief Information Officer agreed with our recommendation to ensure that the two projects we identified in the report as not having received department-level IRB selection or oversight reviews receive such reviews, stating that the IRB will review the investments, render decisions as appropriate, and incorporate the results in the IT portfolio currently under review. The department also noted that, while the projects we reviewed were under the oversight of the Federal Student Aid Executive Leadership Team, they would be brought under the department's oversight along with all other investments. The department disagreed with the statement that the projects reviewed did not receive a selection or oversight review, stating that they had been selected and reviewed by the Federal Student Aid Executive Leadership Team. In our report, we have clarified the discussion of these reviews by the Executive Leadership Team where appropriate. The Department of Education's comments are reprinted in appendix IV.

In written comments on a draft of this report, the Department of Homeland Security's Director for Departmental GAO/OIG Liaison Office agreed with the recommendation to conduct department-level reviews of the three programs we reviewed and provided evidence of department Acquisition Review Board reviews for these programs during fiscal year 2008.
The department disagreed with the assertion that the department-level review boards were not active in overseeing the three projects we examined during our review and provided decision memoranda (three of which we had not been provided before) as evidence of reviews by the boards in place for 2007, the time period we considered. However, in our report, we do not state that the department-level boards were not active. Rather, we note that the department did not provide sufficient evidence of department-level IRB reviews. We did not change our assessments for the three projects because the additional documentation received still did not sufficiently document the 2007 reviews. The documentation we have seen from more recent reviews more completely documents department-level IRB reviews, and we have noted this in our report. The department also provided technical comments. The department's comments are reprinted in appendix V.

In written comments on a draft of this report, the Acting Chief Information Officer of the Department of Housing and Urban Development stated that the department-level IRB will maintain its disciplined process for program executives to participate in selecting and overseeing projects. We did not make any recommendations to the department. The Department of Housing and Urban Development's comments are reprinted in appendix VI.

In written comments on a draft of this report, the Department of the Interior's Deputy Assistant Secretary for Budget and Business Management agreed with our conclusions that consistent involvement of department-level review boards in selecting and overseeing projects, particularly poorly performing projects, is important in safeguarding federal taxpayer dollars.
The department also asked that the definition of high-risk projects reflect the fact that some investments designated as such are performing within acceptable thresholds but require heightened awareness and oversight by investment review boards because of their importance. To address this comment, we have added OMB's criteria for designating projects as high-risk to our report background. We did not make any recommendations to the Department of the Interior. The Department of the Interior's comments are reprinted in appendix VII.

In written comments on a draft of this report, the Department of Justice's Assistant Attorney General for Administration disagreed with our recommendation that the department ensure that its department-level review board include business unit representation and provided clarification on the role and responsibilities of the Deputy Attorney General, who chairs the board, and on the participation of component executives in the board's decisionmaking process. Based on this clarification, we agree that the board provides adequate business unit representation. We have noted this change in our report and removed the related recommendation. In its comments, the department also took issue with our use of the term "poorly performing" to characterize the projects we reviewed. We are not implying, as the department states, that these projects are "near failing." We have clarified our use of the term in the report and, in the case of the Sentinel project (which we have reviewed), acknowledged the progress made in managing the project. The Department of Justice's comments are reprinted in appendix VIII.
In written comments on a draft of this report, the Department of Labor’s Assistant Secretary for Administration and Management addressed our recommendation to ensure that its department-level review board include business unit representation by acknowledging that the board does not include senior executives from business units and stating that, while it believes the executives on the board effectively represented the business interests of their respective organizations, it will consider appropriate and efficient steps for including senior executives from business units as part of the board’s process. The Department of Labor’s comments are reprinted in appendix IX. In e-mail comments on a draft of this report, the Department of Transportation’s Director of Audit Relations addressed our recommendation to ensure that the projects we identified as not having received department-level IRB selection or oversight reviews receive these reviews by stating that actions are underway to schedule a summer IRB meeting to review the entire budget year 2011 portfolio of IT investments, and that the Combined IT Infrastructure investment which we reviewed is expected to be reviewed in detail. In written comments on a draft of this report, the Department of the Treasury’s Deputy Assistant Secretary for Information Systems and Chief Information Officer addressed our recommendation to ensure that the projects we identified as not having received department-level IRB selection or oversight reviews receive these reviews by noting recent efforts to reconstitute a department-level Executive Investment Review Board, increase the oversight role of its Chief Information Officer Council, and remediate weaknesses associated with the three projects we reviewed. The Department of the Treasury’s comments are reprinted in appendix X. 
In written comments on a draft of this report, the Secretary of Veterans Affairs concurred with our recommendation to define and implement responsibilities for the department-level IRB to oversee projects in operations and maintenance, noting that the Programming and Long Term Issues Board will include operational programs/projects in its program reviews for fiscal year 2010. The department also concurred with our recommendation to ensure that the project we identified as not having received department-level IRB oversight reviews receive these reviews and stated that it will address actions to ensure this in its plan for responding to our recommendations. The Department of Veterans Affairs' comments are reprinted in appendix XI.

In written comments on a draft of this report, the National Aeronautics and Space Administration's Associate Deputy Administrator partially concurred with our recommendation that the projects identified in this report as not having received department-level IRB selection or oversight reviews receive these reviews. The official stated that the departmental board will continue to review major IT investments that are not highly specialized in nature (this includes two of the four projects we reviewed), while another governing body will maintain responsibility for ensuring the overall successful performance of NASA's program portfolio, including the highly specialized IT investments. We received information about this second governing body after we sent our report to NASA for comment. During the comment period, the agency also provided us additional documentation on the projects we reviewed. After reviewing this documentation, we changed the reported reason column in table 1 from "department-level board was not active (i.e., it had not yet been established)" to "NASA did not provide evidence that a selection review had been performed by the appropriate department-level IRB" for the three projects we reviewed for selection.
In addition, we changed the department-level IRB review column in table 2 for the Integrated Financial Management Improvement program from a "no" to a "yes." NASA's comments are reprinted in appendix XII.

In written comments on a draft of this report, the Nuclear Regulatory Commission's Deputy Executive Director for Corporate Management, Office of the Executive Director for Operations, agreed with our recommendation to define conditions for elevating issues related to project selection and oversight to its department-level IRB, stating that the commission will review and enhance its existing guidance for project selection and oversight to ensure that its process complies with the intent of the Clinger-Cohen Act. This will include updating the Information Technology Business Council charter for project oversight reviews to include any necessary changes to the process or criteria for review by the Information Technology Senior Advisory Council. The commission also agreed with our recommendation to ensure that the National Source Tracking System, which we identified as not having received a selection or oversight review by the department-level IRB, receive such a review. The Nuclear Regulatory Commission's comments are reprinted in appendix XIII.

In written comments on a draft of this report, the Commissioner of the Social Security Administration asked that we remove the Information Technology Operations Assurance project from our report because it is not a poorly planned or poorly performing project. During the agency comment period, we informed the agency that we would be removing the project from our sample and, based on clarification provided by the Associate Chief Information Officer that the project reported a positive cost variance, agreed that it should not be considered poorly performing. We did not make any recommendations to the agency. The Social Security Administration's comments are reprinted in appendix XIV.
In e-mail comments on a draft of this report, the U.S. Agency for International Development concurred with our recommendation to ensure that the project we identified as not having received a department-level IRB oversight review receive this review. The agency noted, however, that the review might not occur if the project is not funded.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to other interested congressional committees, the Director of the Office of Management and Budget, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact me at (202) 512-9286 or at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XV.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to determine whether (1) federal departments/agencies have guidance on the role of their department-level investment review boards (IRB) in selecting and overseeing information technology (IT) projects and (2) these boards are performing selection and oversight reviews of poorly planned and poorly performing projects. To address the first objective, we reviewed the investment management guidance (including policy documents and board charters) of each of the 24 agencies listed in the Chief Financial Officers (CFO) Act of 1990 (referred to in our report as "the 24 major agencies"). In reviewing the guidance, we determined the role department-level IRBs are expected to play in selecting and overseeing IT projects, updating the findings from our 2004 governmentwide review of agencies' use of key investment management practices.
We also reviewed the composition of the boards to determine whether they included senior executives from both IT and business (i.e., mission) units, in accordance with the GAO IT Investment Management framework, which identifies the key practices for creating and maintaining successful investment management processes. For the second objective, we selected a sample of 48 IT projects that were identified as poorly planned on the Office of Management and Budget's Management Watch List, reported as poorly performing on the High-Risk List, or both. To provide a governmentwide perspective, we attempted to select one project from the 2007 Management Watch List and one project from the High-Risk List with performance shortfalls during 2007 for each of the 24 major agencies. We focused on the high-risk projects with performance shortfalls in the areas of cost and schedule, since we had reported in September 2007 that these were the most frequently reported shortfalls. To obtain broader representation of agencies with high-risk projects, we also selected three high-risk projects that had performance shortfalls in 2006. From these lists, we selected the projects with the highest funding levels according to the fiscal year 2008 President's budget request. When an agency had a project on only one of the lists (i.e., only the Management Watch List or the High-Risk List), we selected at least 2 projects from that list. For example, we selected 2 high-risk projects with shortfalls for the Environmental Protection Agency because the agency did not have any projects on the Management Watch List for the time frame we considered. Our selection process resulted in 26 projects from the Management Watch List, totaling about $7.4 billion in the fiscal year 2008 budget request, and 33 projects from the High-Risk List, totaling about $5.2 billion in the fiscal year 2008 budget request. Eleven of these projects, totaling about $4 billion, were on both lists.
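The report's "poorly performing" criterion, a negative cost or schedule variance exceeding 10 percent, can be expressed as a simple filter. The sketch below is illustrative only; the project names and variance figures are hypothetical, not data from the report.

```python
# Illustrative sketch: flag projects under the report's definition of
# "poorly performing" (negative cost or schedule variance worse than -10%).
# Project names and variance figures below are hypothetical.

def is_poorly_performing(cost_variance_pct, schedule_variance_pct):
    """Flag a project whose cost or schedule variance is worse than -10 percent."""
    return cost_variance_pct < -10 or schedule_variance_pct < -10

# Hypothetical projects: (name, cost variance %, schedule variance %)
projects = [
    ("Project A", -12.5, -3.0),   # cost overrun beyond the threshold
    ("Project B", 2.0, -4.0),     # within acceptable thresholds
    ("Project C", -8.0, -15.0),   # schedule slip beyond the threshold
]

flagged = [name for name, cost, sched in projects
           if is_poorly_performing(cost, sched)]
print(flagged)  # ['Project A', 'Project C']
```

A project flagged by this rule would remain in the high-risk sample; one failing both this test and Management Watch List membership would be dropped, as was done for the seven projects removed after agency comment.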
The Department of Energy and the National Science Foundation did not have any projects on the Management Watch List or on the High-Risk List with shortfalls; therefore, we did not select any projects from these agencies. We removed two Management Watch List projects and five high-risk projects from our initial sample after sending the draft report to the agencies for comment because we determined, after further review and discussion with the agencies, that these projects had not been on the Management Watch List during 2007 or had not reported negative cost or schedule variances exceeding 10 percent between December 2006 and December 2007. This brought our sample to 24 Management Watch List projects, totaling about $7.3 billion in the fiscal year 2008 budget request; 28 high-risk projects, totaling about $4.7 billion in the fiscal year 2008 budget request; and 11 projects on both lists, totaling $4 billion in the fiscal year 2008 budget request. To determine whether department-level IRBs were performing selection and oversight reviews of poorly planned and performing projects, we requested evidence of board reviews for the 48 projects in our initial sample during the time they were either on the Management Watch List or the High-Risk List. We analyzed the documentation obtained, and, when reviews had not been performed, we followed up with agencies to determine why the required reviews had not been performed. For the oversight reviews, we determined whether project cost, benefit, schedule, and risk data had been provided to the board, but we did not assess the reliability of this information. We conducted this performance audit from January 2008 to June 2009 in Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Commerce

Appendix III: Comments from the Department of Defense

Appendix IV: Comments from the Department of Education

Appendix V: Comments from the Department of Homeland Security

Appendix VI: Comments from the Department of Housing and Urban Development

Appendix VII: Comments from the Department of the Interior

Appendix VIII: Comments from the Department of Justice

Appendix IX: Comments from the Department of Labor

Appendix X: Comments from the Department of the Treasury

Appendix XI: Comments from the Department of Veterans Affairs

Appendix XII: Comments from the National Aeronautics and Space Administration

Appendix XIII: Comments from the Nuclear Regulatory Commission

Appendix XIV: Comments from the Social Security Administration

Appendix XV: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the individual named above, Sabine R. Paul, Assistant Director; William G. Barrick; Neil J. Doherty; Nancy E. Glover; Robert G. Kershaw; Lee A. McCracken; Tomas Ramirez; and Kevin C. Walsh made key contributions to this report.
The federal government expects to spend about $71 billion for information technology (IT) projects for fiscal year 2009. Given the amount of money at stake, it is critical that these projects be planned and managed effectively to ensure that the public's resources are being invested wisely. This includes ensuring that they receive appropriate selection and oversight reviews. Selection involves identifying and analyzing projects' risks and returns and selecting those that will best support the agency's mission needs; oversight includes reviewing the progress of projects against expectations and taking corrective action when these expectations are not being met. GAO was asked to determine whether (1) federal departments and agencies have guidance on the role of their department-level investment review boards in selecting and overseeing IT projects and (2) these boards are performing reviews of poorly planned and poorly performing projects. In preparing this report, GAO reviewed the guidance of 24 major agencies and requested evidence of department-level board reviews for a sample of 41 projects that were identified as being poorly planned or poorly performing. The 24 major federal agencies have guidance calling for department-level investment review boards to select and oversee IT investments. However, while all of the agencies had department-level boards, the board membership for the Departments of Commerce and Labor did not include business unit (i.e., mission) representation as called for by IT investment management best practices. Without business unit representation on their department-level boards, these agencies will not have assurance that the boards include those executives who are in the best position to make the full range of investment decisions necessary for them to carry out their missions most effectively. About half of the projects GAO examined did not receive selection or oversight reviews. 
Specifically, 12 of the 24 projects GAO reviewed that were identified by OMB as being poorly planned (accounting for $4.9 billion in the President's fiscal year 2008 budget request, or two-thirds of the funding represented by the 24 projects) did not receive a selection review, and 13 of the 28 poorly performing projects GAO reviewed (amounting to about $4.4 billion, or 93 percent of the funding represented by the 28 projects) did not receive an oversight review by a department-level board. Agencies provided several reasons for not performing department-level board reviews, including some that were not consistent with sound management practices. Furthermore, 6 of the 11 projects in the sample identified as being both poorly planned and poorly performing, with over $3.7 billion in funding in the President's fiscal year 2008 budget request, received neither a selection review nor an oversight review. Without consistent involvement of department-level review boards in selecting and overseeing projects that have been identified as poorly planned or poorly performing, agencies incur the risk that these projects will not improve, potentially leading to billions of federal taxpayer dollars being wasted.
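The funding shares quoted in this summary follow from the sample totals given in the methodology, roughly $7.3 billion for the 24 poorly planned projects and $4.7 billion for the 28 poorly performing projects. The cross-check below uses those rounded totals; the report's 93 percent figure presumably reflects unrounded dollar amounts.

```python
# Quick cross-check (illustrative) of the funding shares quoted in this
# summary, using the rounded sample totals from the methodology.

watch_list_total = 7.3      # $ billions, 24 Management Watch List projects
high_risk_total = 4.7       # $ billions, 28 high-risk projects

no_selection_review = 4.9   # $ billions, 12 projects lacking a selection review
no_oversight_review = 4.4   # $ billions, 13 projects lacking an oversight review

selection_share = no_selection_review / watch_list_total
oversight_share = no_oversight_review / high_risk_total

print(round(selection_share, 2))  # 0.67, the "two-thirds" figure
print(round(oversight_share, 2))  # 0.94, close to the cited 93 percent
                                  # (which reflects unrounded totals)
```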
Background

The Department of Defense (DOD) has several commissioning programs that it uses to bring new officers onto active duty, including the service academies, ROTC, and the services' Officer Candidate Schools/Officer Training Schools (OCS/OTS). These programs vary in length, intensity, and content; the required period of active duty service incurred; and their cost to DOD. Each of the academies produces about 1,000 graduates a year. Consequently, if 5 percent of the graduates were to enter the guard/reserve, it would involve about 50 graduates a year from each of the three DOD academies. In 1996, the numbers of ROTC and OCS/OTS officers produced, respectively, in each of the services were 2,887 and 350 in the Army, 857 and 1,383 in the Navy, 227 and 365 in the Marine Corps, and 1,637 and 646 in the Air Force. The reserve components have become increasingly central to the U.S. national defense strategy and have played an integral part in most recent military operations, including the Gulf War and Bosnia. The reserve component consists of various categories involving different degrees of participation. The policy proposal we examined specified that placement of academy graduates would be in an active reserve status, which includes only those in the selected reserve. The selected reserve includes individuals in a part-time, paid drill status in either a reserve or National Guard unit; personnel in the Active Guard/Reserve (AGR) on active duty providing full-time support; and trained personnel, called Individual Mobilization Augmentees (IMA), designated to fill specific positions during mobilization. Since AGR personnel are on active duty and IMA personnel are typically fully trained, we focused our examination of the policy proposal only on the drilling guard/reserve. (See app. II for further background on the reserve components.)
Academy Graduates in the Drilling Guard/Reserve

As of October 1, 1996, the drilling guard/reserve officer corps of 109,594 included 5,014 academy graduates, or about 4.6 percent (see fig. 1). By comparison, academy graduates make up about 17.4 percent of the active duty officer corps (see fig. 2). The Navy reserve has the largest proportion of academy graduates at 10.3 percent, followed by the Air Force at 6.0 percent, the Marine Corps at 3.5 percent, and the Army at 2.6 percent. About 424 academy graduates were on full-time active duty in a reserve component under 10 U.S.C. 12301(d) and 32 U.S.C. 502(f) for the purpose of organizing, administering, recruiting, instructing, and training the reservists. See appendix III for additional details on the number of academy graduates serving in the selected reserve.

Feasibility of Academy Graduates Serving in the Guard/Reserve Upon Graduation

Concerns Raised Regarding Lack of Experience and Training for Immediate Reserve Duty

DOD, the active services, and the reserve components, with the exception of the Army National Guard and the Air National Guard, stated that sending service academy graduates directly to the drilling guard/reserve without officer skill training or active duty experience would not enhance the capability of the reserve component. Newly commissioned officers, regardless of whether they come from the academies, ROTC, or OCS/OTS, are not fully prepared for direct entry into military jobs. The military education at the service academies and the other commissioning programs focuses on preparing graduates to go into the active component, but these commissioning programs do not provide specific military occupational skills. The transition into the active service is considered a necessary part of completing an officer's education.
Also, DOD officials told us that officers who enter the guard/reserve without active duty experience would likely be at a competitive disadvantage, which could negatively affect their long-term career potential as members of the reserve component. An additional concern to the reserve components is funding for the mandatory follow-on training of newly commissioned officers transferred directly to units after commissioning. The requirement to train these officers would shift to the respective component, imposing significant increases in training funds because the basic branch qualification courses involve active duty and sometimes lengthy training.

Direct Entry Into Guard/Reserve May Not Be Considered Adequate Payback for the Cost of Academy Education

DOD, the service academies, and the reserves believe that serving in the drilling guard/reserve may not be considered by the Congress or the taxpayers to be sufficient recoupment for the cost of an academy education. The service academies spent about $762 million in fiscal year 1995 to produce 2,900 officers. The cost of producing an officer in the class of 1995 was $277,000 at the Military Academy, $218,000 at the Naval Academy, and $283,000 at the Air Force Academy, compared with $82,000 for the scholarship ROTC program. The services' OCS/OTS programs and the National Guard OCS programs are considerably less expensive. The Congress has expressed concern about ensuring an adequate payback for the cost of officer training. The minimum active duty service commitment for academy graduates is 5 years, and ROTC graduates are obligated to serve 4 years. The active duty service commitment for academy graduates was raised to 6 years, starting with the class entering the academies in 1992, in an effort to ensure a greater return for the cost of an academy education. But before the change took effect, the 6-year obligation was rolled back to 5 years in 1995 because of concerns that it would harm academy recruiting.
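The per-day payback figures in the discussion that follows rest on back-of-the-envelope arithmetic: roughly 2 drill days per month plus about 14 annual training days, over the 5-year minimum obligation, set against the $218,000-to-$283,000 cost of an academy education cited above. The sketch below checks that arithmetic; it is illustrative only, not an official DOD cost model.

```python
# Back-of-the-envelope check of the drilling guard/reserve payback
# arithmetic (illustrative only, not an official DOD cost model).

drill_days_per_year = 2 * 12   # 2 drill days per month
annual_training_days = 14      # roughly 14 days of annual training
obligation_years = 5           # minimum academy service commitment

total_days = (drill_days_per_year + annual_training_days) * obligation_years
print(total_days)  # 190 days of service over the 5-year obligation

# Per-officer academy education costs, class of 1995 (from the report):
# $218,000 (Naval Academy) to $283,000 (Air Force Academy)
for cost in (218_000, 283_000):
    print(round(cost / total_days))  # about $1,147 and $1,489 per day
```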
DOD officials have raised the question of whether attendance at training for 2 days per month and an annual training requirement of about 14 days would provide an adequate payback for DOD's investment of $218,000 to $283,000 in an academy graduate's education. If an academy graduate's 5-year service obligation were required to be served through drilling guard/reserve participation, it would amount to about 190 total days of service. That amount of service would yield an implicit payback rate for the graduate's education of between $1,147 and $1,489 per day of drilling guard/reserve service.

Administrative and Practical Difficulties in Accessing Academy Graduates Directly Into Active Reserve Service

Officials cited a number of administrative and practical difficulties that would have to be overcome to make direct accession of academy graduates into the reserves feasible. They cited problems regarding the absence of an employment placement process at the academies; placement of graduates into drilling guard/reserve units; enforcement of guard/reserve service obligations; development of a fair and efficient selection process for determining which academy graduates would go to the guard/reserve; additional funding to provide skill training; the need to increase Navy ROTC enrollments to take the place of the academy graduates on active duty; and limited capacity in the Naval Reserve to absorb additional officers. The academies send their commissioned graduates to active duty and therefore have had no need for a civilian job placement operation. However, since service in the drilling guard/reserve would entail only part-time service (1 weekend a month plus an annual 2-week training period), academy graduates headed for immediate placement in the guard/reserve would need to be offered assistance in finding civilian jobs. Job placement assistance for ROTC students who are not offered active duty assignments is handled by the college or university they attend, the same way it is for other students.
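The per-day payback figures cited above can be reproduced with a short calculation. This is an illustrative sketch only; every input is a figure stated in this report (48 four-hour drill periods per year, about 14 days of annual training, a 5-year obligation, and education costs of $218,000 to $283,000):

```python
# Illustrative check of the payback arithmetic cited in the report.
DRILL_PERIODS_PER_YEAR = 48   # four-hour inactive duty training periods
HOURS_PER_PERIOD = 4
ANNUAL_TRAINING_DAYS = 14     # approximate annual training requirement
OBLIGATION_YEARS = 5          # minimum service commitment for academy graduates

# 48 four-hour periods equal 24 eight-hour days of drilling per year.
drill_days_per_year = DRILL_PERIODS_PER_YEAR * HOURS_PER_PERIOD / 8
days_per_year = drill_days_per_year + ANNUAL_TRAINING_DAYS
total_days = days_per_year * OBLIGATION_YEARS   # about 190 days of service

for education_cost in (218_000, 283_000):
    per_day = education_cost / total_days
    print(f"${education_cost:,} / {total_days:.0f} days = ${per_day:,.0f} per day")
```

Dividing the Naval and Air Force Academy per-officer costs by 190 days yields the $1,147 and $1,489 per-day figures cited above.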
Regardless of the source of commission, there is no guarantee that graduates would take jobs that are geographically close enough to guard/reserve units with vacancies. Potential reservists cannot be directed to specific units with vacancies if they live beyond a certain distance from the unit/reserve training site. The current policy is that guard/reserve members must live within 50 miles, or a 90-minute commute, of their training sites. If multiple training periods are performed together and mess facilities are available at the site, the distance is extended to 100 miles. However, we were told that the Army National Guard makes exceptions to this policy in less populated states for highly qualified officers and enlisted candidates who are willing to travel greater distances. DOD and service officials told us it would be difficult to enforce participation in the drilling guard/reserve by academy graduates or others who decided to leave active guard/reserve service with some remaining service obligation. The guard/reserves depend upon voluntary service. Under current policy, guard/reserve officers with a valid reason, such as family hardship, can move from the drilling guard/reserve to an inactive status at any time. Also, the enforcement alternative of calling to active duty those members who fail to abide by their guard/reserve commitment would be counter to the proposal’s objectives. Sending academy graduates to the guard/reserve directly after graduation would create a dilemma regarding fair and efficient selection criteria. Presently, students select their service assignments based on class standing, with top performing cadets/midshipmen having preference to available assignments over lower performers. A determination would need to be made regarding whether immediate guard/reserve selection would be voluntary or involuntary. 
If voluntary, there would be at least two issues to consider: whether there should be any restrictions on eligibility and what would happen if fewer than 5 percent volunteered. If assignment to the guard/reserve were involuntary, academy officials expressed concerns about a negative impact on cadet/midshipman motivation and about breaking faith with the promise of an active duty assignment following graduation. During the past 5 years, Air Force Reserve officer accessions have primarily been officers with prior active service. Consequently, the Air Force Reserve has not planned or budgeted for training officers without active duty experience. The costs of initial skill training for academy graduates would have to be programmed and budgeted by the Air Force Reserve. Sending 5 percent of academy graduates to the reserve components would require redirecting a similar number of ROTC graduates to active service. Initially, this would be a problem for the Navy. Navy ROTC programs have not been producing any graduates for the reserve. Consequently, the Navy would not currently have a sufficient number of excess ROTC graduates to replace the approximately 50 academy graduates a year who would be diverted from active duty to reserve service. Since most Naval ROTC students are on scholarship, with long lead times between scholarship award and graduation, implementing such a policy would require additional funding and substantial lead time. Finally, Navy officials stated that there are too few billets in the Naval Reserve to accommodate the number of officers already seeking Naval Reserve participation. Taking some of those billets for newly commissioned ensigns coming directly from the Naval Academy would compound the problem.

National Guard Has Vacancies at Junior Officer Grades

Army National Guard officials stated that they have about 2,261 vacancies at the first and second lieutenant grade levels and believe the vacancies could be partially filled by academy graduates entering directly after commissioning.
The Air National Guard has about 200 entry-level officer vacancies a year, particularly in technical occupations, that could be filled by newly commissioned officers directly after graduation. Both the Army and the Air National Guard have recently been recruiting ROTC graduates who were commissioned but were not offered active duty service. The Army Guard brought 283 ROTC graduates directly into drilling guard service in 1994 and 852 in 1996. The Air Guard brought in 15 ROTC graduates in 1995, and another 40 applied in 1996. ROTC graduates entering the guard directly after commissioning are given the appropriate officer skill training.

Efforts to Enhance the Capability of the Reserve Component

The Army National Guard Combat Readiness Reform Act of 1992 provided several initiatives for enhancing the capability of the Army National Guard to deploy. Responding to the act, the Secretary of the Army established an objective of increasing the proportion of qualified prior active duty officers in the Army National Guard to 65 percent. However, as shown in table 1, the proportion of officers in the Army guard/reserve with 2 or more years of active duty service is only about 50 percent. The 65-percent goal has been suspended because, under current manpower ceilings, increasing the percentage of experienced officers would require forced early retirement of guard officers with limited active duty experience. Another provision of the act, section 1112, allowed the Secretary to establish a program under which academy graduates and distinguished ROTC graduates could complete their military service obligation in the selected reserve. ROTC graduates with 2 years of service are allowed to serve the remainder of their obligation in the Army National Guard. This program has since been consolidated into the Voluntary Early Release/Retirement Program (VERRP) under category G.
The numbers of academy and ROTC graduates leaving active duty before completing their initial active duty service obligation under VERRP are shown in tables 2 and 3. Those leaving active duty under category G before completing their military service obligation were required to serve out their remaining service obligation in the selected reserve. The officers shown in the inactive reserve column qualified for VERRP under a category other than category G (e.g., having less than 1 year of initial active duty service obligation remaining) and were not required to serve in the selected reserves. These numbers indicate that the drilling guard/reserve has the potential to obtain junior officers through programs such as VERRP. Moreover, such officers would enter the guard/reserve already possessing military skill training and active duty experience.

Conclusions

The proposal to send up to 5 percent of service academy graduates directly to the drilling guard/reserve would likely encounter significant administrative and practical difficulties and be perceived as expensive. Reserve component capability would not be appreciably enhanced because the newly commissioned officers would not enter the guard/reserve with specific military skills or experience. Also, the small number of potential officer accessions proposed (about 50 per service per year) would not go far in relieving the junior officer needs of the National Guard. However, the program to attract academy- and ROTC-educated officers with 2 to 3 years of active duty experience into the selected reserve under the Army's VERRP appears to be relatively successful and offers the potential to provide more than 50 junior officers who would already be trained and experienced.

Agency Comments and Our Evaluation

DOD reviewed a draft of this report and concurred with our conclusions. DOD's comments are reprinted in appendix III.
Scope and Methodology

To evaluate the feasibility of sending service academy graduates directly to the drilling guard/reserve, we interviewed officials at the Office of the Secretary of Defense, the service headquarters, the service academies, reserve headquarters, and the National Guard Bureau about the potential benefits and difficulties in accessing academy graduates directly into the drilling guard/reserve. The Office of the Secretary of Defense provided the cost data for the service academies and the ROTC program. The information on the number of officers and types of commissions for the services and the drilling guard/reserve was provided by the individual services from their personnel databases. The VERRP results were provided by the Office of the Chief of Staff, U.S. Army, Congressional Activities Division. We did not independently verify the data provided. We conducted our work from November 1996 to February 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Superintendents of the Military, Naval, and Air Force academies. Copies will also be made available to others upon request. If you or your staff have any questions concerning this report, please contact me at (202) 512-5140. The major contributors to this report were William E. Beusse, Lawrence E. Dixon, and Jeanett H. Reid.

Reserve Components

The reserves consist of three major categories: the Ready Reserve, the Standby Reserve, and the Retired Reserve. The Ready Reserve comprises three groups—the Selected Reserve, the Individual Ready Reserve, and the Inactive National Guard (see table I.1). The military members of the Ready Reserve are organized in units or as individuals, both of which are liable for recall to active duty to augment the active forces in time of war or national emergency.
The Selected Reserve includes the drilling National Guard and reservists assigned to units, full-time support personnel, and individual mobilization augmentees. Under the total force policy, reserve component forces are considered an integral part of the U.S. Armed Forces and essential to implementation of the U.S. defense strategy. Reductions in the size of the active force and increased U.S. participation in peace operations since the end of the Cold War have increased reliance on the reserve forces, as illustrated by the inclusion of reserve component units in war-fighting contingency plans and peacetime operations.

Training of the Guard/Reserve

As part of their service obligation, most guard/reserve members are required to participate in prescribed training activities. Members of the Selected Reserve are required to participate in training to maintain their readiness and proficiency. Each year they must participate in at least 48 four-hour inactive duty training periods—the equivalent of 24 eight-hour days, or 12 weekends a year. They must also participate in annual training periods of about 2 weeks, which is generally done during one consecutive period. However, some reservists, particularly those in the Air Force and the Navy components, often fulfill the annual training requirement during several shorter periods. Members of the Individual Ready Reserve and Inactive National Guard are not required to meet the same training requirements as members of the Selected Reserve. However, they are required to serve 1 day of duty each year to accomplish screening requirements and may participate voluntarily in inactive duty training. Members of the Retired Reserve are not subject to mandatory training. However, they are encouraged to participate voluntarily to maintain their readiness.

Active Duty and Drilling Guard/Reserve Military Officers

Table II.5: Active Duty Guard/Reserves Serving Under 10 U.S.C.
Section 12301(d)

Comments From the Department of Defense
Pursuant to a legislative requirement, GAO reviewed the policy and cost implications of up to 5 percent of each military service academy's graduating class serving in the reserve with a corresponding increase in the number of Reserve Officers Training Corps (ROTC) graduates serving on active duty, focusing on: (1) the number of academy graduates serving in an active status in the reserve component; (2) the feasibility and implications of a proposal to have academy graduates serve in a drilling status in the reserve component without having served on active duty as a means of enhancing the capability of the guard/reserves; and (3) other means through which the reserve components are recruiting junior officers. GAO noted that: (1) as of October 1, 1996, 5,014 service academy graduates were serving in the active reserve components; (2) additionally, 424 academy graduates were on active duty with a reserve component performing full-time Active Guard/Reserve support functions under the authority of 10 U.S.C. 12301(d) and 32 U.S.C. 
502(f); (3) about 4.6 percent of the officers in the drilling guard/reserves were academy graduates compared to 17.4 percent of the active forces; (4) Department of Defense (DOD), service, and academy officials, with the exception of those representing the National Guard, believe that sending academy graduates to the drilling guard/reserves upon graduation would be counterproductive; (5) they pointed to the need for new officers, regardless of their commissioning source, to receive skill training and experience before they can be productive guard/reserve members; (6) since the academies are the most expensive source of new officers, concerns were expressed that sending academy graduates to the reserves before they complete their active duty obligation would not produce a sufficient payback for the cost of their education; (7) DOD officials additionally cited a number of administrative and practical problems that would require policy changes at the academies and the selected reserves; (8) National Guard officials, however, noted that they have vacancies for officers in the junior officer grades and believe that the assignment of academy graduates directly to the National Guard would be feasible; (9) based on their experiences with programs for new ROTC graduate accessions, National Guard officials believe that the policy and administrative difficulties in accessing academy graduates could be managed; (10) the reserve components presently receive academy graduates through normal attrition as academy-produced officers join the drilling guard/reserves after completing their obligated active duty service; (11) in addition, efforts to downsize the active duty force have had a side benefit of enhancing the capability of the reserve component by getting more trained and experienced officers into active reserve status; (12) recently, these early release programs have been opened to graduates from the academies and the ROTC; and (13) since 1994, the Army National Guard 
Combat Readiness Reform Act of 1992 has allowed the Army to bring in 482 academy graduates and 108 graduates from the ROTC with 2 to 3 years of experience to serve the remainder of their military service obligations in the selected reserves.
GAO Contact and Staff Acknowledgments

For further information on this statement, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Other staff who made key contributions to this testimony include Divya Bali, Lee Carroll, Glenn C. Fischer, Jon Ludwigson, and Barbara Timmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2008, the Department of the Interior (Interior) collected over $22 billion in royalties and other fees related to oil and gas. Interior's Bureau of Land Management (BLM) and Minerals Management Service (MMS) manage federal onshore and offshore oil and gas leases, respectively. Acquiring a federal lease gives the lessee the rights to explore for and develop the oil and gas resources under the lease, including drilling wells and building pipelines that may lead to oil and gas production. This statement focuses on findings from a number of recent GAO reports on federal oil and gas management. GAO has made numerous recommendations to Interior, which the agency generally agreed with and is taking steps to address. However, two important issues remain unresolved. Specifically, GAO made one recommendation and raised one matter for congressional consideration that together call for a comprehensive reevaluation of how Interior manages federal oil and gas resources. Interior has not undertaken such a comprehensive review, and until it does, the public cannot have reasonable assurance that federal oil and gas resources are being appropriately managed for the public good. In recent years, GAO has conducted numerous evaluations of federal oil and gas management and found many material weaknesses. Specifically: In September 2008, we reported that (1) neither BLM nor MMS was meeting statutory obligations or agency targets for conducting inspections of certain leases and metering equipment used to measure oil and gas production. (2) MMS's royalty IT system and processes lacked several important capabilities, including monitoring adjustments made by companies to their self-reported production and royalty data and identifying missing royalty reports in a timely manner. (3) MMS's use of compliance reviews, which are more limited in scope than audits, led to an inconsistent use of third-party documents to verify that self-reported industry data are correct.
(4) MMS's annual reports to the Congress did not fully describe the performance of the royalty-in-kind program and, in some instances, may have overstated the benefits of the program. (5) The federal government receives one of the lowest shares of revenue for oil and gas resources compared with other countries, and Interior has not systematically re-examined how the federal government is compensated for extraction of oil and gas in over 25 years. In October 2008, we reported that some states do more than Interior to structure leases to reflect the likelihood of oil and gas production, which may encourage faster development. In June 2005, we reported that BLM has encountered persistent problems in hiring and retaining sufficient and adequately trained staff to keep up with workload as a result of rapid increases in oil and gas operations on federal lands and poor workforce planning. In recent reports, GAO has made a number of recommendations to improve the accuracy of royalty measurement and collections and the overall management of federal oil and gas resources. Interior generally agreed with our recommendations and is trying to implement them, but implementation is ongoing and it is too early to assess the effectiveness of these efforts.
Background

Because of its abundance and historically low cost, coal is an important fuel source in the United States, accounting for about 20 percent of total energy use in 2011. Nearly all coal consumed in the United States is produced domestically, and coal represents about 29 percent of all domestically produced energy. U.S. coal production has generally increased since 1960 and reached its highest level in 2008. Advancements in mining technology and a shift toward greater use of surface mines rather than underground mines have boosted coal's overall productivity and enabled production to increase even as the number of workers decreased. In 2011, half as many workers produced 24 percent more coal than in 1985, as shown in figure 1. Data from the Bureau of Labor Statistics indicate that about 86,200 people were employed in coal mining in the United States in 2011. In the United States, coal is primarily used to generate electricity—over 90 percent of coal was used to generate about 42 percent of electricity in 2011. The amount of electricity generated using coal has generally increased since the 1960s but decreased recently due to a combination of a decline in overall electricity demand, shifts in the relative prices of fuels, and other reasons. (See fig. 2.) Meanwhile, coal's share of total electricity generation has fluctuated over time. EIA has stated that several factors, including low oil prices during the late 1960s—which served to increase electricity generation from oil—and the oil price shocks of the 1970s, have influenced the mix of fuel sources used to produce electricity. Two broad trends—recent environmental regulations and changing market conditions—are affecting power companies' decisions related to coal-fueled electricity generating units.
Regarding environmental regulations, as we have previously reported, since June 2010, EPA has proposed or finalized several regulations that would reduce certain adverse health or environmental impacts, including impacts associated with coal-fueled electricity generating units. These regulations have potentially significant implications for public health and the environment. One of the most significant regulations in terms of EPA's estimated benefits and costs, EPA's Mercury and Air Toxics Standards, establishes emissions limitations on mercury and other toxic pollutants. Mercury is a toxic element, and human intake of mercury, for example, through consumption of fish that ingested the mercury, has been linked to a wide range of health ailments. In particular, mercury can harm fetuses and cause neurological disorders in children, resulting in, among other things, impaired cognitive abilities. Other toxic metals emitted from power plants, such as arsenic, chromium, and nickel, can cause cancer. EPA estimates that its finalized regulation would reduce mercury emissions from coal-fueled electricity generating units by 75 percent, as well as reduce SO2 emissions. In response to these regulations, power companies might retrofit generating units with controls to reduce pollutants and, when it is not economic to retrofit, may retire some generating units. Regarding broader market conditions, important market drivers have been weighing on the viability of coal-fueled electricity generating units. Key among these has been the recent decrease in the price of natural gas, which has made it more attractive for power companies to build new gas-fueled electricity generating units and to utilize existing units more. In addition, slow expected growth in demand for electricity in some areas has decreased the need for new generating units. Power companies may weigh the costs of any needed investments compared with the benefits of continuing to generate electricity at a particular unit.
When the costs outweigh the benefits, a power company may decide to retire a unit rather than continue to operate it or install new pollution control equipment. The majority of coal produced in the United States is used domestically, though exports represent a small but recently growing fraction of U.S. coal production. In 2010, the United States exported 82 million tons of coal, which accounted for 8 percent of total production. As shown in figure 3, coal exports to European and Asian markets represented 76 percent of total U.S. coal exports in 2011. In 2011, total coal exports were up 31 percent compared with 2010, reaching 107 million tons, due largely to rising exports to Europe and Asia. This was the highest level of exports since 1991. In 2011, 35 percent of U.S. coal exports were of the types of coal typically used to produce electricity; the remainder were metallurgical coals used in industrial processes, such as steelmaking. To better understand the potential future of the coal and electricity industries, the federal government, private companies, and others use models to project future industry conditions, including the future use of coal. For example, EIA, IEA, and IHS Global Insight produce long-term projections of electricity generation and generation from coal. Because the future depends on a multitude of factors that are difficult to predict, EIA assesses various scenarios with different assumptions about future conditions to better understand the range of potential future outcomes. For example, EIA's primary scenario, called its "reference" scenario, is a business-as-usual estimate based on existing policies, known technology, and current technological and demographic trends. Additional scenarios make different assumptions about fuel prices, economic conditions, and government policies, among other things.
Some of these scenarios are especially relevant to the question of coal's future because they address factors currently affecting the industry, such as the prices of coal and natural gas—a fuel that competes with coal—and possible future policies to address climate change. Appendix II presents further information about the major assumptions behind these forecasts and scenarios.

Retirements, Retrofits, and New Construction May Result in a Smaller but Cleaner Coal-Fueled Electricity Generating Fleet

The nation's fleet of coal-fueled electricity generating units may have less total generating capacity in the future, and the fleet may be capable of emitting lower levels of pollutants, according to available information. These changes will be driven by industry plans to retire a significant number of units, install pollution control equipment on others, and build a few new coal-fueled units that may emit lower levels of pollutants than the current fleet's average emissions.

Power Companies Are Planning to Retire a Significant Number of Older, Smaller, More Polluting Units

According to forecasts we reviewed, power companies may retire a significant number of coal-fueled units in the future. In its reference scenario reflecting current policies, EIA projects that power companies may retire 49,000 MW of coal-fueled capacity from 2011 through 2035 (i.e., 15 percent of coal-fueled capacity in 2011). IHS Global Insight projects that power companies may retire 76,476 MW of capacity from 2011 through 2035 (i.e., 24 percent of coal-fueled capacity in 2011). Our statistical analysis of Ventyx data on announced retirement plans indicates that, among other things, companies are planning to retire units that are older, smaller, and more polluting. To assess the types of units that may be retired, we analyzed data on current power company plans to retire coal-fueled units.
According to Ventyx data, power companies have already reported plans to retire 174 coal-fueled units with a total of 30,447 MW of net summer capacity through 2020—which accounted for 10 percent of coal-fueled capacity in 2011. As we have previously reported, this would be significantly more retirements than have occurred in the past: almost twice as much coal-fueled capacity as was retired in the 22 years from 1990 through April 2012. Based on our statistical analysis of these plans, power companies are more likely to plan to retire units that are older, smaller, and more polluting. (Appendix I provides further information on our statistical analysis, which included examining several other characteristics that may affect plans to retire units, such as (1) whether power companies are traditionally regulated or operate in restructured markets and (2) a unit's cost of generating electricity relative to regional prices.) Older. Power companies' plans indicate they are more likely to retire older coal-fueled electricity generating units than newer units. Today's fleet of operating coal-fueled units was built from 1943 through 2012, with the bulk of the capacity built in the 1970s and early 1980s. As shown in figure 4, units that power companies plan to retire are generally older, averaging 54 years old, compared with units with no retirement plans, which average 39 years old. Some stakeholders we interviewed said that power companies are more likely to retire older units because these units may be reaching the end of their useful lives, can be less efficient at converting coal to electricity, and can be more expensive than newer units to retrofit, maintain, and operate. Smaller. The smaller a unit is, the more likely a power company is to be planning to retire it. (See fig. 5.)
Size can be important when assessing the economics of additional investments needed to continue to operate coal-fueled units, as smaller units can be more expensive to retrofit, maintain, and operate on a per-MW basis. For example, some power companies may choose to install flue gas desulfurization units—known as scrubbers—to control SO2 and other air emissions. According to an EPA report, a typical 100 MW coal-fueled unit could incur capital costs 66 to 74 percent higher per MW to install a scrubber than a 700 MW unit. In addition, smaller generating units are generally less fuel-efficient than larger units. Units that are planned for retirement average 175 MW of capacity, compared with units that are not planned for retirement, which average 351 MW of capacity. Figure 5 shows the number of coal-fueled units by capacity in MW.

Many Units May Be Retrofitted with Pollution Control Equipment

As we reported in July 2012, power companies may retrofit many coal-fueled electricity generating units with new or upgraded pollution control equipment in response to new environmental regulatory requirements. Though the requirements and deadlines these regulations may establish for generating units are somewhat uncertain at this time, EPA's analyses and two other studies we reviewed in our prior report suggest that one-third to three-quarters of all coal-fueled capacity could be retrofitted or upgraded with some combination of pollution control equipment, including scrubbers and other technologies to reduce SO2, mercury, and other emissions. Once retrofitted with this pollution control equipment, the coal-fueled fleet would be capable of generating electricity while emitting much lower levels of pollution. For example, EPA projects that mercury emissions from coal-fueled electricity generating units will decrease by 75 percent as a result of its new regulatory requirements.
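The economies of scale behind the size effect noted above can be illustrated with a rough calculation. The per-MW baseline cost below is a hypothetical placeholder, not a figure from this report; only the 66 to 74 percent per-MW premium for a 100 MW unit relative to a 700 MW unit comes from the EPA comparison cited above:

```python
# Rough illustration of scrubber retrofit economies of scale.
# BASE_COST_PER_MW is a hypothetical placeholder; the 66-74 percent
# per-MW premium for a 100 MW unit is from the EPA comparison cited
# in the report.

BASE_COST_PER_MW = 500_000  # hypothetical $/MW for a 700 MW unit

for premium in (0.66, 0.74):
    small_per_mw = BASE_COST_PER_MW * (1 + premium)  # per-MW cost, 100 MW unit
    small_total = small_per_mw * 100                 # total cost, 100 MW unit
    large_total = BASE_COST_PER_MW * 700             # total cost, 700 MW unit
    print(f"premium {premium:.0%}: 100 MW unit pays ${small_per_mw:,.0f}/MW "
          f"(total ${small_total:,.0f}) vs ${BASE_COST_PER_MW:,.0f}/MW "
          f"(total ${large_total:,.0f}) for the 700 MW unit")
```

The point of the sketch is that even though the small unit's total outlay is lower, each MW of its capacity carries a substantially higher retrofit cost, which weakens the economic case for keeping small units running.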
Nevertheless, even the cleanest running coal-fueled unit may still be more polluting than generating units that use other fuel sources. For example, the 10 least-emitting coal-fueled units emitted over 10 times as much SO2 per million Btu as combined cycle units, which averaged 0.0006 pounds per million Btu. Electricity generating units that rely on solar and wind sources produce no such emissions. Some New Generating Units May Be Built and Would Be Larger, Cleaner, and More Efficient Than the Fleet Overall Available information suggests that industry intends to build some new coal-fueled electricity generating units. According to Ventyx data, power companies have plans to build 42 new coal-fueled electricity generating units with 21,634 MW of capacity in various stages of planning or development (see fig. 7). However, as we have previously reported, developers generally have more planned projects than they complete. The total capacity of coal-fueled electricity generating units in the United States may decline in the future as less capacity is expected to be built than is expected to retire. As discussed, 49,000 to 76,476 MW of coal-fueled capacity is projected to retire by 2035 according to EIA and IHS Global Insight, respectively, and they project that 11,000 MW and 22,134 MW of new coal-fueled capacity will be added by 2035, respectively. EIA officials told us that new coal-fueled capacity in their projections is primarily expected in the next few years and represents units that are already planned or under construction. As less capacity is expected to be built than is expected to retire, total coal-fueled capacity is expected to decline in the future, as shown in figure 8. Coal’s share of total electricity generating capacity was about 30 percent in 2011. In EIA’s reference scenario, coal’s share of capacity declines to 25 percent in 2035 as retiring coal-fueled units are not fully replaced and as 176,100 MW of other generating capacity is added in the future.
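The net decline implied by these projections is straightforward arithmetic; a brief Python sketch using the retirement and addition figures quoted above:

```python
def net_capacity_change_mw(added_mw, retired_mw):
    """Net change in coal-fueled capacity: additions minus retirements."""
    return added_mw - retired_mw

# Projections through 2035 quoted above, in MW.
eia_net = net_capacity_change_mw(11_000, 49_000)    # EIA: -38,000 MW
ihs_net = net_capacity_change_mw(22_134, 76_476)    # IHS Global Insight: -54,342 MW
```

Both projections are net declines, consistent with the expectation that total coal-fueled capacity will fall.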
Any coal-fueled units that are built in the future are likely to be larger, less polluting, and more fuel-efficient than the average of the coal-fueled fleet overall. Units that power companies are currently planning to build average 515 MW of net summer capacity, while the operating fleet averages 319 MW. Additionally, new units must install technologies to control emissions and so are likely to emit lower levels of pollutants, and thus be cleaner, than the fleet overall. For example, generating units built after August 7, 1977, have had to obtain preconstruction permits that establish air emissions limits and require the use of certain emissions control technologies, such as scrubbers, to reduce emissions of SO2. In addition, some stakeholders we interviewed said that new coal-fueled units were likely to incorporate designs that are able to convert fuel to electricity more efficiently. Coal Likely to Remain a Key Fuel Source, but Future Use May Be Affected by Fuel Prices, Environmental Regulations, and Other Factors Coal is likely to continue to be a key fuel source for electricity generation in the United States, but its share as a source of electricity is expected to decline, and the future use of coal to generate electricity in the United States may be affected by several key factors, including the price of natural gas and other competing fuels, environmental regulations, and the demand for electricity, among others. In addition, several stakeholders we interviewed said that coal may increasingly be exported for use in other nations, though the extent of future exports is uncertain. Coal Likely to Continue to Be a Key Source of Electricity in the Future, though Its Share Is Generally Expected to Decline in the United States According to stakeholders we interviewed and projections by EIA, IEA, and IHS Global Insight, coal is likely to continue to be a key fuel source for U.S.
electricity generation, but its share as a source of electricity is generally expected to decline in the future. Some stakeholders told us that, in the future, electricity generation from coal is likely to be displaced by generation from other fuel sources, particularly natural gas, but they still expect coal’s contribution to electricity generation to be significant. Furthermore, in its reference scenario, EIA estimates that coal will represent 38 percent of U.S. electricity generation in 2035 under current policies––down from 42 percent in 2011. The amount of electricity generated using coal is expected to remain relatively constant over this same period under EIA’s reference scenario, growing by 0.1 percent annually. However, the amount of electricity generated using some other fuel sources, for example, natural gas and renewables, will increase at higher annual rates—1.4 percent and 2.3 percent, respectively—diminishing coal’s total share of electricity generation. Agency Comments and Our Evaluation We met with EIA officials to discuss an early draft of this report and incorporated technical suggestions where appropriate. We also provided a draft of this report to EIA and EPA for formal comment. EIA and EPA did not provide written comments for inclusion in this report. EPA’s Office of Air and Radiation did provide technical comments and stated that the report contained a very good description of many of the changes going on in coal and electricity markets that are affecting the use of coal to generate electricity. In its technical comments, EPA suggested that the draft’s emphasis on environmental regulations, particularly on the Highlights page, was misleading and not consistent with the rest of the report, which has a fuller discussion of many factors affecting the future use of coal. EPA stated that market changes, which we discuss in the report, would have significant impacts even in the absence of EPA’s regulations.
We do not agree that the report was misleading, but given that the Highlights page may be read without the benefit of the fuller discussion found in the report, we moved language from the body of the report to the Highlights page about other factors affecting the use of coal. EPA provided other technical comments, which we incorporated where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrators of EIA and EPA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Analysis of Characteristics of Coal-Fueled Generating Units That Power Companies Plan to Retire This appendix describes our statistical analysis of characteristics of coal-fueled electricity generating units, such as age and size, that are likely to affect power companies’ plans to retire certain units. We use this analysis to estimate the number and generating capacity of other coal-fueled units that power companies are likely to consider retiring. Methodology To test the hypothesis that power companies are likely to retire older, smaller, and more polluting coal units by 2020, we used logistic regression analysis. We analyzed industry data on all coal-fueled units owned by power companies that have already announced plans to retire one or more of these units.
Using unit- and company-level data, primarily from company-reported databases, we developed a model depicting the relationship between companies’ announced plans to retire a unit and that unit’s characteristics—age, size, emissions rates of sulfur dioxide (SO2) and nitrogen oxides (NOx), and the regulatory status of the power company that owns the unit, specifically whether the company is traditionally regulated or operates in a restructured market. To estimate the number and generating capacity of additional units likely to be retired, we applied our model to a dataset consisting of coal-fueled units owned by power companies that have not announced any retirements. Model of Plans to Retire Coal-Fueled Generating Units In developing our model of power companies’ plans to retire coal-fueled units, we relied on economic theory, as well as discussions with stakeholders and our review of studies. Stakeholders included representatives from power companies, a coal company, industry associations, and nongovernmental organizations, and officials from federal and state agencies. Stakeholders and studies mentioned the following characteristics as likely unit-level determinants of power companies’ plans to retire a coal-fueled unit or keep it in operation: age; generating capacity; fuel efficiency (i.e., how efficiently a unit converts fuel to electricity); operating cost and profitability; pollution emission rates and whether a unit already has various types of emissions control equipment; and regulatory status. As a general matter, the larger, newer, more efficient, and less polluting a generating unit is, the more likely it is that a power company may want to keep it in service and invest in retrofits that may be needed for it to comply with environmental laws or regulations.
For example, if a large, new generating unit that a power company uses to meet a significant portion of customer demand is not in compliance with environmental regulations, retiring it would likely require replacing it with another unit of similar size. Doing so may be very costly, and retrofitting it with the requisite pollution control equipment may be a more economical choice. It is also reasonable to expect regulatory status to have some impact on power companies’ retirement plans because such plans could involve significant investments. For companies that are traditionally regulated, state public utility commissions review power companies’ plans for major investments in pollution control equipment in the case of a retrofit, or in replacement power generation capacity if it is needed after a unit is retired. Decisions by power companies in restructured markets are not subject to the same state public utility oversight. Furthermore, once state public utility commissions approve a traditionally regulated company’s plan to invest in major retrofits or replacement units, they allow it to charge rates to recover its investment costs. Companies operating in restructured markets have no such cost-recovery provisions, so their investments in retrofits or replacement units may be riskier. Our model does not include all of the characteristics that stakeholders and studies identified as ones power companies may consider in deciding which coal-fueled units to retire. First, economic theory and our analysis of data on coal-fueled units indicate that there are interrelationships among some of these characteristics; for example, newer, larger electric generating units tend to be more fuel efficient, and this fuel efficiency contributes to lower operating costs. Hence, including all characteristics would be redundant and weaken the statistical results. Below, we discuss some specifications of the model with alternative sets of variables.
Second, there are likely other characteristics that may influence power companies’ plans to retire generating units that we were unable to include in our statistical analysis. We discuss limitations of our model below. Data Used We used U.S. electricity data at the level of individual coal-fueled generating units that we obtained under contract from Ventyx, a company that maintains a proprietary database containing consolidated energy and emissions data from the Energy Information Administration (EIA),size, measured in megawatts (MW) of generating capacity; fuel efficiency; , and carbon dioxide; types of installed control equipment or whether owners plan to install control equipment in the future; various cost measures, including generating unit marginal cost; and regulatory status: equals 1 if the power company that owns the unit was traditionally regulated or 0 if the company was operating in a restructured market. We also used regional day-ahead market prices from the IntercontinentalExchange (ICE) company, and spot market prices from the Federal Energy Regulatory Commission (FERC) to calculate an average wholesale market price for the regional markets associated with each unit in our dataset. For each market region, we calculated a simple average of daily prices for the year 2011 from daily ICE price data. For some of the regions, however, there were no price data available from ICE, so we used the 2011 average spot market price from FERC. While our model does not include all the aforementioned characteristics, we used most of these characteristics in alternative specifications of the model and discuss two of these specifications below. Our complete dataset includes 959 coal-fueled units. This dataset includes only units that have a net summer generating capacity greater than 25 MW, making them subject to EPA emissions monitoring and reporting requirements. 
We excluded units that have not reported any electricity generation or SO2 or NOx emissions over the past 5 years. Of the total 959 units, 482 units belong to power companies that have announced plans for retiring at least one coal-fueled unit. Results We used logistic regression (logit) analysis to analyze the characteristics that are affecting power companies’ plans to retire coal-fueled electricity generating units. Regression analysis in general estimates the effect of a change in an independent variable on the outcome (dependent) variable, while holding other variables constant. Logit is a type of regression analysis for situations in which the dependent variable is a categorical variable—one that can take on a limited number of values—instead of a continuous, quantitative variable. In this case, the categorical variable is binary, which means that the choice is between only two outcomes. We estimated the logit regression equation for the subgroup of 482 coal-fueled generating units belonging to power companies that have announced plans to retire at least one coal-fueled unit. The dependent variable in our model is whether to retire or not retire a coal unit, and the independent variables are the (1) age of unit; (2) net summer capacity as a measure of unit size; (3) unit’s SO2 emissions per unit of heat input from the fuel used in the unit’s electricity generation, measured in millions of British thermal units (Btu); (4) unit’s NOx emissions rate in lb/million Btu; and (5) whether the power company that owns the unit is traditionally regulated or operates in a restructured market. Table 2 shows our resulting estimated equation and relevant statistics. These results generally confirm that smaller, older, and more polluting units are more likely candidates for retirement. In the table above, the second column gives the estimated value of the coefficient, which describes the relationship between the independent variables and the likelihood of retirement.
The remaining columns give the standard error and the significance level. For example, the coefficient on net summer capacity is negative, which means that an increase in capacity decreases the probability that a unit is planned for retirement. Furthermore, as shown in table 2, the estimated coefficient is significant at the 6 percent level. An estimated coefficient is typically considered statistically significant if the significance level is less than 10 percent and very significant if it is less than 5 percent. Similarly, the coefficient on unit age is positive, which means that an older unit is more likely to be retired, and this coefficient estimate is significant at the 1 percent level. The coefficients on SO2 and NOx emissions are also positive and significant at the 1 percent level. Using the resulting logit regression equation, we analyzed “marginal effects” of changes in each of the independent variables on plans to retire an “average” unit owned by a power company in (1) a traditionally regulated market and (2) a restructured market; the “average” unit, for this purpose, is one with median values for age, size/net summer capacity, and SO2 and NOx emissions rates, as shown in tables 3 and 4. For example, a 10 percent increase in the capacity of an average unit owned by a power company in a restructured market, from 193 to 212 MW, would decrease the probability of that unit’s retirement by about 2 percent, all other variables being held constant. For a unit owned by a power company in a traditionally regulated market, the same 10 percent increase would decrease the probability of retirement by about 1 percent. Note that the median values for units owned by power companies operating in traditionally regulated and restructured markets are not the same and that a 10 percent increase is therefore different.
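The logit probability and marginal-effect calculations described above can be sketched in a few lines of Python. The coefficient value and the zero baseline for the other terms of the linear index are hypothetical placeholders, not the estimates reported in tables 2 through 4; only the 193 MW median capacity for restructured-market units comes from the text:

```python
import math

def p_retire(linear_index):
    """Logit link: P(retire) = 1 / (1 + exp(-xb))."""
    return 1.0 / (1.0 + math.exp(-linear_index))

# Hypothetical coefficient on net summer capacity (negative, as in table 2),
# with a zero baseline assumed for the remaining terms of the linear index.
b_capacity = -0.004
median_mw = 193                                     # median capacity, restructured-market units

base = p_retire(b_capacity * median_mw)
larger = p_retire(b_capacity * median_mw * 1.10)    # 10 percent capacity increase
marginal_effect = larger - base                     # negative: retirement becomes less likely
```

Because the logit curve is nonlinear, the marginal effect of the same 10 percent increase differs depending on where the unit sits on the curve, which is why the report computes effects at the median values separately for regulated and restructured markets.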
Analysis Indicates Units Power Companies Likely to Consider Retiring The next step in our analysis was to use the resulting logit regression equation to estimate the number and generating capacity of other coal-fueled units that companies are likely to consider retiring among units belonging to companies that have not, as of yet, announced plans to retire coal-fueled units. We also estimated the generation associated with these potential retirements in megawatt-hours (MWh). We assume that some or all of these companies are likely to retire coal-fueled units, but that they either have not decided which ones, or simply have not publicly announced their plans. We further assume that these companies have based or will base their decisions on the same characteristics as the companies that have already made announcements. Table 5 shows our analysis of units that power companies may consider for retirement by 2020. As shown in table 5, for the group of coal-fueled units whose owners have not reported any coal-fueled unit retirements, our analysis indicates that from 90 to 138 units may be considered for retirement by 2020. This range represents the 95 percent confidence interval around our point estimate of 114 units. In other words, our model indicates that there is a 95 percent probability that the actual number of units that will retire is within this range. These 90 to 138 units account for 15,700 to 25,200 MW of capacity and 91 to 151 million MWh of electricity generation. If we add these units to those that power companies have announced for retirement, the total of coal-fueled retirements could range from 264 to 312 units by 2020, amounting to 46,100 to 55,600 MW of capacity and average annual generation of 241 to 301 million MWh. In percentage terms, this would be 15 to 18 percent of the capacity and 13 to 16 percent of the generation of the current coal-fueled fleet of generating units.
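Applying a fitted logit model to units whose owners have announced nothing reduces, at its simplest, to summing predicted probabilities. The sketch below uses made-up probabilities for illustration; the report’s actual point estimate of 114 units comes from applying the fitted equation to the full dataset:

```python
def expected_retirements(predicted_probabilities):
    """Expected number of retirements among units with no announced plans:
    the sum of each unit's predicted retirement probability."""
    return sum(predicted_probabilities)

# Hypothetical predicted retirement probabilities for four units.
probs = [0.9, 0.5, 0.2, 0.05]
expected = expected_retirements(probs)   # about 1.65 expected retirements
```

Weighting each unit’s probability by its capacity or generation, rather than counting units, would give the analogous MW and MWh estimates.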
Limitations and Alternative Model Specifications This section discusses the limitations of our model and alternative model specifications that we tested. Limitations A major limitation of our model is that we used a nonrandom sample of the entire population of coal-fueled units to estimate the relationship between the characteristics of coal-fueled units and power companies’ plans to retire a unit. Our sample consisted of companies that announced plans to retire at least one unit but was not a random sample. It is possible that the companies that announced planned retirements and those that did not so announce differ in systematic ways that we do not observe from the data. Such differences could result in omitted variable bias. Another important limitation of our model is that we did not include all factors that contribute to power companies’ decisions to retire coal-fueled units. Apart from unit-level considerations, major factors that affect a power company’s decision to retire a coal-fueled unit include fuel costs, environmental regulations, regional and local market considerations (e.g., expected future electricity demand and supply conditions, and transmission constraints), and technological developments in electricity generation and pollution control. For example, we did not take into account that planned unit retirements might make otherwise marginal units in some regions more valuable and less likely to retire. Companies that own coal-fueled units may have different expectations regarding these factors, which we did not consider in our analysis. Effectively, therefore, we assumed that power companies have very similar expectations regarding these factors. These limitations could mean that our model does not accurately or fully reflect power companies’ unit retirement decisions. This would also mean that our estimates of how many unannounced units will retire may be inaccurate.
For most of the limitations, the direction of bias in our model—the extent to which it may over- or under-estimate the likelihood of a unit retiring—is unclear. Addressing these limitations was beyond the scope of our review. Alternative Specifications To check the robustness of our model, we tested different specifications; that is, we ran logistic regressions using different sets of independent variables. For example, we tried specifications that included a measure of a unit’s fuel efficiency, and another representing whether a unit is planning to install pollution control equipment. We also tried a version with unit average capacity factors in recent years, a measure of how intensively a unit is utilized. Based on our results, none of these variables significantly improved the model. Below, we discuss two other alternative specifications in more detail. In one alternative specification, we used clustered standard errors. Our model assumes that each individual coal-fueled unit has a unique error term that is independent of every other unit. In this specification, we allow for the possibility that units owned by the same power companies may be related in unobserved ways and, therefore, the error terms may be correlated. As shown in table 6, the estimated coefficients in this alternative specification are very similar to our model, but the standard errors are generally bigger, and the estimated coefficients are generally less statistically significant. This is especially true for net summer capacity, which is no longer statistically significant at the commonly accepted 10 percent level. In a second alternative specification, we used adjusted marginal cost as a proxy for the profitability of a unit. Based on economic logic and what we heard from stakeholders, we expected some indicator of the cost and profitability of electricity generation to contribute significantly to the retirement decision. 
Table 7 shows a version with marginal cost adjusted for regional wholesale prices and an interaction term with marginal cost and regulatory status. We adjusted marginal cost by dividing it by the regional wholesale price to account for the fact that units are more or less valuable depending on regional wholesale electricity prices. The interaction term allows us to effectively estimate two coefficients for adjusted marginal cost, one for power companies in traditionally regulated markets, and one for power companies in restructured markets. We included an interaction term to account for the possibility that power companies in traditionally regulated and restructured markets view costs differently. Indeed, as shown in table 7, the estimated adjusted marginal cost coefficients differ—for power companies in restructured markets, the adjusted marginal cost coefficient is about 5.8, while the estimated coefficient for power companies in traditionally regulated markets is the adjusted marginal cost coefficient plus the interaction term (or 5.8 plus -8.2 = -2.4). These results suggest that while higher adjusted marginal costs increase the probability of retirement of units owned by power companies in restructured markets, they decrease the probability for units owned by traditionally regulated power companies. The interpretation of these results is unclear. Regarding the costs of producing electricity, our findings differed for companies in restructured markets and companies that are traditionally regulated. Specifically, our results suggest that companies in restructured markets are more likely to retire units with higher adjusted marginal costs. In contrast, our results suggest that companies operating in regulated markets are less likely to retire units with higher adjusted marginal costs. A number of characteristics, not considered in our model, could provide alternative explanations for this difference.
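The combined-coefficient arithmetic from table 7 can be written out explicitly; the 5.8 base coefficient and -8.2 interaction term are the estimates quoted above:

```python
def adjusted_cost_coefficient(base_coef, interaction_coef, traditionally_regulated):
    """Effective adjusted-marginal-cost coefficient: restructured-market
    companies get the base coefficient; traditionally regulated companies
    get the base coefficient plus the interaction term."""
    return base_coef + (interaction_coef if traditionally_regulated else 0.0)

restructured = adjusted_cost_coefficient(5.8, -8.2, traditionally_regulated=False)  # 5.8
regulated = adjusted_cost_coefficient(5.8, -8.2, traditionally_regulated=True)      # about -2.4
```

The opposite signs reproduce the finding above: higher adjusted marginal costs raise the retirement probability for restructured-market units but lower it for traditionally regulated ones.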
For example, it could be the case that the units in our sample have unique characteristics. One such potential case could be that units owned by power companies in traditionally regulated markets may be located in areas where concerns about the reliability of the electricity system are significant, and where retrofitting an older generating unit is less costly than retiring it. Similarly, it could be that our sample contains a number of units located in areas with lower-cost alternative suppliers or where prices are low—diminishing the attractiveness of even a relatively low-cost unit. Appendix II: Description of Selected Scenarios and Forecasts Table 8 describes key scenarios and assumptions in the EIA, IEA, and IHS Global Insight forecasts discussed in this report. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Jon Ludwigson (Assistant Director), Mike Armes, Patrick Dudley, Philip Farah, Quindi Franco, Cindy Gilbert, Paige Gilbreath, Alison O’Neill, Kendal Robinson, Jeanette Soares, and Kiki Theodoropolous made key contributions to this report. Related GAO Products EPA Regulations and Electricity: Better Monitoring by Agencies Could Strengthen Efforts to Address Potential Challenges. GAO-12-635. Washington, D.C.: July 17, 2012. Air Emissions and Electricity Generation at U.S. Power Plants. GAO-12-545R. Washington, D.C.: April 18, 2012. Coal Power Plants: Opportunities Exist for DOE to Provide Better Information on the Maturity of Key Technologies to Reduce Carbon Dioxide Emissions. GAO-10-675. Washington, D.C.: June 16, 2010. Clean Coal: DOE’s Decision to Restructure FutureGen Should Be Based on a Comprehensive Analysis of Costs, Benefits, and Risks. GAO-09-248. Washington, D.C.: February 13, 2009. Climate Change: Federal Actions Will Greatly Affect the Viability of Carbon Capture and Storage As a Key Mitigation Option. GAO-08-1080. Washington, D.C.: September 30, 2008.
Restructured Electricity Markets: Three States’ Experiences in Adding Generating Capacity. GAO-02-427. Washington, D.C.: May 24, 2002.
Coal is a key domestic fuel source and an important contributor to the U.S. economy. Most coal produced in the United States is used to generate electricity. In 2011, 1,387 coal-fueled electricity generating units produced about 42 percent of the nation's electricity. After decades of growth, U.S. coal production and consumption have fallen, primarily due to declines in the use of coal to generate electricity. According to the Environmental Protection Agency (EPA), using coal to generate electricity is associated with health and environmental concerns such as emissions of sulfur dioxide, a pollutant linked to respiratory illnesses, and carbon dioxide, a greenhouse gas linked to climate change. In response to recent environmental regulations and changing market conditions, such as the recent decrease in the price of natural gas, power companies may retire some units, which could affect the coal fleet's generating capacity--the ability to generate electricity--and the amount of electricity generated from coal. Power companies may also retrofit some units by installing controls to reduce pollutants. GAO was asked to examine (1) how the fleet of coal-fueled electricity generating units may change in the future in terms of its generating capacity and other aspects and (2) the future use of coal to generate electricity in the United States and key factors that could affect it. GAO conducted a statistical analysis of plans for retiring coal-fueled units, interviewed stakeholders, and reviewed information on industry plans and long-term forecasts by EIA and others. GAO is not making any recommendations in this report. Retirements of older units, retrofits of existing units with pollution controls, and the construction of some new coal-fueled units are expected to significantly change the coal-fueled electricity generating fleet, making it capable of emitting lower levels of pollutants than the current fleet but reducing its future electricity generating capacity. 
Two broad trends are affecting power companies' decisions related to coal-fueled generating units--recent environmental regulations and changing market conditions, such as the recent decrease in the price of natural gas. Regarding retirements, forecasts GAO reviewed based on current policies project that power companies may retire 15 to 24 percent of coal-fueled generating capacity by 2035--an amount consistent with GAO's analysis. GAO's statistical analysis, examining data on power companies that have announced plans to retire coal-fueled units, found that these power companies are more likely to retire units that are older, smaller, and more polluting. For example, the units companies plan to retire emitted an average of twice as much sulfur dioxide per unit of fuel used in 2011 as units that companies do not plan to retire. Based on the characteristics of the units companies plan to retire, GAO estimated additional capacity that may retire. In total, GAO identified 15 to 18 percent of coal-fueled capacity that power companies either plan to retire or that GAO estimated may retire--an amount consistent with the forecasts GAO reviewed. Regarding retrofits, the coal-fueled generating fleet may also become less polluting in the future as power companies install controls on many remaining units. Regarding new coal-fueled units, these are likely to be less polluting as they must incorporate advanced technologies to reduce emissions of regulated pollutants. Coal-fueled capacity may decline in the future as less capacity is expected to be built than is expected to retire. According to stakeholders and three long-term forecasts GAO reviewed, coal is generally expected to remain a key fuel source for U.S. electricity generation in the future, but coal's share as a source of electricity may continue to decline. 
For example, in its forecast based on current policies, the Energy Information Administration (EIA) forecasts that the amount of electricity generated using coal is expected to remain relatively constant through 2035, but it forecasts that the share of coal-fueled electricity generation will decline from 42 percent in 2011 to 38 percent in 2035. Available information suggests that the future U.S. use of coal may be determined by several key factors, including the price of natural gas and environmental regulations. For example, available information suggests that the price of coal compared with other fuel sources will influence how economically attractive it is to use coal to generate electricity. EIA assessed several scenarios of future fuel prices and forecasts that coal's share of U.S. electricity generation will fall to 30 percent in 2035 if natural gas prices are low or 40 percent if natural gas prices are high. In addition, some stakeholders told GAO that the future use of coal could be significantly affected if existing environmental regulations become more stringent or if additional environmental regulations are issued. For example, EIA forecasts that two hypothetical future policies that reduce carbon dioxide emissions from the electricity sector by 46 percent and 76 percent would result in coal's share of U.S. electricity generation falling to 16 and 4 percent in 2035, respectively. EPA provided technical comments that were incorporated as appropriate.
Background

Overview of Relative Values and Their Relationship to Medicare Payment Rates for Physicians’ Services

CMS changed the way it paid for Medicare physicians’ services starting in 1992 when it began transitioning from payment rates based on customary charges to payment rates based on the relative resources needed to provide each service. As part of this transition to a new relative value scale system, three types of relative values were defined—one for relative levels of physician work, one for PE, and one for malpractice (MP) expense—and CMS subsequently transitioned each type of relative value from the existing charge-based system to new resource-based relative values. In response to this transition, the AMA created the RUC in 1991 to provide recommendations to CMS for it to consider when establishing resource-based relative values. The RUC currently has 31 members, 21 of whom represent specialty societies with permanent seats on the RUC (including for cardiology, family medicine, and internal medicine) and 4 of whom represent specialty societies with rotating seats on the RUC (including primary care and other specialties not always represented, such as pediatric surgery). These members are supported by the Advisory Committee of over 100 appointed physician representatives who are responsible for coordinating with their respective specialty societies to develop relative value recommendations to present to the RUC. According to the AMA, RUC members and the Advisory Committee donate over $8 million in direct expenses each year, such as travel, meeting, and consulting costs. In addition, hundreds of physicians provide volunteer time to support the RUC’s process. Under the current relative value scale system, CMS determines the Medicare payment rate in a given year for most physicians’ services by summing a service’s three relative values—after adjusting for geographic differences in resource costs—and then multiplying the resulting sum by a conversion factor.
Work relative values are based on the estimate of two main inputs: (1) the time the physician needs to perform the service (including pre- and postservice activities, or work performed before and after the service), and (2) the intensity of the service (including the physician’s mental effort and judgment, technical skill and physical effort, and psychological stress). In 2015, work relative values ranged from 0, for services that do not have any physician work, such as the technical component of imaging services, to 108.91, for the repair of a neonate diaphragmatic hernia. PE relative values are based primarily on estimates of (1) direct PE inputs (DPEI), which reflect the clinical labor, medical equipment, and disposable supplies needed to provide a specific service as well as the amount of time for which labor is required and equipment is used, and (2) indirect PE, which generally reflects overhead expenses not associated with a specific service. In 2015, DPEI costs ranged from $0, for services that do not have any direct practice expenses, to over $14,000, for a type of angioplasty. MP relative values are based on malpractice insurance premiums of the specialties that perform the service, weighted geographically and by specialty. The geographically adjusted sum of the three relative values is then multiplied by a dollar value, called a conversion factor, which converts the service’s relative value to a payment rate; in 2015, the conversion factor was $35.80. (See fig. 1.) Thus, while relative values determine the payment rate of one service relative to another, they do not directly determine services’ Medicare payment rates. CMS establishes relative values annually, and the effect of any changes on CMS’s payment rates generally must be budget neutral.
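The payment-rate calculation described above can be sketched in a few lines. This is a simplified illustration, not CMS's actual implementation: the relative values and geographic practice cost indices in the example are hypothetical, and only the 2015 conversion factor of $35.80 comes from the text.

```python
# Illustrative sketch (not CMS's actual implementation) of how a Medicare
# payment rate is derived from a service's three relative values.
# The relative values and geographic indices below are hypothetical;
# the 2015 conversion factor of $35.80 is taken from the text.

CONVERSION_FACTOR_2015 = 35.80  # dollars per relative value unit

def payment_rate(work_rvu, pe_rvu, mp_rvu,
                 work_gpci=1.0, pe_gpci=1.0, mp_gpci=1.0,
                 conversion_factor=CONVERSION_FACTOR_2015):
    """Sum the geographically adjusted relative values, then convert to dollars."""
    adjusted_sum = (work_rvu * work_gpci
                    + pe_rvu * pe_gpci
                    + mp_rvu * mp_gpci)
    return round(adjusted_sum * conversion_factor, 2)

# Hypothetical service: 1.50 work RVUs, 0.80 PE RVUs, 0.10 MP RVUs,
# in a locality where all three geographic indices equal 1.0.
print(payment_rate(1.50, 0.80, 0.10))  # 2.40 RVUs x $35.80 = 85.92
```

Because relative values only enter the rate through this multiplication, the same service earns different dollar amounts in different years or localities even when its relative values are unchanged.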
In particular, if any changes to relative values result in changes to annual estimated expenditures of more than $20 million, CMS is required to make adjustments to ensure that overall expenditures do not increase or decrease by more than this amount. However, certain adjustments may also be made to Medicare payment rates that are not subject to the budget neutrality limitation. For example, if the annual net reduction in expenditures resulting from the revision of relative values does not meet the savings target for that year (1.0 percent of Medicare physicians’ services payments in 2016 and 0.5 percent in 2017 and 2018), adjustments to reduce overall Medicare expenditures to achieve that target are not subject to the budget neutrality limitation.

Overview of the RUC’s Process for Developing Relative Value Recommendations and How CMS Uses These Recommendations when Establishing Relative Values for Medicare Physicians’ Services

The process to develop and establish relative values involves three main steps: (1) CMS, the RUC, and AMA’s Current Procedural Terminology (CPT) Editorial Panel identify services for RUC review, (2) the RUC works with specialty societies to use surveys and other methods to develop work relative value and DPEI recommendations for CMS for identified services, and (3) CMS reviews each RUC recommendation it receives to determine whether to use it when establishing relative values for physicians’ services. (See fig. 2.) Because this process involves substantial time and effort from multiple entities, it can often take several years from when a service is initially identified for RUC review to when CMS establishes a relative value for it based on the RUC’s recommendation. (See app. I for a case study describing the current process and timeline for establishing relative values for a specific service.)
In total, for payment years 2011 through 2015, CMS reviewed 1,278 RUC work relative value recommendations for about 1,200 unique (new and existing) services.

Step 1: Services Identified for RUC Review

Each year, the CPT Editorial Panel, the RUC, and CMS each identify services for the RUC to review. The CPT Editorial Panel identifies new services and existing services that it has recently revised for RUC review and sends a list of these services to the RUC. (This list includes services identified as being in the same family as a new or revised service that the CPT Editorial Panel determines warrants concurrent review in order to help ensure relativity across the family of services.) The CPT Editorial Panel holds three meetings to decide on new or revised services for a given payment year (generally in the spring, fall, and winter preceding the payment year when the CPT change would take effect), and sends the list of services to the RUC after each meeting. The RUC identifies potentially misvalued services for RUC review by applying a set of criteria, called “screens,” to Medicare physicians’ services. Like the CPT Editorial Panel, the RUC also has three meetings for a given payment year, and during these meetings a RUC workgroup determines which screens the RUC should use; generally, the RUC submits its recommendations for potentially misvalued services to CMS within a year or two of these services being identified. CMS, too, identifies potentially misvalued services for RUC review by choosing to implement screens from among criteria identified in statute and from public nomination. CMS then publishes a proposed list of potentially misvalued services in its annual proposed rule in the Federal Register (generally in July), and finalizes the list for RUC review in the final rule (which is generally published in November).
Step 2: RUC Develops Recommendations

After the RUC and specialty societies have determined which of the identified services they will develop recommendations on for the upcoming payment year, specialty societies use RUC-approved methods to develop recommendations for the RUC on work relative values and the set of DPEI and associated times and quantities, which the RUC then considers before submitting the final recommendations to CMS. The RUC has documented instructions for specialty societies to follow when developing their proposed recommendations. To develop work relative value recommendations, specialty societies use the RUC’s work survey instrument to survey a random sample of their members about, among other things, (1) the time required to perform a service, (2) the complexity and intensity of performing a service relative to a reference service, and (3) a total work relative value. Specialty societies then finalize their recommendations by applying a concept known as magnitude estimation to evaluate the survey data to determine whether the results for a service are consistent with the relative values of related services that were recently valued, and may make a recommendation that is different from the survey results if they are not. To develop DPEI recommendations, specialty societies primarily use PE expert panels composed of members of their societies who use their clinical knowledge, along with comparisons to other services, to develop recommendations on the clinical labor, medical equipment, and disposable supplies required for a service. For some services, the recommended amounts of time for DPEI are determined, in part, from the responses provided in the work survey instrument.
Specialty societies do not make formal recommendations on other aspects of PE relative values, such as indirect PE or DPEI prices, though they may periodically provide the RUC with invoices to submit for CMS’s consideration, and the RUC may also periodically make recommendations on the overall methodology CMS uses to calculate PE relative values. The RUC recommends that specialty societies work together to develop recommendations if more than one society has an interest in the particular service identified for review. Specialty societies develop new work relative value and DPEI recommendations for new services; for revised and potentially misvalued services, specialty societies may recommend to increase, decrease, or maintain the existing values. Specialty societies submit the recommendations they develop and supporting documentation to RUC staff for discussion during one of the three RUC meetings each year. RUC members are assigned to prereview each recommendation before each meeting and provide feedback as needed; specialty societies may revise their recommendations on the basis of this feedback before presenting them to the RUC during the meeting. The RUC has documented criteria for reviewing specialty societies’ proposed recommendations, including a series of questions RUC members should use to guide their deliberations on services during RUC meetings. In general, members of the public may attend RUC meetings and observe RUC deliberations firsthand. Everyone who attends RUC meetings, including RUC members, must sign a confidentiality agreement. At these meetings, specialty societies present their work relative value recommendations to the entire RUC and their DPEI recommendations to the RUC’s PE Subcommittee, which, after its own review, makes recommendations to the entire RUC. 
During these discussions, RUC members may ask questions about the specialty societies’ proposed recommendations, such as the level of work required to perform the service, and the recommendations may be modified as a result. As part of these discussions, RUC members apply magnitude estimation to determine whether a recommendation for a service is consistent with the relative value for related services. CMS officials are invited to attend and participate in RUC meetings. CMS officials stated that they often make comments, ask questions, or remind the committees of established policy, but do not generally make suggestions regarding specialty societies’ recommendations. After the discussion period, each recommendation is voted on by RUC members. Proposed recommendations must reach a two-thirds majority vote of the RUC members to be accepted; approved recommendations are forwarded to CMS. RUC officials told us that, starting with its September 2014 meeting, the RUC sends CMS its recommendations after each RUC meeting. Prior to September 2014, CMS did not receive all of the RUC’s recommendations for a given payment year until the preceding spring. For payment years 2011 through 2015, CMS reviewed 1,278 RUC work relative value recommendations, of which 65 percent were for existing services (including potentially misvalued and revised services) and 35 percent were for new services. The number of work relative value recommendations reviewed each year varied from 187 to 337 with no consistent trend over time. Among the 833 RUC recommendations reviewed by CMS for existing services across the 5 payment years, over half of them were to maintain the current work relative value, and this was the most common type of recommendation made each year. (See fig. 3.) In instances when the RUC recommended an increase or a decrease to the current work relative value, the magnitude of the recommendation was typically large, especially among increases. 
For example, across the 5 payment years, 76 percent of the recommended increases and 65 percent of the recommended decreases were at least 10 percent of the current value, and almost a quarter of the recommended increases and 6 percent of the recommended decreases were at least 50 percent of the current value.

Step 3: CMS Reviews RUC Recommendations and Establishes Relative Values

CMS reviews and considers each of the RUC’s recommendations when valuing particular services and then publishes its relative value decisions—including whether it agrees with the RUC or decides an alternative value more accurately reflects the resources needed to provide that service—through rulemaking in the Federal Register. Because, until recently, CMS did not receive all of the RUC’s recommendations until the spring preceding the payment year, CMS did not have time to include a discussion of the RUC’s recommendations in its annual proposed rule addressing changes to the physician fee schedule, generally published each July. Instead, CMS responds to the RUC’s recommendations, referring to them as interim final values, in the final rule it publishes, generally each November preceding the payment year for which the values would go into effect. However, CMS recently revised its timeline for reviewing RUC recommendations to give stakeholders more time to respond to RUC recommendations before CMS considers them and to give notice of the possible changes to payment rates for identified services. Beginning with payment year 2017, CMS will include the results of its review of RUC recommendations in the proposed rule, thus generally eliminating the need for interim final values.
In rulemaking establishing payment values, CMS indicated it has reviewed RUC recommendations through multiple methods, including: assessing the results of surveys and other supporting data submitted by the RUC, including the methodology and data used to develop the recommendations; conducting a clinical review, which includes comparison with other physicians’ services to ensure relativity across services and to avoid anomalies, as well as review of relevant medical literature; analyzing other data sources with related information, such as claims data; and considering information provided by other stakeholders. CMS also is authorized to use other methods to determine the relative values for services for which specific data are not available. After the publication of CMS’s decisions in the final rule, the RUC and other stakeholders have 60 days to provide comments. In the subsequent year’s final rule, CMS may choose to refine (revise) the values it initially established in response to these comments or other new information or to finalize the previously published interim final values. CMS refined, on average, 11 percent of the work relative values that the agency established between 2011 and 2014. During this period there was no consistent trend in the percentage of services refined over time, with the percentage of annual refinements ranging from 5 to 21 percent.

Criteria Developed by GAO for Evaluating the RUC’s and CMS’s Current Processes

In order to evaluate the RUC’s and CMS’s current processes for developing recommendations for and establishing relative values, respectively, we reviewed (1) applicable laws and regulations; (2) goals, policies, and procedures established by the RUC and CMS; (3) federal internal control standards; and (4) relevant reports and publications on these processes.
Examples of these include legislation such as PAMA and PPACA; RUC documents describing its process for developing relative value recommendations and descriptions of CMS’s process for establishing relative values described in rulemaking; federal internal control standards pertaining to, for example, control activities and information and communications; and previous MedPAC and GAO reports. Based on our reviews of these documents, we then developed the seven criteria included in table 1, against which we evaluated the current RUC and CMS processes.

Weaknesses in the RUC’s Data and in Its Relative Value Recommendation Process Present Challenges for Ensuring Accurate Medicare Payment Rates

The RUC Process Prioritizes Review of Services It or CMS Identifies as Potentially Misvalued and Regularly Reviews Services the CPT Editorial Panel Identifies as New or Revised

The RUC identifies potentially misvalued services by applying screens it has independently developed and by identifying additional services for review in response to requests from CMS. In addition, the RUC has a process to review services regularly—the timing of which could have implications for when CMS establishes relative values for those services. To ensure the accuracy of its recommendations for CMS, the RUC takes steps during its process to mitigate any possible bias from affecting its work relative value and DPEI recommendations. Despite these steps, weaknesses in the RUC’s relative value recommendation process and in its survey data present challenges for ensuring accurate Medicare payment rates for physicians’ services. The RUC’s process is consistent with statutory criteria specified for CMS consideration in identifying services because the RUC prioritizes its reviews by identifying potentially misvalued services for review based on risk assessment. The RUC does this by applying screens it has independently developed on the basis of risk assessment criteria.
These screens are different from those used by CMS to review services, although CMS officials said that many of the RUC’s screens overlap with the screens used by CMS. For example, both CMS and the RUC have used screens to identify for review potentially misvalued services that had the fastest growth in Medicare utilization. According to the RUC, 80 percent of the potentially misvalued services it has reviewed were identified using the RUC’s screens. The RUC also identifies additional services for review in response to requests from CMS. For example, the RUC may create screens to identify services in response to CMS requests to review categories of services the agency has determined are potentially misvalued. In one instance, for payment year 2009, CMS requested that the RUC review “Harvard-valued” services (a category of services that would later be designated in statute as a screen) and prioritize reviewing those services with high utilization. Based on this request, the RUC created screens to identify Harvard-valued services—first for services performed more than 1 million times in a year, then for services performed over 100,000 times in a year, and finally for services performed over 30,000 times in a year—and subsequently submitted relative value recommendations to CMS for a subset of the identified services for payment years 2011 through 2014. Furthermore, the RUC may identify additional services for review while developing recommendations for potentially misvalued services identified by CMS screens. For example, for payment year 2011 CMS requested that the RUC develop recommendations for services CMS had identified using a screen for services with low work relative values and high utilization. The RUC then modified CMS’s screening criteria to identify additional services for review using a broader range of work relative values, and developed recommendations for the services for payment years 2012 and 2013.
Entities should also have processes in place to review services regularly, according to relevant statutory criteria and federal internal control standards, which the RUC accomplishes by annually developing work relative value and DPEI recommendations for CMS to consider. The RUC develops recommendations for almost all services that the CPT Editorial Panel identifies as new or revised in time for the upcoming payment year, although it can often be several years between when the RUC or CMS identifies a service as potentially misvalued and when the RUC develops recommendations for that service. Sometimes this is because the RUC determines, as part of its process, to postpone developing recommendations for the service until more information, such as data about how new technology affects the service, becomes available. Other times, the RUC may suggest to the CPT Editorial Panel and CMS that identified services be deleted or bundled into another service, which means these services would not be valued by CMS at all or would be valued as part of the new bundled service. RUC staff also told us that they spend time discussing with CMS officials whether some services requested by CMS for RUC review need to be reviewed; for example, if specialty societies recently developed recommendations for a service, then the specialty societies may determine that another review is unnecessary. The timing of the RUC’s process for reviewing services can have implications for when CMS establishes relative values for those services, since CMS officials told us they rarely establish work relative values and DPEI for individual services without first receiving RUC recommendations. 
Although the RUC Takes Steps to Mitigate Possible Bias in Its Process, Potential Conflicts of Interest and Data-Related Weaknesses Present Challenges for Ensuring Accurate Medicare Payment Rates

We and others have concluded that physicians who serve Medicare beneficiaries may have conflicts of interest when making relative value recommendations. The RUC has taken steps, though, to mitigate any possible biases that RUC members or specialty societies involved in the recommendation process may have from affecting the RUC’s work relative value and DPEI recommendations. As previously mentioned, entities should have processes in place to address conflicts of interest. While changes to Medicare payment rates for physician services are generally required to be budget neutral—that is, increases in the payment rate for specific services will lead to a decrease in the collective payments for all other services—each individual physician who serves Medicare beneficiaries would nonetheless benefit from an increase in the relative values for the services they perform. Given this potential conflict of interest and other potential conflicts that individual physicians involved in the recommendation process may have, the RUC takes steps to mitigate any possible bias from affecting its recommendations to CMS. For example, the RUC does not assign members to prereview recommendations developed by their own specialty societies. The RUC also prohibits its members from participating in deliberations and voting on services in which they or a family member have a direct financial interest, and may preclude members of specialty societies who disclose financial conflicts from presenting at RUC meetings. The RUC’s deliberation process is also intended to mitigate the effects of possible biases. Through the deliberation process RUC members have the opportunity to question the different specialty societies’ proposed recommendations.
RUC staff said that it is in members’ best interests to question specialty societies’ proposed recommendations since Medicare payment rates are based on the relativity of services to each other in a budget neutral system. To lower the possibility for bias in survey data specialty societies obtain from their physician members, the RUC designed its survey instrument to ask respondents to disclose any direct financial interests they or a family member have in the surveyed service. Additionally, the RUC may discard all survey responses for a service when it believes the survey process was biased. Nevertheless, the reliability of work relative value recommendations may be undermined by survey respondents’ potential conflicts of interest. According to a member of the RUC, specialty societies’ work relative value recommendations are most likely inflated due to physician bias. RUC staff stated that, while the survey data are the beginning of the process to establish work relative value recommendations, the RUC relies on magnitude estimation and the clinical expertise of its members to develop the RUC’s final recommendations. According to RUC staff, this process often resulted in the RUC recommending a work relative value that was at the 25th percentile or lower of the specialty societies’ survey data between 2011 and 2015. While magnitude estimation and the clinical expertise of the RUC’s members may allow the RUC to partially compensate for inflation in specialty societies’ work relative value recommendations, it may not completely eliminate bias. Specifically, the accuracy of the results of magnitude estimation depends both on the accuracy of previously established relative values, which may also suffer from the same reliability issues, and physicians’ abilities to accurately determine the relativity between services, which may be difficult to do for services as disparate as primary care visits and complex surgeries. 
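As a rough illustration of the pattern described above, the sketch below selects the 25th percentile of a set of hypothetical survey responses as a candidate work relative value. This is not the RUC's actual procedure, which combines magnitude estimation with members' clinical judgment; all survey values here are invented for the example.

```python
# Simplified illustration only: the RUC's actual process applies magnitude
# estimation and clinical expertise, but per the text its final
# recommendations were often at or below the 25th percentile of survey
# responses. All survey values below are hypothetical.

def percentile(values, p):
    """Linear-interpolation percentile (0 <= p <= 100) over a list of numbers."""
    data = sorted(values)
    if not data:
        raise ValueError("no survey responses")
    k = (len(data) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(data) - 1)
    return data[lo] + (data[hi] - data[lo]) * (k - lo)

survey_responses = [3.8, 4.0, 4.2, 4.5, 5.0, 5.5, 6.0, 7.5]
candidate = round(percentile(survey_responses, 25), 2)
survey_median = round(percentile(survey_responses, 50), 2)
print(candidate, survey_median)  # 4.15 4.75 -- candidate sits below the median
```

Choosing a low percentile partially offsets any upward bias in the raw responses, but as the text notes, it cannot correct for bias already embedded in the reference values that respondents compare against.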
It is therefore unclear to what extent magnitude estimation—without other reliable data about the work it takes to perform a service—is sufficient to generate accurate work relative value recommendations. We also identified other issues with some of specialty societies’ surveys, including low response rates, low total numbers of respondents, and large ranges in responses that suggest shortcomings with the data. In accordance with federal internal control standards, entities should develop their relative value recommendations based on the most accurate, timely, and reliable data possible, and these shortcomings may further undermine the reliability of the RUC’s relative value recommendations. For example, of the 231 active Medicare physicians’ services that specialty societies surveyed for payment year 2015, the median response rate was 2.2 percent; and while the median number of respondents was 52, for 23 of these services the number of respondents fell below the minimum survey response thresholds that the RUC implemented in 2014. Of these 23 services, there were only 2 for which the RUC submitted temporary work relative value recommendations to CMS and required the specialty society to resurvey for a subsequent RUC meeting. Among the respondents for all 231 services, the range of estimated work relative values was broad. For example, surveys’ 25th percentile work relative value responses were at least 16 percent lower than the median value for half of specialty societies’ surveys in 2015. Finally, survey results may be undermined by the individuals who complete the survey, but the RUC has made efforts to address these issues. Survey respondents are asked to complete surveys for services that apply to them and to indicate how many times they have performed the services in the past year. In our review of the survey data, we found most surveys had at least one respondent who reported that they had not performed the service being surveyed within the past year.
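The survey-quality checks discussed above (response rate, respondent counts against a minimum threshold, and the spread between the 25th percentile and the median) can be sketched as follows. The minimum of 30 respondents is an assumed figure for illustration, not the RUC's actual threshold, and the example responses are hypothetical.

```python
# Hedged sketch of the survey-quality checks described in the text; the
# minimum-respondent threshold of 30 is an assumption for illustration,
# and the example responses are hypothetical, not RUC survey data.
from statistics import median, quantiles

def survey_summary(responses, invited, min_respondents=30):
    """Summarize one specialty-society work survey.

    responses: work relative value estimates from respondents
    invited:   number of members the society surveyed
    """
    n = len(responses)
    q1 = quantiles(responses, n=4)[0]  # 25th percentile
    med = median(responses)
    return {
        "response_rate": n / invited,
        "meets_threshold": n >= min_respondents,
        # Share by which the 25th percentile falls below the median;
        # the report flags spreads of 16 percent or more.
        "p25_below_median": (med - q1) / med,
    }

summary = survey_summary([4.0, 4.5, 5.0, 5.5, 6.0, 8.0], invited=300)
print(summary["meets_threshold"])  # False: only 6 of 300 invited responded
```

A survey like this hypothetical one would fail on both counts the report raises: a 2 percent response rate and a respondent count well below the assumed minimum.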
RUC staff told us that they try to overcome the challenge of low response rates by allowing specialty societies to survey nonrandom samples of their members, to survey those who are familiar with but do not perform certain services, or both; or by allowing specialty societies to collect fewer than the required number of responses for their surveys. (The median number of times respondents reported performing the service in the past year was 10.) While these approaches may help the RUC obtain additional survey responses, they may also further lower the reliability of the RUC’s data. CMS officials have acknowledged that the RUC can experience difficulties collecting sufficient numbers of survey responses if, for example, the services being surveyed have relatively low Medicare utilization. In other words, it is difficult to obtain reliable data about a Medicare service if the service is rarely performed. The RUC provides CMS with its survey data when it submits its recommendations, which may help CMS to draw independent conclusions about the reliability of the RUC’s recommendations and thus how services should be valued. According to the criteria we identified, CMS should establish relative values using the most accurate and reliable data possible, but the reliability of the RUC’s recommendations may be undermined by data weaknesses and by weaknesses in its process due to potential conflicts of interest. Thus, the extent to which CMS does not draw independent conclusions, and instead relies on RUC recommendations for service valuations, presents a challenge for ensuring the accuracy of Medicare payment rates for physicians’ services. Since the RUC’s DPEI recommendations are generally based on input from specialty societies’ PE expert panels rather than on survey data, the reliability of these recommendations depends in part on the expertise of the contributors to these panels.
The RUC encourages specialty societies to include in their PE expert panels both subspecialists and generalists from within the specialty to represent different practice settings, as well as to seek input from practice managers and/or clinical staff familiar with DPEI. If these expert panels do not include a mix of physician and nonphysician experts as encouraged by the RUC, it may affect the reliability of the RUC’s DPEI recommendations. Currently, the extent to which specialty societies’ PE expert panels include a mix of physician and nonphysician experts is unclear. One specialty society told us that specialty societies’ PE expert panels may not have the ideal expertise to make DPEI recommendations. We reviewed some of specialty societies’ 2015 DPEI recommendation forms, which are to include a description of the composition of specialties’ PE expert panels, to determine whether specialty societies formed PE expert panels with the RUC’s recommended broad composition. We found that, while some expert panels did have a broad composition, detailed information on composition was frequently missing.

The RUC Has Improved the Transparency and Representativeness of Its Recommendation Process, but Stakeholders Still Have Some Concerns about These Areas

In recent years, the RUC has taken steps to improve both the transparency and representativeness of its recommendation process. According to relevant statutory criteria and federal internal control standards, entities should maintain transparent processes for establishing relative values. To improve its transparency, the RUC increased the amount of information publicly available online, thus enhancing the public’s access to information about its process. For example, in 2012, the RUC began posting the results of its votes on individual services on its website following CMS’s publication of the final rule establishing the physician fee schedule each year. In 2013, the RUC began posting its meeting minutes online.
Additionally, the RUC makes an online product, RBRVS DataManager Online, available for purchase that includes information on services’ current DPEI and the RUC’s most recent work relative value recommendations. As a result of these efforts, the public can have a better understanding of the RUC’s process and knowledge of the recommendations submitted to CMS. The RUC also has taken steps to improve its representativeness by adding new specialty societies to its membership, which is important because stakeholders (such as different physician specialties) should have opportunities to comment and provide input on the RUC’s process per federal internal control standards. Based on feedback from stakeholders and changing trends in patient demographics, in 2012 the RUC added a permanent seat for the American Geriatrics Society, a specialty society that did not meet the criteria for having a permanent seat on the RUC but that had expertise in caring for a large, discrete patient population. The RUC also added a rotating seat for a primary care representative—in addition to the permanent seats currently held by various specialty societies that provide primary care services—to increase representation of the specialty on the RUC in response to stakeholders’ concerns that primary care was underrepresented. As a result of these changes, the RUC may be able to consider an increasing variety of stakeholder perspectives. Nevertheless, some stakeholders have continued to express concerns about both the RUC’s transparency and representativeness. With respect to the RUC’s transparency, some stakeholders have said that they cannot determine whether the RUC’s recommendations are biased in favor of certain specialty societies because the RUC does not publish how individual members vote on services.
In response to these concerns, RUC staff stated that they do not disclose how individual members vote in order to protect members’ independence throughout the deliberation process—for example, from outside lobbying and potential negative feedback from colleagues. Additionally, the RUC’s public total vote counts show that its votes on services are typically unanimous. RUC staff said this unanimity typically results from members resolving disagreements about services during deliberations (before voting occurs) and that voting does not usually align based on specialty. With respect to the RUC’s representativeness, stakeholders such as the American Academy of Family Physicians have expressed concerns that primary care physicians are underrepresented on the RUC, which biases the RUC’s recommendations against primary care services. According to the RUC, however, the mix of specialties represented in its membership does not affect the types of services for which it makes recommendations to CMS. The RUC also reported that it has recommended substantial increases to primary care services each time these services have been identified for review. To try to determine whether the RUC’s reviews of services underrepresented primary care services, we reviewed the categories of services for which the RUC made work relative value recommendations to CMS between 2011 and 2015. We found that over these years, the number of recommendations the RUC made to CMS for evaluation and management services (a proxy for primary care services) was proportional to the total number of Medicare services in the evaluation and management category.
Specifically, during this period, the 16 evaluation and management services reviewed by the RUC comprised, on average, 1 percent of the RUC’s recommendations, which was equal to the percentage of all Medicare services in this category. Additionally, the RUC was more likely to recommend increases for the work relative values of existing evaluation and management services than for existing services of any other category. However, evaluation and management services for which the RUC made work relative value recommendations represented only 2 percent of Medicare spending on all services with RUC recommendations, which was significantly lower than the percentage of Medicare spending on all services in this category (43 percent). Although these results do not indicate whether primary care services are being undervalued by the RUC, they do indicate that for payment years 2011 through 2015 the RUC reviewed these services in proportion to their numbers, but did not review these services in proportion to their impact on overall Medicare spending.

CMS’s Process for Establishing Relative Values May Not Ensure Accurate Medicare Payment Rates and Lacks Transparency

CMS’s process for establishing relative values embodies several elements that cast doubt on whether it provides assurance of accurate Medicare payment rates. While CMS stated that it complies with a statutory requirement governing how often physicians’ services are to be reviewed, CMS does not track when a service was last valued or have a documented standardized process for prioritizing its review of services. The agency also has limited documentation about its process, and does not have any documentation with specific information about the selected method used to review a specific RUC recommendation.
Lack of transparency in its process and lack of data sources to validate RUC recommendations, combined with evidence that CMS relies heavily on the RUC for relative value recommendations despite weaknesses with the RUC’s data, may undermine payment rate accuracy.

CMS Does Not Track a Service’s Last Valuation or Have a Documented Standardized Process for Prioritizing Reviews

CMS officials told us they comply with the statutory requirement to review relative values for all Medicare physicians’ services at least every 5 years by annually identifying new, revised, and potentially misvalued services for review. Officials explained that they are not required to revalue all services every 5 years through a full revaluation process involving the RUC. Rather, CMS officials said they meet the statutory requirement to review relative values every 5 years by applying screens that are designated in statute to all services and determining whether the resulting services need to be revalued. This indicates that CMS has a process in place to ensure that relative values are reviewed regularly and revalued if necessary. The officials said they also annually identify services for review through other mechanisms, including conversations with stakeholders and nominations from the public. Officials told us they review the results of these actions to determine which services need to be revalued. However, we found that CMS does not have a standard process for identifying services for review each year, nor does it track when a service was last valued. To effectively apply the statutory criteria for identifying potentially misvalued services, CMS should prioritize reviews of services based on results of risk assessment and ongoing monitoring, but CMS does not have a standard process for determining which of these screens to apply in a given year.
When asked how they select a screen, CMS officials said they decide in part on the basis of what they learn from (1) RUC meetings, (2) stakeholders, and (3) other sources such as the news and the internet. Officials could not provide any supporting documentation to indicate how they select which screens to apply in a given year. Furthermore, CMS officials told us that they do not maintain a database to track when services were last valued; rather, they rely on the final rules addressing changes to the Medicare Physician Fee Schedule to determine when services were last valued to assist in prioritizing the review of services and then determine whether a service needs to be valued again. Officials said that tracking when a service was last valued was challenging because, for example, if CMS identifies a service as potentially misvalued, the CPT Editorial Panel may then revise the service by separating it into multiple services or even deleting it. Thus, under the current process CMS officials said it was more efficient to determine when a service was last valued once it had been identified as potentially misvalued, rather than to track thousands of Medicare services individually. Although officials said they use the final rules to approximate when identified services were last valued and then determine whether a service needs to be valued again, this approach does not allow CMS to proactively flag services for review that had not been revalued over an extended period of time. Our analysis showed that the existing services reviewed between 2011 and 2015 tended to account for a large share of Medicare spending. (The most recent Medicare physician services expenditure data available at the time of our analyses were from 2013, so we used 2013 expenditure data as a proxy for 2014 expenditure data when calculating the 2014 spending quintile of 2015 services.) However, due to the small number of services reviewed each year, the existing services reviewed between 2011 and 2015 represented under one-third of all Medicare expenditures on physicians’ services.
CMS Process for Establishing Relative Values Lacks Transparency, and Heavy Reliance on RUC Recommendations May Undermine Payment Rate Accuracy

CMS makes some information about its process for establishing relative values available to the public, but some information on the services under review is not included, which limits stakeholders’ knowledge about whether payment rates are likely to change for these services. Through rulemaking published in the Federal Register, CMS describes how it identifies services for review and the methods it may use to review RUC recommendations. In addition, CMS has increased the amount of information it discloses through rulemaking in recent years. For example, for payment year 2009 CMS began listing services it identified as potentially misvalued in the proposed rule. Additionally, CMS began including information in the final rule for payment year 2011 about whether it had refined the RUC’s DPEI recommendations. However, although CMS rulemaking currently lists services for public comment that it or the public identified as potentially misvalued, CMS does not include information on services identified by the RUC as potentially misvalued prior to addressing the RUC’s recommendations. Stakeholders should have opportunities to comment and provide input on CMS’s process per federal internal control standards. However, unless stakeholders monitor the RUC’s activities, they are unaware that these services are under review and that payment rates for them may change until CMS publishes its responses to the RUC’s recommendations for these services. Thus, stakeholder participation in CMS’s process is limited because of incomplete information regarding which services are undergoing RUC—and eventually CMS—review.
Moreover, while CMS provides general information on how it reviews RUC recommendations, it does not document a process for reviewing recommendations that would identify the resources considered during its review of specific RUC recommendations. Entities should maintain a transparent process for establishing relative values, including having documentation about their processes and disclosing information upon which decisions were based to the extent possible. In the case of CMS, the information provided in the proposed and final rules addressing changes to the physician fee schedule published in the Federal Register each year is the only source of documentation about CMS’s process. While past rules indicate that the agency uses multiple methods for reviewing RUC recommendations, they do not provide specific information on the selected method used to review a particular recommendation, and thus CMS does not fully disclose information upon which its decisions were based. To try to better understand what a CMS review includes, we requested supporting documentation for two services CMS recently reviewed. However, CMS was unable to produce supporting documentation for its reviews of these services. CMS officials told us they do not have additional documentation, including internal or external policies or guidance documents, to assist them with their review of RUC recommendations. Without such documentation, there is no assurance that CMS followed a standardized process to ensure consistent reviews and accurate relative values. A standardized process is necessary to ensure that established relative values reflect differences in work relative values and DPEI rather than inconsistencies in CMS’s process. Such inconsistencies may affect the relativity of services to each other and undermine the overall accuracy of Medicare payment rates for physicians’ services.
While information on the process CMS uses to review specific RUC recommendations is limited, we have identified two factors that suggest CMS relies heavily on RUC recommendations when establishing relative values. First, according to CMS officials, the agency does not have its own data sources to validate RUC recommendations because such data sources do not exist, so officials generally rely on the RUC’s recommendations as their primary data source for work relative value and DPEI recommendations. The RUC is currently the only source of comprehensive information available regarding the physician work, clinical staff, medical supplies, and equipment required to provide Medicare physicians’ services—no alternative sources currently exist for CMS to consider that can provide information on these components for all Medicare services. Second, participation from other stakeholders in the process for establishing relative values is limited. Specifically, while CMS has provided opportunities for stakeholders to participate in the evaluation process, few stakeholders have taken advantage of them. For instance, for payment year 2012, CMS introduced a public nomination process through which anyone may nominate a potentially misvalued service for review on an annual basis. Through this process, stakeholders have an additional opportunity to provide input into CMS’s process. However, CMS received no public nominations for payment year 2014, and received only two nominations for payment year 2015. CMS officials also told us that, in instances when stakeholders submit additional information for CMS to consider when reviewing a service, the submitted information often duplicates what officials had already considered. As a result, in the majority of cases, CMS has accepted the RUC’s work relative value recommendation.
For example, our analysis shows that between payment years 2011 and 2015, CMS agreed with the RUC’s recommended work relative value on average 69 percent of the time, with its acceptance rate ranging from 60 to 77 percent. (See fig. 5.) The extent to which it agreed varied by the type of recommendation the RUC made. Specifically, CMS most often agreed with RUC recommendations to maintain the current work value (85 percent agreement rate on average, ranging from 69 to 98 percent), followed by agreement with RUC recommendations of decreases (77 percent on average, ranging from 64 to 93 percent), and RUC recommendations of work relative values for new services (64 percent on average, ranging from 46 to 77 percent).

CMS Is Developing an Approach for Validating Relative Values, but Does Not Yet Have a Specific Plan for Doing So or for Addressing Other Data Challenges

CMS does not yet have a formal process for validating RUC recommendations, but is developing an approach as required by PPACA. Currently, CMS reviews the RUC’s recommendations and data as part of its process for establishing relative values and agrees with or refines them based on, for example, the agency’s assessment of the RUC’s data or completion of a clinical review. As previously mentioned, CMS does not currently have a way to systematically (1) validate that the RUC’s proposed work relative values—and the underlying time and intensity assumptions or DPEI recommendations—are correct, or (2) determine what they should be. However, PAMA specifically authorized CMS to collect and use information on physicians’ services in the determination of relative values and appropriates $2 million each year beginning with fiscal year 2014 to carry out this authority. Although CMS officials told us it is too soon to say how they will spend these funds, CMS has used other funds to contract with two external entities—the Urban Institute and the RAND Corporation—to develop validation models for relative values.
These contracts focus on validating work relative values for which recommendations are developed by the RUC. The Urban Institute contract focuses on collecting time data for a selection of services from different health care entities, given that there have been concerns about the accuracy of the times used to estimate work relative values. The Urban Institute’s goal is to collect data from administrative sources, such as electronic health records, and direct observations in order to, among other things, compare new time data against the current times used for the selected services and to develop alternative models of work values. As of November 2014, the Urban Institute had issued an interim report that included a discussion about the challenges it had encountered when collecting objective time data. We spoke with the researchers, who told us that their biggest challenge was trying to use the RUC’s descriptions of the services when collecting data through direct observations; specifically, the RUC’s descriptions differed from what was observed, such as the tasks actually performed and by whom (e.g., a physician versus clinical staff). The RAND Corporation contract focused on using existing time data to develop validation models to predict work relative values and the individual components of work relative values (time and intensity), based on a subset of surgical services. RAND issued its final report in November 2014. RAND researchers told us they deconstructed the total work relative values for the selected services into, for example, the different times and intensities required to complete the work depending on whether it was the beginning, middle, or end of the service, such as the time required for scrubbing up before a service or evaluating a patient afterward. They used these deconstructed times and intensities to help develop models that could predict new values for these subcomponents and, when summed together, estimate new total work relative values.
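The deconstruction RAND describes can be illustrated with a minimal sketch. All phase names, times, and intensities below are hypothetical values invented for illustration; they are not RAND's data or model.

```python
# Illustrative sketch only: a service's total work relative value modeled as
# the sum of phase-level (minutes x intensity) components, echoing RAND's
# deconstruction of work into pre-, intra-, and post-service pieces.
# All numbers are hypothetical, not RAND's or the RUC's actual estimates.

phases = {
    # phase name: (minutes, intensity per minute) -- invented for illustration
    "pre-service (e.g., scrubbing up)": (15, 0.02),
    "intraservice": (60, 0.08),
    "post-service (e.g., evaluating the patient)": (20, 0.03),
}

# Summing the deconstructed components yields an estimated total work RVU.
total_work_rvu = sum(minutes * intensity for minutes, intensity in phases.values())
print(f"Estimated total work RVU: {total_work_rvu:.2f}")
```

A validation model in this spirit would predict each phase's time and intensity from existing data, then compare the summed estimate against the currently established total value.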
In developing its models, RAND found that its estimates of intraservice time, which were based on data from existing databases, were typically shorter than the current CMS estimates (which consider the RUC’s estimates). RAND developed several models for predicting total work relative values, each of which accounts for different modeling choices. CMS officials were unable to tell us how they intend to use the results of the Urban Institute’s and RAND Corporation’s studies, but both contractors have highlighted areas where further work is needed before CMS will be able to fully validate relative values. For example, the RAND Corporation reported that additional research is needed in determining how to quantify and validate the intensity component of work relative values. Additionally, the Urban Institute reported that the accuracy of the RUC’s descriptions of services needs further review. Further review is important because if physicians are no longer performing certain tasks associated with a service, then including these tasks in an estimate of a physician’s work relative value could lead to inflated Medicare payment rates for that physician service. CMS’s validation approach will also require determining whether it is appropriate to validate relative values at the service level or physician level and the extent to which some other mechanism—such as an independent panel of experts—would be useful. The Urban Institute and RAND both adopted a “bottom-up” approach to validating work relative values, meaning that the collection and analysis of data would be focused on specific services. However, in 2014 MedPAC expressed some concerns about a “bottom-up” approach, including that, among other things, analyses conducted on a service-by-service level are costly, burdensome, and subject to bias.
In light of these concerns, MedPAC suggested a “top-down” approach, which, in contrast, involves the physician as the unit of analysis and examines the mix of services provided by the physician and the total time worked on the services. In addition to a top-down or bottom-up approach, another mechanism for validating work relative values could come from an independent technical panel. MedPAC has previously recommended that CMS create such a panel—which may include individuals with expertise in health economics and physician payments, along with clinical expertise—to help CMS establish more accurate relative values and to reduce its reliance on the RUC. When we asked whether they had considered convening such a panel, CMS officials told us they had not, because determining the right balance of expertise among panelists would be challenging and because, if the panel were to include physicians, it would likely duplicate the current RUC process. However, until CMS determines what process it can use to validate the RUC’s recommendations against other sources, it will not be able to address the shortcomings with the RUC’s data. CMS also has limited pricing information for DPEI, but the agency is exploring options for obtaining more accurate, reliable pricing data. CMS has repeatedly stated in rulemaking that it is difficult for the agency to obtain reliable pricing data for DPEI; that its pricing information is almost exclusively anecdotal; and that officials sometimes price items on the basis of a single invoice or a small number of invoices. While the RUC submits paid invoices for new medical supplies and equipment to CMS, RUC staff told us that providing pricing information for other medical supplies and equipment is outside of the scope of their expertise and that CMS should obtain this information directly from manufacturers or other sources.
CMS encourages other stakeholders to provide CMS with updated pricing as well, and has pursued other options for obtaining reliable pricing data in the past. For example, CMS has contracted with consultants to obtain pricing data and has considered using data from the General Services Administration medical supply schedule. When asked about revisiting these approaches, CMS officials told us that there were advantages and disadvantages to them, that they continue to consider ways to obtain reliable pricing data, and that any plans for doing so will be proposed through rulemaking. PPACA requires CMS to develop a plan to validate the data used to establish relative values and specifically authorized CMS to employ a range of specific activities to conduct the analysis, including the use of contractors to collect data for validating relative values. These activities may then generate additional data sources against which CMS could validate the data used to establish relative values. CMS officials told us they are considering using the funds appropriated by PAMA to obtain more accurate, reliable pricing data, but they did not share whether they would use contractors to obtain these data. Because CMS does not have a specific timeline or plan for using these funds, including how these funds may be used to assist CMS with developing its validation approach, it continues to delay establishing a process to validate the accuracy of payment rates under the fee schedule, as required by statute.

Conclusions

Given the amount of Medicare spending on physicians’ services—approximately $70 billion in 2013—and the fact that other payers base their payment rates at least in part on Medicare payment rates for physicians’ services, the accuracy of Medicare payment rates has major implications for the health care system. For example, financial incentives could induce some physicians to oversupply overvalued services and undersupply undervalued services.
Moreover, if categories of services are systematically overvalued, the accompanying financial incentives could affect individuals’ decisions to become trained in certain specialties. Thus, it is important for CMS to establish accurate Medicare payment rates for physicians’ services to promote prudent spending of taxpayers’ and beneficiaries’ money and to promote a workforce that provides appropriate care for patients. Weaknesses in the RUC’s relative value recommendation process and in its data present challenges for ensuring accurate Medicare payment rates. First, physicians who serve Medicare beneficiaries—including members of the RUC and specialty societies—have potential conflicts of interest with respect to the outcomes of CMS’s process for setting payment rates for Medicare physicians’ services. Second, we found some of the RUC’s survey data to have low response rates, low total numbers of responses, and large ranges in responses. While we acknowledge it is difficult to collect sufficient and reliable data, especially for low-volume Medicare services, these challenges nonetheless undermine the reliability of the RUC’s recommendations to CMS. Furthermore, because CMS relies on the RUC’s recommendations when establishing relative values, these challenges may also result in CMS setting inaccurate Medicare payment rates for physicians’ services. In addition, CMS’s process lacks transparency. In particular, because CMS does not document the data sources it considered during its review of specific RUC recommendations, it cannot demonstrate what other resources it relied on to make its decisions and cannot provide assurance that it is following a consistent process. Furthermore, although CMS rulemaking currently lists services that CMS or the public identified as potentially misvalued, it does not include services identified by the RUC in this list.
Without advance notice of all potentially misvalued services identified for review, the extent to which stakeholders can participate is limited, and CMS may be missing opportunities to enhance stakeholder involvement and improve the accuracy of relative values, and thus, payment rates. The RUC is currently the only source of comprehensive information available regarding the physician work, clinical staff, medical supplies, and equipment required to provide Medicare physicians’ services—no alternative sources currently exist for CMS to consider that can provide information on these components for all Medicare services. CMS has begun taking steps to improve its process by beginning research on how to develop an approach for validating relative values; however, it does not yet have a specific plan for how it will do so, how it will use funds appropriated for the collection and use of data on physicians’ services, or how it will address other data challenges. Without a timeline and a plan for determining its approach, including how it will use the funds appropriated by PAMA to assist it with validation, CMS risks continuing to use payment rates that may be inaccurate.

Recommendations for Executive Action

The Administrator of CMS should take the following three actions to help improve CMS’s process for establishing relative values for Medicare physicians’ services:

1. Better document the process for establishing relative values for Medicare physicians’ services, including the methods used to review RUC recommendations and the rationale for final relative value decisions.

2. Develop a process for informing the public of potentially misvalued services identified by the RUC, as CMS already does for potentially misvalued services identified by CMS or other stakeholders.

3. Incorporate data and expertise from physicians and other relevant stakeholders into the process as well as develop a timeline and plan for using the funds appropriated by PAMA.
Agency and Third Party Comments and Our Evaluation

We provided a draft of this report for review to HHS and received written comments that are reprinted in appendix II. Because of the focus on the RUC in this report, we also provided the AMA an opportunity to review a draft of this report. We received written comments from the AMA, which we have summarized below. Following is our summary of and response to comments from HHS and the AMA.

HHS Comments

In its comments, HHS concurred with two of our three recommendations, and summarized the steps the agency has already taken to increase transparency of its process and stakeholder involvement. Specifically, HHS concurred with our recommendation that CMS better document its process for establishing relative values, including the methods it used to review RUC recommendations. HHS stated that CMS establishes work relative values for new, revised, and potentially misvalued services based on its review of a variety of sources of information, including the RUC. HHS also stated that CMS assesses the methodology, data, and underlying rationale the RUC uses to develop its recommendations, and that CMS continues to improve the transparency of its process by including more detail on its process in its rulemaking. As an example, HHS noted that CMS has provided more details in its rulemaking regarding its review of the RUC’s DPEI recommendations. While we acknowledge that CMS has increased documentation of its process in rulemaking, we believe that documentation is lacking for other aspects of CMS’s process. For example, as we stated in the report, CMS officials told us they do not have additional documentation, including internal or external policies or guidance documents, to assist them with their review of RUC recommendations. Without such documentation, stakeholders have no assurance that CMS followed a standardized process to ensure consistent reviews and accurate relative values.
HHS also concurred with our recommendation that CMS incorporate data and expertise from relevant stakeholders into its process and develop a timeline and plan for using the funds appropriated by PAMA. HHS stated that CMS’s process allows stakeholders to annually nominate potentially misvalued services for review, and that members of the public may attend RUC meetings. HHS also stated that CMS is assessing the outcomes of the Urban Institute’s and RAND’s research to determine the most effective and fiscally responsible way to use the funds appropriated by PAMA. HHS indicated that CMS is using the outcomes of this research to help inform the development of a timeline for use of the funds appropriated by PAMA, but since this work is ongoing, HHS did not provide an estimate of when CMS might finalize such a timeline. We acknowledge that CMS has taken steps to incorporate additional data and expertise into its process, and we describe these steps in our report. However, we believe that CMS needs to do more in both of these areas to increase the accuracy of Medicare physician payment rates. For example, CMS could take specific actions to determine how to incorporate more accurate and reliable sources of pricing data into its process. In addition, CMS could incorporate input from stakeholders apart from the RUC into its process—such as from salaried physicians or those who serve non-Medicare beneficiaries, or from individuals with expertise in physicians’ payments—through methods not limited to public comment on rulemaking. HHS did not concur with our recommendation to include the services identified as potentially misvalued by the RUC in its rulemaking to allow for public comment, prior to finalizing its list of potentially misvalued services for the RUC to review. 
While HHS acknowledged that some stakeholders may not be aware of all potentially misvalued services being reviewed by CMS prior to the establishment of interim final values for those services in a final rule, HHS expressed concern that implementing the recommendation would require CMS to identify all potentially misvalued services through notice and comment rulemaking before the RUC begins its review process. It was not our intention to recommend CMS establish a new rulemaking process or delay the timing of its reviews of services. Therefore, we reworded our recommendation to clarify that CMS may determine how best to inform stakeholders of services identified as potentially misvalued by the RUC and for which payment rates may subsequently change. HHS also described the steps it had already announced it would take to improve the transparency of its process, beginning for payment year 2017, such as including proposed changes in the relative values for almost all services in the proposed rule, and finalizing changes only after CMS considers and responds to public comments in the final rule. The elimination of most interim final relative values will allow stakeholders to comment on values before they become effective, which is not the case under the current process. However, under the new process CMS does not plan to inform the public of services identified by the RUC as potentially misvalued. We believe it is important for CMS to inform stakeholders of those services identified by the RUC as potentially misvalued before CMS receives RUC recommendations for these services and subsequently publishes values in the proposed rule each year, as CMS does for services the agency or the public has identified as potentially misvalued. 
Informing stakeholders about all potentially misvalued services identified for review—including those identified by the RUC—would facilitate greater transparency of CMS’s process and give stakeholders more time to provide input on values for these services if they so choose.

AMA Comments

Overall, the AMA agreed with our recommendations, though the AMA also stated that it is important for CMS to implement our recommendation regarding publishing the services the RUC identified as potentially misvalued in a way that does not delay the RUC’s process. The AMA also stated that the draft report did not sufficiently acknowledge the challenges in collecting reliable survey data—especially for low-volume services—and that the RUC’s survey methodology, followed by rigorous cross-specialty review, is the best available approach to collecting these data. In particular, the AMA stated that the report’s principal criticism of the RUC process of developing work relative value recommendations is that the RUC’s reliance on survey data is insufficient to ensure accurate work relative value recommendations. The RUC requires a random sample from specialty societies, and the AMA pointed out that many specialty societies email their entire membership or a large sample of their membership to obtain survey responses. The AMA also noted that a low response rate is “understandable” given that 80 percent of services paid under the Medicare Physician Fee Schedule with physician work relative values assigned to them are performed under 10,000 times per year. The AMA stated that it is a testament to the RUC’s efforts that we found specialty societies collected an average of 52 physician responses.
Furthermore, in response to our finding that most of the surveys we reviewed for payment year 2015 had at least one response in which the respondent reported not performing the surveyed service within the past year, the AMA asserted that the opinion and experience of physicians who have performed the service (even if not very recently) are still valid contributions. We recognize it is difficult to obtain reliable survey data, especially if a service is rarely performed, and that physicians can still provide clinical expertise for a service even if they did not perform the service within the past year. However, these issues still call into question the reliability of the RUC’s recommendations, which underscores the importance of our recommendations that CMS seek additional sources of reliable data to incorporate into its process, as well as develop a timeline and plan for using the funds appropriated by PAMA to develop its approach for validating relative values, including the RUC’s recommendations. The AMA also described how the RUC relies on magnitude estimation as the methodology to develop physician work relative values, and noted that the RUC’s use of physician survey data is only the beginning of the process to establish work relative value recommendations. However, we have some concerns with relying on the RUC’s review of services through magnitude estimation to supplement the absence of reliable data on specific services. As we stated in the report, the accuracy of the results of magnitude estimation depends both on the accuracy of previously established relative values, which may also suffer from the same reliability issues, and physicians’ abilities to accurately determine the relativity between services, which is very difficult to do for services as disparate as primary care visits and complex surgeries. 
The extent to which magnitude estimation—without other reliable data about the work it takes to perform a service—is sufficient to generate accurate work relative value recommendations is therefore unclear. Finally, the AMA noted that the RUC would welcome the identification of other reliable data that would provide a representative and consistent source of information to be considered in addition to survey data. To date, the AMA has found only one reliable set of extant physician time data, the Society of Thoracic Surgeons Database, which the RUC has used in its valuation process. We agree that the RUC is currently the only source of comprehensive information available regarding the physician work, clinical staff, medical supplies, and equipment required to provide Medicare physicians’ services, and have clarified this point in our report. The AMA also stated that the report suggested the Urban Institute was unable to obtain accurate time data based on the RUC’s definition of time or services, and commented that the RUC’s definitions of physician time were established by Harvard and CMS, not the RUC. While it is true that Harvard and CMS were responsible for determining the initial definitions for the physician work required to provide Medicare physicians’ services, the AMA was also involved in that effort. For example, when Harvard researchers surveyed physicians about the work required to perform services, the descriptions of the services were based on AMA’s CPT descriptions or on descriptions provided by small groups of physicians representing different specialties that were identified through a process coordinated by the AMA. With respect to RAND’s research, the AMA commented that we failed to mention that RAND generally found that CMS’s current work valuations of services were consistent with RAND’s predicted work valuations. As we stated in the report, RAND’s estimates of intraservice time were typically shorter than the current CMS estimates. 
As a result, RAND developed several models for predicting work relative values, because the implications of these shorter times on intensity and hence overall work relative values are currently unknown. AMA also provided technical comments on a draft of this report, which we have incorporated as appropriate. We are sending copies of this report to appropriate congressional committees, the Administrator of CMS, and other interested parties. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Establishing Relative Values for a Medicare Physician Service through the RUC and CMS Processes The process to develop and establish relative values involves three main steps: (1) the Centers for Medicare & Medicaid Services (CMS), the American Medical Association’s (AMA) Relative Value Scale Update Committee (RUC), and the AMA’s Current Procedural Terminology (CPT) Editorial Panel identify services for the RUC to review; (2) the RUC works with specialty societies to use surveys and other methods to develop work relative value and direct practice expense input (DPEI) recommendations for CMS for identified services; and (3) CMS considers each RUC recommendation it receives to determine whether to use it when establishing relative values for physicians’ services. To describe how a service is reviewed through the RUC’s and CMS’s processes and the timeline for establishing relative values, we selected an active Medicare physician service that had recently been valued through these processes for a case study. 
The service we selected was CPT code 31647, which is used to report the insertion of bronchial valve(s). RUC staff told us that this service was reviewed through the RUC’s standard process. Step 1: Services identified for RUC review (October 2011 – February 2012) At its October 2011 meeting, the CPT Editorial Panel identified CPT code 31647—a new service—for the RUC to review. This service was one of three new services created to report the sizing and insertion or removal of bronchial valves. CPT code 31647 and the other two new services were previously reported using temporary CPT codes that are reserved for tracking new and emerging technologies and were assigned final CPT codes once the CPT Editorial Panel determined that the services had become more widespread; in other words, that the services were generally performed by many physicians in clinical practice in multiple locations. Once the CPT Editorial Panel identified CPT code 31647 for review, it forwarded the service—along with the other additions and revisions to services it was proposing for payment year 2013—to RUC staff, who then worked with RUC specialty societies to determine what action the RUC would take regarding the service. For example, the RUC may decide to develop work relative value and DPEI recommendations to submit to CMS for a service, or may decide to refer a service to the CPT Editorial Panel for further review. In the case of CPT code 31647, the American College of Chest Physicians and the American Thoracic Society both indicated an interest in developing work relative value and DPEI recommendations for this service. Given their shared interest, these two specialty societies agreed to survey physicians about CPT code 31647 and develop joint recommendations for the RUC to consider during its January 2012 meeting. 
RUC staff told us that CPT code 31647 was originally established as an “add-on code” to describe the insertion of bronchial valves in lungs; that is, it could only be reported in conjunction with codes that described the primary procedure of which it was a part, a bronchoscopy. Conversely, CPT code 31648 was established as a code that could be reported separately (a “stand-alone code”) to describe bronchoscopies that include the removal of valves from one lobe of the lung, and 31649 was established as an accompanying add-on code to describe the removal of valves from additional lobes of the lung. In January 2012, the American College of Chest Physicians and the American Thoracic Society recommended that 31647 be revised as a stand-alone code with its own add-on code, to parallel the structure of 31648 and 31649. In February 2012, the CPT Editorial Panel revised 31647 and created add-on code 31651, after which the American College of Chest Physicians and the American Thoracic Society reaffirmed their decision to survey the service and to develop joint recommendations for the April 2012 RUC meeting. Step 2: RUC develops recommendations (February 2012 – May 2012) Specialty societies develop work relative value recommendations based on surveys In preparation for the April 2012 RUC meeting, the American College of Chest Physicians and the American Thoracic Society distributed the RUC’s standard work survey to a random sample of their members but did not receive a sufficient number of responses; the specialty societies then distributed the survey to a targeted sample of 85 physicians who were trained to perform the service and/or who owned the equipment required to perform the service, based on a list of physicians provided by a medical device vendor. The specialty societies obtained responses from 30 out of 85 physicians for a response rate of 35.2 percent, which met the RUC’s required minimum number of survey responses. 
The specialty societies used 16 out of 30 responses (53.3 percent) for the intensity portion of the survey. The American College of Chest Physicians and the American Thoracic Society’s joint relative value committee analyzed the survey data collected for CPT code 31647 and determined that the median work relative value of 4.40 (survey responses ranged from 1.50 to 6.00) and median intraservice time of 60 minutes were appropriate. The joint relative value committee also determined that 30 minutes of postservice time was appropriate. Although the median survey result for preservice time was 42.5 minutes, the committee determined that the RUC’s standardized preservice package of 25 minutes was appropriate for CPT code 31647. RUC staff told us that if specialty societies cannot justify survey respondents’ median preservice time estimates, they usually recommend using the RUC’s standardized time packages. Specialty societies develop DPEI recommendations based on PE expert panels The American College of Chest Physicians and the American Thoracic Society’s joint practice expense (PE) committee met via conference call to review the set of direct practice expense inputs—and the associated times and quantities—necessary to perform the service. The joint PE committee consisted of 2 private practice physicians, 2 academic-based physicians, 2 medical practice administrators, 1 registered nurse consultant, and 1 certified public accountant. The committee determined that 13 minutes of preservice clinical labor time divided among completing patient referral forms (5 minutes), coordinating presurgery services (3 minutes), scheduling space and equipment (3 minutes), and allowing for follow-up phone calls and prescriptions (2 minutes) was required to perform CPT code 31647. 
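The percentages and time totals in the CPT code 31647 case study follow directly from the raw counts reported above. A minimal illustrative sketch (variable names are ours, not the RUC’s) reproducing that arithmetic:

```python
# Survey figures from the CPT code 31647 case study described above.
responses, sample_size = 30, 85
response_rate = 100 * responses / sample_size        # ~35.29; the report states 35.2 percent

intensity_used = 16                                  # responses used for the intensity portion
intensity_share = round(100 * intensity_used / responses, 1)  # 53.3 percent

# Preservice clinical labor time recommended by the joint PE committee (minutes):
# referral forms + presurgery coordination + scheduling + follow-up calls/prescriptions.
preservice_labor = 5 + 3 + 3 + 2                     # 13 minutes total

print(f"response rate: {response_rate:.2f}%")
print(f"share used for intensity: {intensity_share}%")
print(f"preservice clinical labor: {preservice_labor} minutes")
```

This only restates the figures given in the text; it is not part of the RUC’s methodology.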
Specialty societies present work relative value and DPEI recommendations at RUC meeting; RUC then decides on recommendations to send to CMS At the April 2012 RUC meeting, three members of the American College of Chest Physicians and the American Thoracic Society presented their joint committee’s work relative value and DPEI recommendations for CPT code 31647 to the RUC and the RUC’s PE Subcommittee, respectively. RUC staff told us that prior to the meeting, a fourth member disclosed a financial interest in 31647 because he worked as a consultant and researcher for a relevant medical device manufacturer. The RUC’s Financial Disclosure Workgroup determined that this member could provide a brief (less than 5 minutes) presentation describing how the service was performed, and then had to leave the RUC deliberation table. Also prior to the meeting, RUC staff assigned members of the RUC to prereview each recommendation and lead the RUC’s deliberations on the service. During the meeting, the RUC PE Subcommittee reviewed the specialty societies’ DPEI recommendation for CPT code 31647 and forwarded it to the full RUC committee for consideration without modification. The RUC reviewed the work relative value and DPEI recommendations for CPT code 31647 and achieved a two-thirds majority vote to accept both recommendations without modification. The RUC also flagged CPT code 31647 as a new technology service, to be rereviewed in 2016 after additional years of Medicare utilization data would be available. In May 2012, the RUC sent its work relative value and DPEI recommendations for CPT code 31647—as well as for other services deliberated during the April 2012 meeting—to CMS to be considered for the upcoming 2013 payment year. RUC staff told us that they did not record total vote counts in 2012 and so could only report whether the recommendations achieved the two-thirds majority vote required to be accepted by the RUC. 
Step 3: CMS establishes relative values (May 2012 – December 2013) CMS was unable to provide additional supporting documentation on its review of the service when asked, and RUC staff told us they did not have any information about CMS’s clinical review of its recommendations for the service apart from what was included in the Federal Register. RUC and others have 60 days to comment In December 2012, the RUC commented in writing on CMS’s final rule, including CMS’s decisions regarding the RUC’s recommendations for CPT code 31647. According to the comment letter, the American Thoracic Society agreed with CMS’s refinements to its clinical labor DPEI recommendations for CPT code 31647; RUC staff told us that the American College of Chest Physicians did not comment on CMS’s refinements to the service. CMS’s interim final values included in the November 2012 final rule went into effect for the 2013 payment year beginning January 1, 2013. CMS may refine previously established relative values In the final rule establishing the physician fee schedule for payment year 2014, which was published in December 2013, CMS finalized the interim final work relative value and DPEI for CPT code 31647 without further refinement. Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contacts and Staff Acknowledgments GAO Contact: Staff Acknowledgments In addition to the contact named above, Gregory Giusto, Assistant Director; Marissa D. Barrera; Alison Binkowski; George Bogart; Kaitlin Coffey; Elizabeth T. Morrison; Vikki Porter; and Daniel Ries made key contributions to this report.
Payments for Medicare physicians' services totaled about $70 billion in 2013. CMS sets payment rates for about 7,000 physicians' services primarily on the basis of the relative values assigned to each service. Relative values largely reflect estimates of the physician work and practice expenses needed to provide one service relative to other services. The Protecting Access to Medicare Act of 2014 included a provision for GAO to study the RUC's process for developing relative value recommendations for CMS. GAO evaluated (1) the RUC's process for recommending relative values for CMS to consider when setting Medicare payment rates; and (2) CMS's process for establishing relative values, including how it uses RUC recommendations. GAO reviewed RUC and CMS documents and applicable statutes and internal control standards, analyzed RUC and CMS data for payment years 2011 through 2015, and interviewed RUC staff and CMS officials. The American Medical Association/Specialty Society Relative Value Scale Update Committee (RUC) has a process in place to regularly review Medicare physicians' services' work relative values (which reflect the time and intensity needed to perform a service). Its recommendations to the Centers for Medicare & Medicaid Services (CMS), the agency within the Department of Health and Human Services (HHS) that administers Medicare, though, may not be accurate due to process and data-related weaknesses. First, the RUC's process for developing relative value recommendations relies on the input of physicians who may have potential conflicts of interest with respect to the outcomes of CMS's process. While the RUC has taken steps to mitigate the impact of physicians' potential conflicts of interest, a member of the RUC told GAO that specialty societies' work relative value recommendations may still be inflated. 
RUC staff indicated that the RUC may recommend a work relative value to CMS that is less than the specialty societies' median survey result if the value seems accurate based on the RUC members' clinical expertise or by comparing the value to those of related services. Second, GAO found weaknesses with the RUC's survey data, including that some of the RUC's survey data had low response rates, low total number of responses, and large ranges in responses, all of which may undermine the accuracy of the RUC's recommendations. For example, while GAO found that the median number of responses to surveys for payment year 2015 was 52, the median response rate was only 2.2 percent, and 23 of the 231 surveys had under 30 respondents. CMS's process for establishing relative values embodies several elements that cast doubt on whether it can ensure accurate Medicare payment rates and a transparent process. First, although CMS officials stated that CMS complies with the statutory requirement to review all Medicare services every 5 years, the agency does not maintain a database to track when a service was last valued or have a documented standardized process for prioritizing its reviews. Second, CMS's process is not fully transparent because the agency does not publish the potentially misvalued services identified by the RUC in its rulemaking or otherwise, and thus stakeholders are unaware that these services will be reviewed and payment rates for these services may change. Third, CMS provides some information about its process in its rulemaking, but does not document the methods used to review specific RUC recommendations. For example, CMS does not document what resources were considered during its review of the RUC's recommendations for specific services. Finally, the evidence suggests—and CMS officials acknowledge—that the agency relies heavily on RUC recommendations when establishing relative values. 
For example, GAO found that, in the majority of cases, CMS accepts the RUC's recommendations and participation by other stakeholders is limited. Given the process and data-related weaknesses associated with the RUC's recommendations, such heavy reliance on the RUC could result in inaccurate Medicare payment rates. CMS has begun to research ways to develop an approach for validating RUC recommendations, but does not yet have a specific plan for doing so. In addition, CMS does not yet have a plan for how it will use funds Congress appropriated for the collection and use of data on physicians' services or address the other data challenges GAO identified.
Background Annually, the federal government expends hundreds of billions of dollars for a variety of grants, transfer payments, and procurement of goods and services. Because of its size, complexity, weak control environment, and insufficient preventive controls, the federal government risks disbursing improper payments. Agency-specific studies and audits have indicated that improper payments are a widespread and significant problem. They occur in a variety of programs and activities including those involving contract management, financial assistance benefits—such as Food Stamps and Veterans Benefits—and tax refunds. However, some overpayments, by their nature, are not considered improper payments, such as routine contract price adjustments. Legislative efforts have focused on improving the federal government’s control environment. For example, under the Federal Managers’ Financial Integrity Act of 1982 and the Federal Financial Management Improvement Act of 1996, agency managers are responsible for ensuring that adequate systems of internal controls are developed and implemented. An adequate system of internal controls, as defined by the Comptroller General’s internal control standards, which are issued pursuant to the Financial Integrity Act, should provide reasonable assurance that an agency is effectively and efficiently using resources, producing reliable financial reports, and complying with applicable laws and regulations. Accordingly, cost-effective internal controls should be designed to provide reasonable assurance regarding prevention of or prompt detection of unauthorized acquisition, use, or disposition of an agency’s assets. Recent legislation has provided an impetus for agencies to systematically measure and reduce the extent of improper payments. For example, with the advent of the CFO Act, GMRA, and the Results Act, agencies are challenged to increase attention on identifying and addressing improper payments. 
The CFO Act, as expanded by GMRA, requires 24 major departments/agencies to prepare and have audited agencywide financial statements, which are intended to report an agency’s stewardship over its financial resources—including how it expended available funds. The Office of Management and Budget’s Bulletin 97-01, Form and Content of Agency Financial Statements, provides implementing guidance on these CFO Act requirements. In addition, the CFO Act sets expectations for agencies to routinely produce sound cost and operating performance information. Effective implementation of this requirement would enable managers to have timely information for day-to-day management decisions. The CFO Act also requires OMB to prepare and annually revise a governmentwide 5-year financial management plan and status report that discusses the activities the executive branch plans to and has undertaken to improve financial management in the federal government. Additionally, each agency CFO is responsible for developing annual plans to support the governmentwide 5-year financial management plan. The Results Act seeks to improve the effectiveness and efficiency of the federal government by requiring that agencies develop strategic and annual performance goals and report on their progress in achieving these goals. Agency strategic plans are required to include the agency’s mission statement; identify long-term general goals, including outcome-related goals and objectives; and describe how the agency intends to achieve these goals. Agencies are required to consult with the Congress when developing their strategic plans and consider the views of other interested parties. In their annual performance plans, agencies are required to set annual goals, covering each program activity in an agency’s budget, with measurable target levels of performance. Agencies are also required to issue annual performance reports that compare actual performance to the annual goals. 
Together, these plans and reports are the basis for the federal government to manage for results. The Results Act is supported by the development of federal cost accounting standards under the CFO Act, which require agencies to identify the costs of government activities. These standards can lead to and support linking costs with achieving performance levels. This can give managers information for assessing the full costs of goods, services, and benefits compared to program outputs and results. Such information can provide the basis for agencies to develop performance goals to monitor and track improper payments as well as strategies for preventing such future disbursements. The risk of improper payments and the government’s ability to prevent them will continue to be of concern in the future. Under current federal budget policies, as the baby boom generation leaves the workforce, spending pressures will grow rapidly due to increased costs of Medicare, Medicaid, and Social Security. Other federal expenditures are also likely to increase. Thus, absent improvements over internal controls, the potential for additional or larger volumes of improper payments will be present. Figure 1 illustrates the reported and projected trends in federal expenditures, excluding interest on the public debt, for fiscal years 1978 through 2004. Historically, the recovery rates for certain programs identified as having improper payments have been low. Therefore, it is critical that adequate attention be directed to strengthen controls to prevent improper payments. Scope and Methodology This report is based on our reviews of available major agencies’ fiscal year 1998 financial statement reports prepared under the CFO Act, as expanded by GMRA. We reviewed these reports to identify amounts of reported improper payments. We also identified and reviewed recent GAO reports to identify additional types of programs at risk. 
We supplemented our review with IG reports from CFO Act agencies and other information obtained from a variety of sources, such as agency studies. In addition, we reviewed these data sources to discern the causes of improper payments. For the nine agencies that reported improper payments in their financial statement reports, we reviewed the agencies’ Results Act performance plans for fiscal year 2000 to determine the extent to which the plans addressed improper payments. We relied on recent GAO reports and guidance to consider any impact from potential Year 2000 computing problems on improper payments. In selected cases, we interviewed agency CFO and IG personnel. Because of the nature of improper payments, our review would not capture all reported instances of such payments. As requested, relevant GAO reports covering our work in these areas for the past 4 fiscal years are listed at the end of this report. To gather information on existing financial statement and performance reporting criteria, we reviewed relevant professional literature, including the American Institute of Certified Public Accountants’ Codification of Statements on Auditing Standards and the Federal Accounting Standards Advisory Board’s (FASAB) Statements of Federal Financial Accounting Concepts and Standards. In addition, we reviewed OMB Bulletin 97-01, Form and Content of Agency Financial Statements and OMB Circular A-11, Part 2, Preparation and Submission of Strategic Plans, Annual Performance Plans, and Annual Program Performance Reports. We performed our work from June 1998 through August 1999. Our work was conducted in accordance with generally accepted government auditing standards. We provided a draft of this report for comment to the Director of the Office of Management and Budget (OMB). These comments are presented and evaluated in the “OMB Comments and Our Evaluation” section and reprinted in appendix IV. 
Improper Payments Are Widespread Across Government, but the Full Extent Is Unknown Agency-specific studies performed by GAO, IGs, and others indicate that improper payments are a widespread and significant problem. However, efforts by agencies to develop comprehensive estimates have varied. Nine agencies have taken the initiative to disclose improper payments for 17 of their programs in their financial statement reports, which has resulted in the disclosure of important information for oversight and decision-making. At the same time, the methodologies used by some agencies to estimate improper payments do not always result in complete estimates, and many other agencies have not even attempted to identify or estimate improper payments. As a result, the full extent of improper payments governmentwide is largely unknown, which hampers efforts to reduce such payments. Ascertaining the full extent of improper payments governmentwide is critical to determining related causes. Obtaining these data would give agencies baseline information for making cost-effective decisions about enhancing controls to minimize improper use of federal resources. Nine Agencies Reported Improper Payments, but Estimates Are Incomplete Nine of the CFO Act agencies that had issued their fiscal year 1998 audited financial statements as of the end of our fieldwork acknowledged making improper payments. For fiscal year 1998, HHS, USDA, and HUD collectively reported improper payments of $14.9 billion as part of their program expenses in their financial statement reports. HHS’ estimated improper Medicare benefit payments constitute $12.6 billion of this amount, which represents 7.1 percent of the $177 billion in Fee-for-Service payments processed in fiscal year 1998. USDA disclosed $1.4 billion in food stamp overissuances, or approximately 7 percent of its annual program cost of $20.4 billion. 
HUD’s excess housing subsidy payments totaled $857 million, or 4.6 percent of its rental assistance payments for this $18.6 billion program. These agencies have made significant progress in estimating and reporting improper payments for these programs by implementing methodologies that use statistical sampling. However, implementing a statistically valid methodology will pose challenges to agencies for certain programs. The disclosure methods used by HHS, USDA, HUD, and the other six agencies varied. Some agencies, such as the Social Security Administration (SSA), reported known improper payments as receivables and provided explanatory disclosures in the notes accompanying their financial statements. Other agencies disclosed explanatory information in other sections of their financial statement reports, such as in management’s discussion and analysis or in supplemental data sections. In addition, reporting within agencies for different programs also varied. For example, USDA disclosed improper payments of $1.4 billion for the Food Stamp Program, but only acknowledged making improper payments without providing a specific amount for its Federal Crop Insurance Corporation (FCIC). Three of the nine agencies reported improper payments as expenses for 4 programs, while five agencies reported them as accounts receivable for 10 programs. Three agencies acknowledged making improper payments, but did not quantify the dollar amounts for three programs. Eleven of the CFO Act agencies did not report any information related to improper payments in their financial statement reports. Such inconsistent financial reporting makes it difficult to quantify the extent of the problem governmentwide and indicates a need for more guidance. To address this issue, OMB is contemplating revising its guidance to provide uniform reporting and disclosure of improper payments by management. 
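The improper-payment rates reported for the three largest programs can be reproduced from the dollar figures cited above. The sketch below is purely illustrative (amounts in billions of dollars, taken from the text; program labels are ours):

```python
# Fiscal year 1998 improper payments reported by HHS, USDA, and HUD,
# as (improper amount, annual program payments) in billions of dollars.
programs = {
    "Medicare Fee-for-Service (HHS)": (12.6, 177.0),
    "Food Stamp Program (USDA)": (1.4, 20.4),
    "Rental assistance (HUD)": (0.857, 18.6),
}

for name, (improper, payments) in programs.items():
    rate = 100 * improper / payments
    print(f"{name}: {rate:.1f}% improper")    # 7.1%, 6.9%, 4.6%

combined = sum(improper for improper, _ in programs.values())
print(f"combined: ${combined:.1f} billion")   # the $14.9 billion cited above
```

The per-program rates match those disclosed in the agencies’ financial statement reports, and the three amounts sum to the $14.9 billion collectively reported.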
In addition, OMB has made error reduction in the distribution of benefits a Priority Management Objective, which is monitored by the OMB Director. OMB works with agencies on an individual basis to address these issues in ways most appropriate to the individual programs. For example, OMB is working with ED and the Department of the Treasury to examine ways to implement new statutory authorization for IRS verification of income of student aid applicants, in accordance with existing tax and privacy laws. Table 1 lists the nine agencies and the manner in which they reported improper payments in their fiscal year 1998 financial statement reports for the 17 programs identified. See appendix II for a description of these agencies and/or their programs. The extent of the problem for certain of these agencies’ programs is unknown because agencies are not performing comprehensive quality control reviews to estimate the range and/or identify rates of improper payments. For example: SSA reported $2.5 billion in gross receivables as overpayments related to its Supplemental Security Income (SSI) program—a $27 billion program annually providing cash assistance to about 7 million financially needy individuals who are aged, blind, or disabled. These receivables consist of amounts specifically identified over multiple years based on SSA’s discussions with recipients and the results of its efforts in matching data provided by recipients with information from other federal and state agencies, such as IRS 1099 information, VA benefits data, and state-maintained earnings and employment data. SSA reports a statistically based accuracy rate for new SSI awards of 92.5 percent. However, this accuracy rate does not consider the medical eligibility of recipients. 
Since the majority of SSI program dollars are historically directed to recipients with medical disabilities, refining the methodology to factor in any questions concerning medical risk is critical to determining improper payments within this program. According to SSA’s year 2000 performance plan, SSA is developing a comprehensive mechanism for quantifying dollar errors related to SSI disability benefit payments. However, no timing for implementation has yet been determined.

Although HHS reported $12.6 billion in improper payments for its $177 billion Medicare Fee-for-Service program based on a statistically valid sample, it has not attempted to estimate improper payments for the $98 billion Medicaid program. The HHS IG reported that the Health Care Financing Administration (HCFA)—the HHS agency responsible for overseeing the Medicaid program—has no comprehensive quality assurance program or other methodology in place for estimating improper Medicaid payments. Administered by state agencies, Medicaid provided health care services to approximately 33 million low-income individuals. The IG recommended that HCFA work with the states to develop a methodology to determine the range of improper payments in the Medicaid program.

However, developing a statistically valid methodology to estimate Medicaid improper payments poses a challenge. Other state-administered or intergovernmental programs also face difficulties in developing estimates due to the variable nature of the programs and the need to gain the cooperation of state and local government officials nationwide. HCFA has recently drafted a strategy for discussing this issue with states.

Other Programs and Activities Have Improper Payments or Are at Risk

Previous audits conducted by GAO and IGs have identified several other agencies, such as DOD, ED, and IRS, that had improper payments.
As illustrated in figure 2, between fiscal years 1994 and 1998, DOD contractors voluntarily returned $984 million that DOD’s Defense Finance and Accounting Service (DFAS) erroneously paid them—resulting from inadvertent errors, such as paying the same invoice twice or misreading invoice amounts. As a result, the contractors, as opposed to DOD, were determining the existence and amount of erroneous payments. As part of its stewardship duties, DOD is responsible for making these determinations. However, DOD has not yet made a comprehensive estimate of improper payments to its contractors, and there are likely more overpayments that have yet to be identified and returned. With an annual budget of over $130 billion in purchases involving contractors, DOD would benefit from estimating the magnitude of improper payments.

ED is another agency with improper payments. ED’s student financial assistance programs have been designated as high risk since our governmentwide assessment of vulnerable federal programs began in 1990. ED provides over $8 billion in grants to assist over 4 million students in obtaining postsecondary education. As discussed in our January 1999 Performance and Accountability Series, ED-administered student financial aid programs have a number of features that make them inherently risky. They provide grants to a population composed largely of students who would not otherwise have access to the funds necessary for higher education. ED estimates that $78.9 million, or 1 percent, was misspent by grantees in fiscal year 1997; however, an ED IG report indicates that this estimate may be incomplete. A more complete estimate would allow ED to identify areas of greater risk and target corrective actions.

Also, the Earned Income Tax Credit (EITC) program—a refundable tax credit available to low-income, working taxpayers—has historically been vulnerable to high rates of invalid claims.
During fiscal year 1998, IRS reported that it processed EITC claims totaling over $29 billion, including over $23 billion (79 percent) in refunds. Of the $662 million claimed on the 290,000 EITC tax returns with indications of errors or irregularities that IRS examiners reviewed during fiscal year 1998, $448 million (68 percent) was found to be invalid. IRS has not disclosed any estimated improper payments in its financial statement reports.

IRS examinations of tax returns claiming EITC are important control mechanisms for detecting questionable claims and providing a deterrent to future invalid claims. However, because examinations are often performed after any related refunds are disbursed, they are less efficient and effective than preventive controls designed to identify invalid claims before refunds are made. OMB has worked with IRS to start a 5-year compliance initiative to minimize losses in this area. This initiative is intended to increase taxpayer awareness, strengthen enforcement of EITC requirements, and research sources of EITC noncompliance. EITC compliance efforts include a significant focus on pre-refund fraud/error prevention and detection. For example, the EITC compliance initiative includes recalculation of erroneous overclaims, identification of questionable returns, and initiation of many EITC audits, all of which should occur prior to issuing refunds. However, our work has shown that even in cases where IRS has identified potentially erroneous claims, it released refunds prior to completing the reviews.

Other types of federal programs and activities that undergo audits also risk making improper payments. Internal control deficiencies and other problems similar to those prevalent in programs that have acknowledged improper payments suggest that additional federal financial assistance programs, contract management activities, and other miscellaneous programs may also be particularly vulnerable to disbursing improper payments.
For example, USDA’s IG reported that the Natural Resources Conservation Service (NRCS) exhibited significant control weaknesses when determining if farmers qualified for annual payments under the Conservation Reserve Program (CRP). CRP, which disbursed $1.7 billion in fiscal year 1998, provides incentives and financial assistance to farmers and ranchers to retire environmentally sensitive land from production. Due to these control weaknesses, the USDA IG noted that CRP risked making incorrect eligibility decisions, which could result in USDA disbursing improper payments.

Without a measurement of the extent of improper payments, it is difficult to assess the appropriate level of management attention needed to mitigate these program risks. Once agencies have implemented methodologies to estimate the amount of improper payments, they can use this information to develop error rates. Agencies may find it useful to compute the dollar amount of errors as a percentage of program outlays, and the number of transaction errors as a percentage of the total number of transactions processed. Management could then use these error rates to evaluate whether further action is needed to address improper payments.

Internal Control Weaknesses Cause Improper Payments

Pervasive deficiencies in internal control across the federal government result in the payment of federal funds for purposes other than those originally intended. For example, several agencies face challenges in ensuring adequate controls for assessing beneficiaries’ initial and continued eligibility due to ineffective data sharing and sources of information. Also, some agencies have insufficient oversight and monitoring mechanisms, such as site visits and reviews of appropriate documentation, to ensure the validity of payments—particularly for federal financial assistance programs. Systems deficiencies also contribute to improper payments when accurate or timely data are not always available for payment decisions.
Figure 3 illustrates our categorization of internal control weaknesses that contribute to improper payments within the 17 programs where agencies reported improper payments.

Internal Controls Over Eligibility Determinations Are Often Inadequate

As highlighted in our reviews of GAO and IG reports, ensuring adequate controls over determining beneficiaries’ eligibility often proves difficult for many agencies. Initial and/or continued eligibility determination problems were noted for 10 of the 17 programs that reported improper payments. For instance, initial eligibility for HUD’s Section 8 and Public Housing programs—providing $18.6 billion in rental assistance for lower income families in fiscal year 1998—is primarily based on an applicant’s self-reported income. According to HUD’s IG, HUD regulations require owners and housing authorities to verify the information provided, but this process often lacks effective controls to ensure that verifications are adequately performed. In addition, the IG reported that recipients do not always report complete or accurate information. Consequently, improper payments have occurred.

To improve HUD’s procedures for verifying participants’ income and correct this long-standing problem, the HUD IG recommended the following actions: (1) on-site reviews to assess firsthand the housing subsidy administrator’s control environment, (2) confirmations with third parties, and (3) computerized income verification matching to IRS and SSA records. As discussed in our January 1999 Performance and Accountability Series, HUD unveiled a multifaceted plan to identify households’ unreported and/or underreported income in fiscal year 1998. The plan includes steps to (1) further expand HUD’s computer matching efforts, (2) strengthen recertification policies and procedures, (3) ensure that HUD’s information systems have accurate and complete data on tenants, (4) institute penalties, and (5) perform monitoring and oversight functions.
OMB is also working with HUD in reducing payment errors in rental assistance due to recipient underreporting of income.

In another example, DOL is challenged to correctly identify eligible recipients for its Unemployment Insurance (UI) program, which in fiscal year 1998 provided over 7 million unemployed workers with about $20 billion in temporary financial support to facilitate re-employment. The DOL IG reported that state-administered claims offices, responsible for determining eligibility requirements, have ineffective controls to verify information provided by claimants. Claimants declaring themselves to be U.S. citizens are not screened for immigration legal status, which in some cases has resulted in improper payments. For example, ineligible individuals, including illegal aliens, were paid millions of dollars over an approximate 2-year time frame because states did not perform up-front verification of social security numbers provided by claimants. OMB has worked with DOL to secure a congressional authorization for an integrity initiative focused on reducing benefit overpayments and improving UI tax compliance.

Our analysis of GAO and IG reports also showed that, as with initial eligibility determinations, agencies’ controls are insufficient to ensure the continuing eligibility of beneficiaries for 8 of the 10 programs with eligibility determination problems. For example, SSA is mandated to perform reviews for continued eligibility for program benefits to aid in preventing fraud, waste, and abuse in the Disability Insurance (DI) program—a program to provide a continuing income base for more than 6 million disabled workers and eligible members of their families. However, acknowledged delays in performing these continuing disability reviews have undermined the effectiveness of this control.
Because SSA disburses approximately $50 billion in disability benefit payments annually, it is critical that these reviews be performed promptly; otherwise, beneficiaries who are no longer eligible for this program may inappropriately receive benefits. SSA has a multiyear plan to become current with all disability reviews by 2002.

Since 1996, the HUD IG has reported that HUD’s housing subsidy programs experience improper payments when beneficiaries’ income status changes and they do not notify housing authorities to adjust their benefits. Various legal, technical, and administrative obstacles impede housing authorities from ensuring that tenants report all income sources during the periodic determination to assess continuing eligibility. HUD has encouraged housing authorities to computer match with state agencies to detect unreported income, since housing authorities lack the legislative authority to access IRS and SSA data. However, little progress has been made in this area, since most housing authorities do not have the systems expertise to effectively implement this technique.

In May 1998, the President’s Council on Integrity and Efficiency (PCIE) issued a report to highlight the need for increased cooperation among federal agencies in sharing income/financial resource information about federal program beneficiaries in an effort to improve controls over eligibility verification. For example, the report indicated that the DOL IG’s ability to ensure eligibility of Unemployment Insurance Program recipients could be enhanced by verifying employment status of those recipients with IRS or SSA wage records. Currently, DOL’s IG must coordinate with states, requiring subpoena authority in some cases, to obtain this information. Also, governmentwide, there is no omnibus authority for efficiently and effectively obtaining access to some data. We have work ongoing on this issue and will report at a later date.
Oversight and Monitoring Controls Are Insufficient

Our analysis of GAO and IG reports showed that insufficient federal monitoring and oversight of program expenditures exist in 7 of the 17 programs where agencies reported improper payments. Effective federal monitoring assesses the quality of performance over time. It includes regular management and supervisory activities, such as periodic comparisons of expected and actual results and reconciliation of data to its source. Generally, activities such as site visits, reviews of progress and financial reports filed by contractors and grantees, and reviews of contracts and grant agreements are techniques often used by federal officials to oversee and monitor programs. The lack of sufficient oversight and monitoring controls can lead to improper payments by fostering an atmosphere that invites fraud.

For instance, both we and the HHS IG have reported that HCFA’s insufficient oversight of the Medicare program hampered its ability to prevent improper Medicare payments. To fulfill its primary mission of providing health care coverage for approximately 39 million aged individuals, Medicare pays contractors to process claims for health care services. These contractors are responsible for all aspects of claims administration and serve as HCFA’s front line of defense against fraud and abuse. Yet, vulnerabilities in contractors’ procedures for paying Medicare claims have provided a lax environment that permitted unscrupulous providers opportunities to obtain additional unjustified payments. Such schemes include billing for services never rendered, misrepresenting the nature of services provided, duplicate billing, and providing services that were not medically necessary.
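One of the schemes named above, duplicate billing, lends itself to a simple prepayment edit: hold any claim whose key fields exactly repeat an already-seen claim. The sketch below is a hypothetical illustration; the field names and claim records are assumptions, not HCFA's actual claim layout.

```python
def flag_duplicate_claims(claims):
    """Flag claims that repeat an identical provider/beneficiary/
    procedure/date combination, a minimal prepayment edit for
    duplicate billing. Returns the indices of repeated claims.
    """
    seen = set()
    duplicates = []
    for i, claim in enumerate(claims):
        key = (claim["provider_id"], claim["beneficiary_id"],
               claim["procedure_code"], claim["service_date"])
        if key in seen:
            duplicates.append(i)  # identical claim already processed
        else:
            seen.add(key)
    return duplicates

# Hypothetical claims: the third repeats the first in every key field.
claims = [
    {"provider_id": "P1", "beneficiary_id": "B7",
     "procedure_code": "99213", "service_date": "1998-03-02"},
    {"provider_id": "P1", "beneficiary_id": "B8",
     "procedure_code": "99213", "service_date": "1998-03-02"},
    {"provider_id": "P1", "beneficiary_id": "B7",
     "procedure_code": "99213", "service_date": "1998-03-02"},
]
flagged = flag_duplicate_claims(claims)
```

A production edit would be far more nuanced (legitimate repeat services exist), but the point of the sketch is that this class of improper payment is detectable before disbursement.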
Although HCFA’s most recent estimate of improper payments in its $177 billion Medicare Fee-for-Service program amounted to $12.6 billion, this estimate did not consider improper payments made as part of another $33 billion Medicare Managed Care program. Therefore, the impact of insufficient monitoring and oversight on improper payments could be more extensive than current estimates indicate. To enhance HCFA’s oversight function, the HHS IG recommended that HCFA perform risk assessments of contractor functions to identify those functions that significantly affect the improper payment of claims. This would enable HCFA to target areas and strengthen related controls.

The Agency for International Development (AID), which spent $5.2 billion in fiscal year 1998 to provide assistance to developing countries, also suffers from insufficient monitoring and oversight. We reported that AID does not have accurate information to ensure that its operations and programs are being managed cost-effectively and efficiently. In addition, AID’s IG reported weaknesses in monitoring relief and rehabilitation activities. For example, mission employees in Rwanda, who were asked to monitor relief and rehabilitation activities, did not have basic documentation they needed to monitor relief efforts, such as copies of grant agreements, progress reports, and financial status reports. The impact of this control deficiency on improper payments was not quantified.

Based on previous GAO and IG reports, insufficient oversight and monitoring is also present in other programs that have improper payments but did not report them. For example, according to the ED IG, audits performed under the Single Audit Act are ED’s principal control for ensuring that student financial assistance funds are disbursed to eligible students in proper amounts.
However, the IG noted that ED did not (1) ensure that all audit reports were received, (2) follow up on problems identified, or (3) have a systematic process in place to measure trends in misspending by grantees. The IG recommended that the department (1) complete the development of an ongoing process to identify missing/delinquent audit reports, (2) take corrective actions against delinquent audit report filers, and (3) develop a systematic methodology to quantify costs to measure the effectiveness of monitoring efforts and trends among institutions. The IG also recommended that the department use a risk management model to determine how to effectively deploy limited monitoring resources. Without this type of information, the department is unable to make cost-benefit decisions to determine whether to strengthen preventive internal controls.

Systems Deficiencies Exist

Deficiencies in agencies’ automated systems, or the lack of systems, prevent personnel from accessing reliable and timely information, which is integral to making disbursement decisions. As a result, improper payments frequently occur because agency personnel lack needed information, rely on inaccurate data, and/or do not have timely information. Agency systems deficiencies have been identified in prior GAO and IG reports for 7 of the 17 programs reporting improper payments.

For example, we reported that interstate duplicate participation in the Food Stamp Program goes undetected because there is no national system to identify participation in more than one state. While states may currently learn of some duplicate participation from SSA or through their own matching efforts with neighboring states, they rely primarily on applicants and clients to truthfully identify who resides in their households. USDA’s Food and Nutrition Service (FNS) manages the Food Stamp Program through agreements with state agencies.
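At its core, a national matching system of the kind described above would perform a cross-state comparison like the following sketch. The state rolls and participant identifiers are hypothetical, and a real system would have to handle identifier quality and privacy constraints that this illustration ignores.

```python
def interstate_duplicates(rolls_by_state):
    """Identify individuals appearing on more than one state's
    participation rolls. `rolls_by_state` maps a state to a set of
    participant identifiers (e.g., social security numbers); returns
    a mapping of each duplicate participant to the states involved.
    """
    states_by_person = {}
    for state, participants in rolls_by_state.items():
        for person in participants:
            states_by_person.setdefault(person, []).append(state)
    return {person: sorted(states)
            for person, states in states_by_person.items()
            if len(states) > 1}

# Hypothetical rolls: "222" participates in two states, as does "333".
rolls = {"VA": {"111", "222"}, "MD": {"222", "333"}, "DC": {"333"}}
dupes = interstate_duplicates(rolls)
```

The matching itself is trivial; the difficulty the report describes is institutional, since no single system today holds all the states' rolls.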
Because USDA’s most current annual estimate indicates that food stamp overissuances account for over 7 percent of the program’s $20 billion in annual benefit expenses, it is critical that action be taken to strengthen systems and related controls. FNS is considering whether to establish a central system to help ensure that individuals participating in the Food Stamp Program are not being improperly included in more than one state.

System deficiencies are also a factor for agency programs that did not disclose improper payments. For example, DOD’s payment process suffers from nonintegrated computer systems that require data to be entered more than once in different systems, sometimes manually, which increases the possibility of erroneous or incomplete data. Also, DOD contracts may have from 1 to over 1,000 accounting classification reference numbers, which involve extensive data entry, also increasing the chance for errors. As previously discussed, DOD contractors returned about $984 million between fiscal years 1994 and 1998 to the DFAS in Columbus, Ohio, as a result of duplicate and erroneous payments. We also reported that pervasive weaknesses in access controls in the Air Force vendor payment system application, including inadequate separation of duties and other internal control deficiencies, resulted in fraudulent payments and left DOD vulnerable to abuse.

While no internal control system at DOD, or any other agency, can guarantee the elimination of improper payments and the prevention of fraud, resolving DOD’s systems problems and designing effective solutions to reduce related risks are of critical importance, particularly since DOD expenditures comprise nearly half of the federal government’s discretionary spending. We made several recommendations to resolve these deficiencies, such as suggesting that DOD limit vendor payment system access levels to those appropriate for the user’s assigned duties.
DOD has a number of initiatives underway to help ensure that payments are proper—including the development of a standard core system for procurement. However, the new system is not scheduled to be fully implemented for several years.

Similar to DOD, ED also lacks a fully functional integrated database to administer over $7 billion in federal student financial aid programs. As a result, it remains vulnerable to losses because the department and schools often do not have accurate, complete, and timely information on program participants needed to effectively and efficiently operate and manage its programs. We have reported that a lack of common identifiers for students and institutions makes it difficult to track them across systems. Because each system uses different combinations of data fields to uniquely identify, access, and update student records, duplicate student records have been identified in key systems. Many of ED’s student financial aid systems were developed independently over time by multiple contractors. Consequently, ED relies on various contractors to operate its numerous systems using different hardware and software. We recommended that ED (1) develop and enforce a departmentwide systems architecture and (2) ensure that the developed systems architecture addresses systems integration, common identifiers, and data standards. The Office of Student Financial Assistance has developed its Modernization Blueprint to guide the development of an integrated financial aid delivery system, which ED officials stated depicts the first 3 years of a continuing process of modernizing its system.

Program Design Issues Contribute to Improper Payments

Often, the nature of a program can contribute to the disbursement of improper payments. Many programs have complex program regulations, and several emphasize expediting payments or have high volumes of transactions to process.
These program design issues inherently increase the potential for improper payments, and such payments are virtually impossible to eliminate entirely. However, strengthening business practices and developing targets or goals for reducing improper payments can mitigate the risk of improper payments occurring. Also, measuring progress in relation to such targets or goals may serve as a measure of the effectiveness of an agency’s improper payment reduction program. According to our analysis of GAO and IG reports, program design issues were present in programs with improper payments as illustrated in figure 4.

Some agencies currently have programs that use targets or goals to aid in reducing improper payments. One example is the Medicaid program, wherein states are responsible for determining the eligibility of beneficiaries and disbursing related federal funds. According to HHS regulations, states must have a payment error rate no greater than 3 percent due to errors in eligibility determinations or HHS may disallow medical assistance payments.

Another example is the Food Stamp Program administered by state agencies under USDA regulations. In this program, USDA may pay 50 percent of each state’s cost of administering the program. To encourage states to reduce their payment error rates, the program includes an incentive. This incentive allows USDA to increase the 50 percent reimbursement for administrative costs, by as much as 10 percent, to a total of 60 percent based on reductions in states’ error rates below 6 percent and on other conditions. State agencies also may be required to make payments to USDA if their payment error rate exceeds USDA’s national performance measure. Alternatively, state agencies may be required to invest in improving their administration of the program rather than making refunds.

Complex Program Regulations Increase Risk of Improper Payments

Program complexity inherently increases the risk of improper payments.
Previous GAO and IG reports disclosed this condition for 11 of the 17 programs with reported improper payments. For example, the complexity of state Medicaid programs provides challenges for federal oversight because of the variations in managing these programs on a state-by-state basis. Medicaid—the primary source of health care for 12 percent of the U.S. population—provides matching grants to states based on formulas encompassing states’ per capita income. States have a variety of options for program administration. They can elect to administer the program at the state or county level. Also, they can operate a fee-for-service program, a managed care program, or some combination of the two. States may also elect to operate their claims processing systems directly or contract with private vendors. Because of the size of this program—it disbursed nearly $98 billion in federal funds during fiscal year 1998—it is critical that HCFA comprehensively estimate its improper payments to assess its risk and determine appropriate actions to strengthen oversight controls. Such actions would help to ensure that HCFA is fulfilling its stewardship responsibilities for this program.

Block grants present unique challenges to providing adequate accountability for federal funds. Block grants give states flexibility to adapt funded activities to fit state and local needs and devolve major responsibilities to the states themselves to oversee these programs. Under the Temporary Assistance for Needy Families (TANF) block grant, states are authorized to collectively spend up to $16.5 billion annually to provide assistance to needy families and promote work activities. To implement TANF, states and localities determine the range of services and eligibility criteria. Many states manage the TANF, Food Stamp, and Medicaid programs through local offices, and depending upon the state, the same staff may be determining eligibility and benefit levels for all three programs.
These programs’ eligibility rules and income tests are complex and differ from one another; thus, although all three programs consider assets and household income and size, the extent to which they do so varies. A requirement that recipients notify the staff when their income changes further complicates eligibility determinations—staff must use three sets of eligibility criteria to recalculate benefit levels. Given the complexity and diversity of eligibility rules among these three programs, it is a challenge at all levels of government to adequately oversee these programs, and improper payments are sometimes made.

Speed of Service Issues, Coupled With Resource Constraints, Impact Improper Payments

Many programs’ missions emphasize speed of service. As a result, errors are more likely to occur, resulting in improper payments. We considered this condition to exist for 6 of the 17 programs with reported improper payments. It is also present in agencies that had improper payments but did not report them.

For example, IRS’ ability to successfully meet the financial management challenges it faces must be balanced with the competing demands placed on its resources by its customer service and tax law compliance responsibilities. IRS is mandated to process tax refunds within 45 days of receipt of a tax return. If the refund is not processed within this time, IRS must remit interest payments to the taxpayer. However, IRS’ systems were not designed to handle this volume of information within these time frames. Further, we reported that IRS lacks critical preventive controls, such as comparing the information on tax returns to third-party data such as W-2s (Wage and Tax Statements) in a timely manner. As a result, the agency is unable to identify and correct discrepancies between these documents that allow duplicate refunds to be issued. Although IRS has detective (post-refund) controls in place, they often occur months after the returns are submitted and processed.
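A preventive, pre-refund version of the W-2 comparison described above can be sketched as follows: hold any refund whose reported wages disagree with the employer-filed W-2 total on file. The record layouts, the tolerance, and the function name are assumptions for illustration, not IRS's actual processing logic.

```python
def prerefund_wage_check(returns, w2_wages, tolerance=1.0):
    """Release refunds whose reported wages match employer-filed W-2
    totals; hold the rest for review before any money is disbursed.

    `returns` is a list of (taxpayer_id, reported_wages, refund)
    tuples; `w2_wages` maps taxpayer_id to the W-2 total on file.
    Returns (refunds to release, taxpayer ids held for review).
    """
    release, hold = [], []
    for taxpayer_id, reported, refund in returns:
        on_file = w2_wages.get(taxpayer_id)
        if on_file is not None and abs(reported - on_file) <= tolerance:
            release.append((taxpayer_id, refund))
        else:
            hold.append(taxpayer_id)  # mismatch, or no W-2 on file
    return release, hold

# Hypothetical data: T1 matches; T2 understates wages; T3 has no W-2.
returns = [("T1", 30_000.0, 1_200.0),
           ("T2", 30_000.0, 5_000.0),
           ("T3", 18_000.0, 900.0)]
w2 = {"T1": 30_000.0, "T2": 52_000.0}
release, hold = prerefund_wage_check(returns, w2)
```

The contrast with a detective control is simply one of ordering: the same comparison run months after disbursement can only recover money, not prevent its loss.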
Insufficient preventive controls expose the government to potentially significant losses due to inappropriate disbursement of refunds. According to IRS records, IRS’ investigators identified over $17 million in alleged fraudulent refunds that had been disbursed during the first 9 months of calendar year 1998. However, the full magnitude of improper payments disbursed by IRS is unknown.

The Federal Emergency Management Agency’s (FEMA) Disaster Relief program is another example of how providing service—in this case, providing assistance to disaster victims—as quickly as possible increases the risk of improper payments being made. FEMA provided over $2.2 billion in disaster relief in fiscal year 1998 to assist individuals, families, communities, and states in responding to and recovering from disasters, such as floods, hurricanes, and tornadoes. FEMA has set demanding performance goals for its disaster assistance activities, from acting within 12 hours on requests to supply disaster victims with water, food, and shelter, to processing disaster housing applications from eligible individuals within an average of 8 days. The IG has noted that achieving these goals will require FEMA to streamline operations and apply new technology to reduce waste and duplication of benefits. In past years, the IG has identified specific cases of individuals filing false claims to obtain FEMA disaster assistance; however, the full extent of this problem has not been quantified.

We recognize that delivering services expeditiously while ensuring that the right amount is paid to the right person poses a significant challenge for many agencies. Without state-of-the-art information management systems and appropriate sharing of data, agency personnel cannot readily access needed information for payment decisions and thus are hampered from preventing improper payments.
Due to the diverse nature of these programs, consulting with congressional oversight bodies would assist agencies in establishing targets and goals to reduce improper payments without impairing service delivery, and would be an important means of obtaining agreement with the Congress as to expected results for each program.

Large Volumes of Transactions Increase Risk of Improper Payments

A significant volume of claims or payments is also a factor that contributes to improper payments, especially when compounded with resource constraints. Large volumes of claims were identified in 4 of the 17 programs with reported improper payments. For example, in a single year, Medicare contractors process over 800 million claims with limited time for processing. IRS is another agency with large volumes of activity. For instance, in fiscal year 1998, it processed 1.4 billion tax and information returns, with 88 million involving refunds. Given the high volume of transactions, inadvertent clerical errors are more likely, and they could result in improper payments.

Potential Year 2000 Problem Increases the Need for Effective Internal Controls

While implementing effective internal controls is and will be an ongoing concern, the Year 2000 problem presents a unique challenge to ensuring effective payment controls. Many of the federal government’s computer systems were originally designed and developed 20 to 25 years ago, are poorly documented, and use a wide variety of computer languages, many of which are obsolete. Some applications include thousands, tens of thousands, and even millions of lines of code, each of which must be examined for date-format problems. Moreover, federal programs are also vulnerable to Year 2000 risks stemming from items outside of their control, such as the Year 2000 compliance of critical business partners.
Unless corrected, Year 2000 failures may have a costly, widespread impact on federal, state, and local governments—including the extent to which improper payments are made. These many risks increase the possibility that Year 2000-induced failures could result in an increased number and amount of improper payments as agencies attempt to sustain their core business functions. Further, nonexistent or ineffective internal controls, as previously discussed, increase this risk. Accordingly, the federal government could potentially distribute additional improper payments. While the Year 2000 problem increases the risk of improper payments, as we reported earlier this year, it also provides the opportunity to institutionalize valuable lessons, such as the importance of reliable processes and reasonable controls. Our Year 2000 enterprise readiness guide calls on agencies to develop and implement policies, guidelines, and procedures in such critical areas as configuration management, quality assurance, risk management, project scheduling and tracking, and performance metrics. To address the Year 2000 problem, several agencies have implemented such policies. For example, HCFA has implemented policies and procedures related to configuration management, quality assurance, risk management, project scheduling and tracking, and performance metrics for its internal systems. Most Agency Performance Plans Do Not Comprehensively Address Improper Payments As previously discussed, nine agencies acknowledged making improper payments in 17 programs for fiscal year 1998. These agencies’ fiscal year 2000 performance plans, under the Results Act, included performance goals and strategies that address key internal control weaknesses in four programs. For nine programs, the respective agencies did not comprehensively address improper payments in their plans. Improper payments were not addressed at all for the remaining four programs. 
Appendix III contains our assessment of the extent to which each agency reporting improper payments addressed them in its performance plan. For these and other agencies at risk, the first step in addressing improper payments is to identify the magnitude of these payments. Agencies can then analyze the characteristics of these cases to identify the circumstances and root causes leading to the improper payments. Using this analysis, agencies can make cost-benefit decisions on systems and other internal control improvements to mitigate the risk of improper payments and implement performance goals to manage for results. The use of appropriate performance goals relating to improper payments can focus management attention on reducing such payments. For example, HHS has reported a national estimate of improper payments in its Medicare Fee-for-Service benefits since fiscal year 1996. For fiscal year 1998, HHS reported estimated improper payments of $12.6 billion, or more than 7 percent, in Medicare Fee-for-Service benefits—down from about $20 billion, or 11 percent, reported for fiscal year 1997 and $23.2 billion, or 14 percent, for fiscal year 1996. As discussed earlier, HCFA would also benefit from identifying improper payments and establishing related performance goals for its $98 billion Medicaid program. Analysis of improper Medicare payments, as part of the financial statement preparation and audit process, helped lead to the implementation of several initiatives intended to identify and reduce improper payments. These initiatives included prepayment reviews of selected claims, an increase in the overall level of prepay and postpay claims reviews, and medical reviews of providers identified as having nonstandard billing practices. Annual estimates of improper payments in future audited financial statements will provide information on the progress of these initiatives. 
Without a systematic measurement of the extent of the problem, management cannot determine (1) if the problem is significant enough to require corrective action, (2) how much to invest in internal controls, or (3) the success of efforts implemented to reduce improper payments. In fiscal year 1998, VA piloted a new measurement system to determine the accuracy of veterans’ benefit payments—the Systematic Technical Accuracy Review (STAR) system. Using the STAR system, the Veterans Benefits Administration (VBA)—a component of VA—determined that its regional offices were accurate only 64 percent of the time when making initial benefit decisions. This measure indicated that VBA should focus additional attention on ensuring that correct decisions are made the first time. Using the 64 percent as a baseline, VBA established a goal of achieving a 93 percent accuracy rate by fiscal year 2004. Although it is too early to determine whether VBA’s efforts to meet its accuracy improvement goal will be successful, the new STAR system represents an important step forward by VBA in identifying and correcting the causes of errors and having a baseline against which to measure results and progress. Currently, there is no governmentwide guidance on how to develop mechanisms for identifying and estimating improper payments, which would help agencies to identify whether a need exists to address improper payments in their annual strategic and performance planning processes. Developing such mechanisms would enable each agency’s management to better understand the full extent of its problem. With these mechanisms in place, appropriate cost-beneficial corrective actions could be designed and implemented. Although no governmentwide guidance exists for identifying and estimating improper payments, the CFO and Results Acts provide a framework for OMB and agencies to report on efforts to minimize improper payments. 
Under the CFO Act, OMB is required to prepare and annually revise a governmentwide 5-year financial management plan and status report that discusses the activities the executive branch has undertaken to improve financial management in the federal government. Each agency CFO is responsible for developing annual agency-specific plans to support the governmentwide 5-year financial management plan. The CFO Act also requires OMB to provide the governmentwide 5-year plan and status report to appropriate congressional committees. This reporting process keeps the appropriate congressional committees informed of agencies’ efforts to improve accountability and stewardship over federal funds. As discussed earlier, under the Results Act, agencies are required to prepare strategic plans that identify goals and objectives at least every 3 years. Complementing the strategic plans are annual performance plans that set annual goals with measurable target levels of performance, and annual performance reports that compare actual performance to the annual goals. The Results Act also requires that OMB annually prepare a governmentwide performance plan as a part of the President’s budget. The agency performance plans are the foundation for OMB’s governmentwide plan. The framework afforded by the CFO and Results Acts suggests that agencies have a variety of mechanisms for reporting on improper payments, depending upon the magnitude or significance of those payments. OMB calls for mission-critical management problems—those which prospectively and realistically threaten achievement of major program goals—to be discussed in agencies’ strategic plans and also in their annual performance plans under the Results Act. In our view, improper payments can reasonably be considered mission-critical problems for certain programs, including the 17 programs with reported improper payments discussed in this report. 
For example, for programs providing financial assistance benefits, such as the Food Stamp Program, maintaining integrity and accuracy in the payment of benefits is critical to the missions of the programs. For those agencies where these payments are not deemed mission critical, an appropriate vehicle for managing improper payments would be agency 5-year financial management plans developed under the auspices of the CFO Act, or other vehicles such as action plans. Figure 5 shows how the CFO and Results Acts provide a broad structure under which agencies can report the status of their efforts to reduce improper payments. We evaluated the extent to which agencies that reported improper payments in their financial statement reports also addressed improper payments in their fiscal year 2000 performance plans. HHS, SSA, VA, and OPM are four agencies that comprehensively addressed improper payments using this framework. These agencies’ performance plans included both performance goals and strategies for minimizing improper payments for the Medicare Fee-for-Service, Old Age and Survivors Insurance, Veterans Benefits, and Federal Employees’ Life Insurance programs. As shown in figure 6, these programs represent 24 percent of the 17 programs that reported improper payments but account for 61 percent of the total program dollars. In contrast, improper payments for 7 of the 17 programs (42 percent) either were not addressed or were addressed only cursorily in the agencies’ performance plans, and those for the remaining 34 percent were addressed in a moderate (i.e., less than comprehensive) manner. Some of the nine agencies we reviewed may have addressed these issues in their 5-year financial management, component, or other agency action plans. However, only HHS’ performance plan contained a reference or “pointer” to another plan that addressed improper payments. 
Because some agencies do not appear to be addressing improper payments in their performance plans, they may not consider the prevention of improper payments a priority or focus adequate attention on this issue. OMB Circular A-11, Part 2, which serves as implementing guidance for agencies in preparing and submitting Results Act strategic and performance plans, states that agency plans should include goals for resolving mission-critical management problems. Circular A-11 also directs agencies to describe actions taken to address and resolve these issues in their performance plans by developing performance goals and discussing strategies. We have also advocated that agencies address mission-critical management problems in their performance plans by developing performance goals and discussing strategies. Our analysis indicates that additional guidance on improper payments may be helpful to agency managers. Without an appropriate methodology in place for estimating and reporting improper payments, the Congress, agency managers, and the public are not aware of the full extent of this problem. As a result, agency managers cannot effectively use performance goals for managing improper payments. Conclusions Although reported amounts of improper payments totaled $19.1 billion in fiscal year 1998, many agencies are not identifying, estimating, and reporting the nature and extent of improper payments. As a result, the magnitude is largely unknown. Based on previous audit reports, inadequate internal control and program design issues are the primary causes of improper payments for numerous federal programs. Compounding this problem, some agencies have not recognized the need to address and resolve mission-critical improper payment problems by discussing steps taken in their strategic plans and incorporating appropriate goals into their performance plans. Economic and demographic projections indicate that federal expenditures in certain programs will grow significantly. 
With billions of dollars at risk, agencies will need to continually and closely safeguard those resources entrusted to them and assign a high priority to reducing fraud, waste, and abuse. A first step for some agencies will involve developing mechanisms to identify, estimate, and report the nature and extent of improper payments annually. Without this fundamental knowledge, agencies cannot be fully informed about the magnitude, trends, and types of payment errors occurring within their programs. As a result, most agencies cannot make informed cost-benefit decisions about strengthening their internal controls to minimize future improper payments or effectively develop goals and strategies to reduce them. Consulting with congressional oversight committees on the development of these goals and strategies is also important to obtaining consensus on how to address this multibillion dollar problem. Recommendations To assist agencies in estimating and managing improper payments, we recommend that the Director of the Office of Management and Budget, through the Deputy Director for Management and OMB’s Office of Federal Financial Management, within the framework of the CFO and Results Acts: Develop and issue guidance to executive agencies to assist them in (1) developing and implementing a methodology for annually estimating and reporting improper payments for major federal programs and (2) developing goals and strategies to address improper payments in their annual performance plans. Require agencies to (1) include a description of steps being taken to address improper payments in their strategic and annual performance plans when the level of improper payments is mission critical and (2) consult with congressional oversight committees, as appropriate, on the projected target levels and goals for estimating and reducing improper payments, as presented in the agencies’ annual performance plan. 
OMB Comments and Our Evaluation In commenting on a draft of this report, OMB agreed that its focus on improper payments should be expanded. OMB agreed with our first recommendation calling for guidance to assist agencies in developing and implementing a methodology for annually estimating and reporting improper payments for major federal programs and developing goals and strategies to address improper payments in their annual performance plans. Regarding the first element of our second recommendation, OMB expressed concern that it may be inappropriate for agency strategic plans to always include a general goal or objective for reducing improper payments. OMB said it would expect agencies to include goals or objectives in their strategic plans if the level of improper payments was determined to be mission critical. We agree. As stated in our report, it was not our intention that goals or objectives for reducing improper payments be universally included in agency strategic plans. Specifically, within the framework of the CFO and Results Acts, and as shown in figure 5, judgment needs to be exercised so that only mission-critical management problems are addressed in agency strategic and performance plans. To avoid any misconception regarding this issue, we have clarified our recommendation accordingly. With regard to our recommendation to consult with the Congress on specific goals and targets for improper payments, OMB noted that congressional consultation is required by the Results Act and said that agencies interact with the Congress throughout the year as part of the normal appropriations and oversight processes. OMB stated that this level of interaction provides the opportunity for both the agency and the Congress to raise and discuss improper payment issues and should be adequate. In this regard, it will be important that these interactions take place. 
As discussed in our report, several agencies have not reported improper payments in their performance plans even when they could reasonably be considered mission critical. OMB’s comments are reprinted in appendix IV. OMB also provided informal technical comments, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce contents of this report earlier, we will not distribute it until 30 days from its date. Then, we will send copies to Senator Joseph Lieberman, Ranking Minority Member, Senate Committee on Governmental Affairs; Representative Dan Burton, Chairman, and Representative Henry A. Waxman, Ranking Minority Member, House Committee on Government Reform; Senator Pete V. Domenici, Chairman, and Senator Frank R. Lautenberg, Ranking Minority Member, Senate Committee on the Budget; Representative John R. Kasich, Chairman, and Representative John M. Spratt, Jr., Ranking Minority Member, House Committee on the Budget. We will also send copies to the Honorable Jacob J. Lew, Director of the Office of Management and Budget; the heads of the 24 CFO agencies; and respective agency CFOs and Inspectors General. Copies will also be made available to others upon request. This report was prepared under the direction of Gloria L. Jarmon, Director, Health, Education, and Human Services Accounting and Financial Management, who may be reached at (202) 512-4476 or by e-mail at jarmong.aimd@gao.gov if you or your staff have any questions. Staff contacts and other key contributors to this letter are listed in appendix V. Executive Departments and Agencies Covered by the CFO Act Agencies/Programs/Activities With Reported Improper Payments Included in the Agencies’ Fiscal Year 1998 Financial Statements Agency for International Development The U.S. Agency for International Development (AID) was established in 1961 pursuant to the Foreign Assistance Act of 1961. AID manages U.S. 
foreign economic and humanitarian assistance programs and helps countries recover from disaster, escape poverty, and become more democratic. AID’s mission is to contribute to U.S. national interests by supporting the people of developing and transitional countries in their efforts to achieve enduring economic and social progress and to participate more fully in resolving the problems of their countries and the world. In fiscal year 1998, AID’s total outlays for its various programs were $5.2 billion. Department of Agriculture Federal Crop Insurance Corporation The Federal Crop Insurance Program was established in 1938 by the Federal Crop Insurance Act to protect crop farmers from unavoidable risks associated with adverse weather, plant diseases, and insect infestations. The USDA Risk Management Agency administers the Federal Crop Insurance Program through the Federal Crop Insurance Corporation (FCIC), a government-owned corporation. The federal government retains a portion of the insurance risk for all policies and pays private insurance companies a fee that is intended to reimburse them for the reasonable expenses associated with selling and servicing crop insurance to farmers. In fiscal year 1998, FCIC had over 1 million crop insurance policies in force, with total premiums of $1.9 billion. Food Stamp Program The Food Stamp Program (FSP), enacted by the Food Stamp Act of 1964, is the nation’s principal food assistance program. FSP enables low-income households to obtain a more nutritious diet by issuing monthly allotments of coupons or electronic benefits redeemable for food at retail stores. Eligibility and allotment amounts are based on household size and income as well as on assets, housing costs, work requirements, and other factors. In fiscal year 1998, 19.8 million individuals per month were provided food stamps for total annual program costs of $20.4 billion. 
Administration for Children and Families The Administration for Children and Families (ACF), a division of the Department of Health and Human Services, is responsible for almost 50 programs that promote the economic and social well-being of families, children, individuals, and communities. Three programs account for most of ACF’s spending: TANF, Foster Care, and Head Start. Temporary Assistance for Needy Families (TANF) block grants were created by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 to replace the Aid to Families with Dependent Children (AFDC) Program. Specified goals of TANF include providing assistance to needy families and ending the dependence of needy parents on government benefits by promoting job preparation, work, and marriage. Over $16 billion in federal assistance is available to states each year through 2002 to fund the TANF program. While states have flexibility over the design and implementation of their welfare programs, they must impose several federal requirements, including work requirements and time limits on aid. Foster Care Foster Care was originally created in 1961 under title IV of the Social Security Act. The Foster Care Program is a permanently authorized entitlement program that provides matching funds to states for maintenance of eligible children in foster care homes, private nonprofit child care facilities, or public child care institutions. In fiscal year 1998, over half a million children were supported by the Foster Care Program with total federal outlays of $4.5 billion. Head Start The Head Start Program was created in 1965 as part of the war on poverty to improve the social competence of children in low-income families. To support the social competence goal, Head Start programs deliver a broad range of services to children. These services include educational, medical, nutritional, mental health, dental, and social services. 
Head Start regulations require that at least 90 percent of the children enrolled in each program be from low-income families. ACF awards Head Start grants directly to local grantees who operate programs in all 50 states, the District of Columbia, Puerto Rico, and the U.S. territories. In fiscal year 1998, over 800,000 children participated in Head Start programs, and federal outlays totaled $3.3 billion. Health Care Financing Administration Medicaid Medicaid, established in 1965 by Title XIX of the Social Security Act, is a federal-state matching entitlement program that pays for medical assistance for certain vulnerable and needy individuals and families with low incomes and resources. In 1998, it provided health care assistance to 33 million persons, at a cost of about $98 billion to the federal government. The Health Care Financing Administration (HCFA) is responsible for the overall management of Medicaid; however, each state is responsible for managing its own program. Within broad federal statutory and regulatory guidelines, each state: (1) establishes its own eligibility standards, (2) determines the types and range of services, (3) sets the rate of payment for services, and (4) administers its own program. Medicare Fee-for-Service Authorized by Title XVIII of the Social Security Act in 1965, Medicare is the nation’s largest health insurance program, covering an estimated 39.6 million elderly and disabled at a cost of about $210 billion annually. The Medicare Program is administered by HCFA. While some beneficiaries participate in Medicare’s $33 billion Managed Care program, most receive their health care from the $177 billion Fee-for-Service portion of Medicare. HCFA contracts with over 40 insurance companies to process fee-for- service claims. 
Although contractors are the program’s front line of defense against fraud, abuse, and erroneous payments, HCFA is responsible for overseeing these contractors and for assuring that claims are paid accurately and efficiently. Department of Housing and Urban Development Housing Subsidy Programs Housing and Urban Development’s (HUD) Public Housing and Section 8 programs were established by the U.S. Housing Act of 1937 and the Housing and Community Development Act of 1974 (revising Section 8 of the U.S. Housing Act of 1937), respectively. These programs help eligible low-income families obtain decent, safe, and sanitary housing by paying a portion of their rent. HUD’s Public Housing Program is operated by approximately 3,200 public housing authorities (PHA), which operate under state and local laws and are funded by HUD. Public housing provides affordable shelter for low-income families composed of citizens or eligible immigrants. Through the operating subsidy program, HUD provides an annual subsidy to help PHAs pay some of the cost of operating and maintaining public housing units. In fiscal year 1998, more than 1.2 million public housing units were under management, with a net cost of about $3.1 billion. The Section 8 programs assist low-income families. Residents in subsidized units generally pay 30 percent of their income for rent, and HUD pays the balance. Section 8 has two assistance programs: project-based and tenant-based assistance. Tenant-based assistance is linked to specific individuals; project-based assistance is linked to housing units. In fiscal year 1998, the Section 8 programs assisted approximately 3 million households and had net costs of $15.5 billion. Department of Labor Federal Employees’ Compensation Act Enacted in 1916, the Federal Employees’ Compensation Act (FECA) provides workers’ compensation coverage to federal employees for work-related injuries or disease. FECA, administered by the U.S. 
Department of Labor, authorizes the government to compensate federal employees when they are temporarily or permanently disabled due to injury or disease sustained while performing their duties. In fiscal year 1998, the Department of Labor received 165,000 federal injury reports and issued benefit payments of more than $1.9 billion. Unemployment Insurance Unemployment Insurance, enacted by Title IX of the Social Security Act of 1935, as amended, is the nation’s response to the adverse effects of unemployment. The program’s mission is to provide unemployed workers with temporary income support and to facilitate re-employment. By doing so, the program helps stabilize the economy. In fiscal year 1998, over 7 million unemployed workers received approximately $20 billion from the program. The program is administered by the states through a network of local claims offices and central offices in each state. These offices also are responsible for the collection of taxes from all subject employers. The program is financed through collections of taxes from employers by both the federal and state governments. In addition, each state is responsible for determining eligibility requirements and levels of compensation, including the length of time benefits are paid. Office of Personnel Management Federal Employees’ Health Benefits Program The Federal Employees’ Health Benefits Program (FEHBP) was established by the Federal Employees Health Benefits Act of 1959 for the purpose of making basic hospital and medical protection available to active federal employees, annuitants, and their families through plans offered by carriers participating in the FEHBP. In fiscal year 1998, there were 2.3 million federal civilian employees and 1.8 million annuitants enrolled in the FEHBP. In total, FEHBP covers about 9 million individuals. Annual premiums are over $16.3 billion, with the government paying up to 75 percent of the premiums and employees paying the remaining portion. 
Federal Employees’ Group Life Insurance Program The Federal Employees’ Group Life Insurance (FEGLI) Program was established in 1954 by the Federal Employees’ Group Life Insurance Act to provide federal employees and annuitants with group term life insurance. The program is administered pursuant to a contract with a life insurance company. In fiscal year 1998, FEGLI covered 90 percent of eligible employees and annuitants, as well as many of their family members, and had $1.6 billion in net outlays. Retirement Program The Retirement Program is a defined benefit retirement plan and includes two components: (1) the Civil Service Retirement System (CSRS), created in 1920 by the Civil Service Retirement Act and (2) the Federal Employees’ Retirement System (FERS), established in 1986 by the Federal Employees’ Retirement System Act. CSRS is a stand-alone retirement plan intended to pay benefits for long-service federal employees. CSRS covers most federal employees hired before 1984 and is closed to new members. FERS covers most employees first hired after December 31, 1983, and provides benefits to the survivors of deceased FERS annuitants and employees. Using Social Security as a base, FERS provides an additional defined benefit and a voluntary thrift savings plan. OPM administers only the defined benefit component of FERS. In fiscal year 1998, OPM had over $43 billion in outlays, with over 2 million annuitants in CSRS, and approximately 100,000 in FERS. Social Security Administration Old Age and Survivors Insurance In 1935, the Social Security Act established a program to help protect aged Americans against the loss of income due to retirement. The 1939 amendments added protection for survivors of deceased retirees by creating the Old Age and Survivors Insurance (OASI) Program. Employee and employer payroll tax contributions under the Federal Insurance Contributions Act (FICA) and the Self-Employment Contributions Act (SECA) finance this program. 
Administration of the program lies with the Social Security Administration (SSA). In fiscal year 1998, SSA directly disbursed $324 billion to approximately 38 million beneficiaries under this program. Disability Insurance In 1956, the Social Security Act was amended to protect disabled workers against loss of income due to disability through creation of the Disability Insurance (DI) Program. In 1958, amendments to the act expanded benefits to include dependents of disabled workers. As a result, the DI Program provides a continuing income base for eligible workers who have qualifying disabilities and for eligible members of their families before those workers reach retirement age. As authorized by the act, workers are considered disabled if they have severe physical or mental conditions that prevent them from engaging in substantial gainful activity. The condition must be expected to last for a continuous period of at least 12 months or to result in death. Once DI beneficiaries reach age 65, they and their families are converted to the OASI Program. The DI Program is financed by employee and employer payroll tax contributions under FICA and SECA. SSA, using assistance from 54 state Disability Determination Services to make required medical and vocational decisions, is responsible for administering the DI Program. In fiscal year 1998, SSA disbursed approximately $48 billion in monthly cash payments to about 6 million beneficiaries. Supplemental Security Income In 1972, amendments to the Social Security Act established the Supplemental Security Income (SSI) Program. SSI provides cash assistance to financially needy individuals who are aged, blind, or disabled. General tax revenues finance this program. Many states supplement the federal SSI payment, choosing either to have SSA administer the supplement or to pay it directly. In fiscal year 1998, SSA disbursed approximately $27 billion in federal SSI payments to about 7 million recipients. 
Also, SSA disbursed approximately $3 billion in state supplemental payments during fiscal year 1998. Department of the Treasury Customs Drawbacks/Refunds Refunds are payments made to importers/exporters for overpayments or duplicate payments of duties, taxes, and fees when goods are originally imported into the United States. A drawback is a refund of duties and/or excise taxes already paid to Customs on imported goods which were either (1) never entered into the commerce of the United States because they were either re-exported or destroyed under Customs’ supervision, or (2) used (or substituted) in a process to manufacture articles which were exported from the United States or destroyed under Customs’ supervision without being used. The Congress initially passed legislation authorizing drawbacks in 1789, citing the need to facilitate American commerce and manufacturing. Drawback privileges are provided by the Tariff Act of 1930. The rationale for drawbacks has always been to encourage American commerce or manufacturing, or both. It permits the American manufacturer to compete in foreign markets without the handicap of including in the costs, and consequently in the sales price, the duty paid on imported merchandise. Drawbacks are generally processed in Customs’ port offices across the nation. In fiscal year 1998, net outlays related to drawbacks and refunds were over $1.3 billion. Department of Veterans Affairs In 1930, the Congress consolidated and coordinated various veterans’ programs with the establishment of the Veterans Administration. The Department of Veterans Affairs (VA) was established as a Cabinet level department in March 1989. VA’s mission is to administer the laws providing benefits and other services to veterans and their dependents and the beneficiaries of veterans. The Veterans Benefits Administration (VBA) administers VA’s nonmedical programs, which provide financial and other assistance to veterans, their dependents, and survivors. 
The compensation and pension (C&P) program is VBA’s largest, and in fiscal year 1998, VA paid approximately $20 billion in C&P benefits to more than 3 million veterans and their survivors. Assessment of Performance Plans for Those Agencies Reporting Improper Payments None: Agency performance plan does not address the issue of improper payments for this program. Cursory: Agency performance plan addresses the need to minimize improper payments but does not provide any substantive performance goals or strategies to minimize improper payments in this program. Moderate: Agency performance plan has either performance goals to address improper payments or strategies to minimize improper payments in this program, but not both, or lacks a comprehensive approach. Comprehensive: Agency performance plan has performance goals and strategies that address key internal controls to minimize improper payments in this program. Comments From the Office of Management and Budget GAO Contact and Staff Acknowledgements GAO Contact Acknowledgements Staff making key contributions to this report are Kwabena Ansong, Kay Daly, Margaret Davis, Marie Kinney, Meg Mills, and Ruth Sessions, as well as many other staff throughout GAO who contributed to selected sections of this report. Related GAO Products The following lists prior GAO products dealing with improper payments or overpayments, dating back to fiscal year 1996, as requested by the Chairman. Crop Insurance: USDA Needs a Better Estimate of Improper Payments to Strengthen Controls Over Claims (GAO/RCED-99-266, September 22, 1999). Medicare: HCFA Oversight Allows Contractor Improprieties to Continue Undetected (GAO/T-HEHS/OSI-99-174, September 9, 1999). Food Assistance: Efforts to Control Fraud and Abuse in the WIC Program Can Be Strengthened (GAO/RCED-99-224, August 30, 1999). DOD Information Security: Serious Weaknesses Continue to Place Defense Operations at Risk (GAO/AIMD-99-107, August 26, 1999). 
Defense Health Care: Claims Processing Improvements Are Under Way but Further Enhancements Are Needed (GAO/HEHS-99-128, August 23, 1999). Medicare Fraud and Abuse: DOJ’s Implementation of False Claims Act Guidance in National Initiatives Varies (GAO/HEHS-99-170, August 6, 1999). Defense Health Care: Improvements Needed to Reduce Vulnerability to Fraud and Abuse (GAO/HEHS-99-142, July 30, 1999). Medicare Contractors: Despite Its Efforts, HCFA Cannot Ensure Their Effectiveness or Integrity (GAO/HEHS-99-115, July 14, 1999). Medicare: HCFA Should Exercise Greater Oversight of Claims Administration Contractors (GAO/T-HEHS/OSI-99-167, July 14, 1999). Department of Energy: Need to Address Longstanding Management Weaknesses (GAO/T-RCED-99-255, July 13, 1999). Food Stamp Program: Households Collect Benefits for Persons Disqualified for Intentional Program Violations (GAO/RCED-99-180, July 8, 1999). Recovery Auditing: Reducing Overpayments, Achieving Accountability, and the Government Waste Corrections Act of 1999 (GAO/T-NSIAD-99-213, June 29, 1999). Food Stamp Program: Relatively Few Improper Benefits Provided to Individuals in Long-Term Care Facilities (GAO/RCED-99-151, June 4, 1999). Medicare Subvention Demonstration: DOD Data Limitations May Require Adjustments and Raise Broader Concerns (GAO/HEHS-99-39, May 28, 1999). Medicare: Early Evidence of Compliance Program Effectiveness Is Inconclusive (GAO/HEHS-99-59, April 15, 1999). Auditing the Nation’s Finances: Fiscal Year 1998 Results Highlight Major Issues Needing Resolution (GAO-T-AIMD-99-131, March 31, 1999). Financial Audit: Fiscal Year 1998 Financial Report of the U.S. Government (GAO/AIMD-99-130, March 31, 1999). Contract Management: DOD is Examining Opportunities to Further Use Recovery Auditing (GAO/NSIAD-99-78, March 17, 1999). Veterans’ Benefits Claims: Further Improvements Needed in Claims- Processing Accuracy (GAO/HEHS-99-35, March 1, 1999). 
Medicare Managed Care: Better Risk Adjustment Expected to Reduce Excess Payments Overall While Making Them Fairer to Individual Plans (GAO/T-HEHS-99-72, February 25, 1999). Direct Student Loans: Overpayments During the Department of Education’s Conversion to a New Payment System (GAO/HEHS-99-44R, February 17, 1999). HCFA Management: Agency Faces Multiple Challenges in Managing Its Transition to the 21st Century (GAO/T-HEHS-99-58, February 11, 1999). Social Security: What the President’s Proposal Does and Does Not Do (GAO/T-AIMD/HEHS-99-76, February 9, 1999). Supplemental Security Income: Long-Standing Issues Require More Active Management and Program Oversight (GAO/T-HEHS-99-51, February 3, 1999). Medicare Home Health Agencies: Role of Surety Bonds in Increasing Scrutiny and Reducing Overpayments (GAO/HEHS-99-23, January 29, 1999). Financial Management: Problems in Accounting for Navy Transactions Impair Funds Control and Financial Reporting (GAO/AIMD-99-19, January 19, 1999). Supplemental Security Income: Increased Receipt and Reporting of Child Support Could Reduce Payments (GAO/HEHS-99-11, January 12, 1999). Internal Controls: Reporting Air Force Vendor Payment System Weaknesses Under the Federal Managers’ Financial Integrity Act (GAO/AIMD-99-33R, December 21, 1998). Contract Management: Recovery Auditing Offers Potential to Identify Overpayments (GAO/NSIAD-99-12, December 3, 1998). Student Loans: Improvements in the Direct Loan Consolidation Process (GAO/HEHS-99-19R, November 10, 1998). Internal Revenue Service: Immediate and Long-Term Actions Needed to Improve Financial Management (GAO/AIMD-99-16, October 30, 1998). DOD Procurement Fraud: Fraud by an Air Force Contracting Official (GAO/OSI-98-15, September 23, 1998). Year 2000 Computing Crisis: Progress Made at Department of Labor, But Key Systems at Risk (GAO/T-AIMD-98-303, September 17, 1998). Fraud, Waste, and Abuse: The Cost of Mismanagement (GAO/AIMD- 98-265R, September 14, 1998). 
Supplemental Security Income: Action Needed on Long-Standing Problems Affecting Program Integrity (GAO/HEHS-98-158, September 14, 1998). Welfare Reform: Early Fiscal Effects of the TANF Block Grant (GAO/AIMD- 98-137, August 18, 1998). Food Assistance: Computerized Information Matching Could Reduce Fraud and Abuse in the Food Stamp Program (GAO/T-RCED-98-254, August 5, 1998). Earned Income Credit: IRS’ Tax Year 1994 Compliance Study and Recent Efforts to Reduce Noncompliance (GAO/GGD-98-150, July 28, 1998). Section 8 Project-Based Rental Assistance: HUD’s Processes for Evaluating and Using Unexpended Balances Are Ineffective (GAO/RCED-98-202, July 22, 1998). Medicare: Application of the False Claims Act to Hospital Billing Practices (GAO/HEHS-98-195, July 10, 1998). Welfare Reform: States Are Restructuring Programs to Reduce Welfare Dependence (GAO/HEHS-98-109, June 17, 1998). Head Start: Challenges Faced in Demonstrating Program Results and Responding to Societal Changes (GAO/T-HEHS-98-183, June 9, 1998). Medicare: Health Care Fraud and Abuse Control Program Financial Report for Fiscal Year 1997 (GAO/AIMD-98-157, June 1, 1998). Medicare Billing: Commercial System Will Allow HCFA to Save Money, Combat Fraud and Abuse (GAO/T-AIMD-98-166, May 19, 1998). Computer Security: Pervasive, Serious Weaknesses Jeopardize State Department Operations (GAO/AIMD-98-145, May 18, 1998). Medicare: Need to Overhaul Costly Payment System for Medical Equipment and Supplies (GAO/HEHS-98-102, May 12, 1998). Social Security: Better Payment Controls for Benefit Reduction Provisions Could Save Millions (GAO/HEHS-98-76, April 30, 1998). Food Assistance: Observations on Reducing Fraud and Abuse in the Food Stamp Program (GAO/T-RCED-98-167, April 23, 1998). Direct Student Loans: Efforts to Resolve Lender’s Problems With Consolidations are Under Way (GAO/HEHS-98-103, April 21, 1998). 
Supplemental Security Income: Organizational Culture and Management Inattention Place Program at Continued Risk (GAO/T-HEHS-98-146, April 21, 1998). Department of Defense: Financial Audits Highlight Continuing Challenges to Correct Serious Financial Management Problems (GAO/T-AIMD/NSIAD- 98-158, April 16, 1998). Medicare Billing: Commercial Systems Could Save Hundreds of Millions Annually (GAO/AIMD-98-91, April 15, 1998). Internal Control: Essential for Safeguarding Assets, Compliance with Laws and Regulations, and Reliable Financial Reporting (GAO/T-AIMD-98-125, April 1, 1998). Financial Audit: 1997 Consolidated Financial Statements of the United States Government (GAO/AIMD-98-127, March 31, 1998). Head Start Programs: Participant Characteristics, Services, and Funding (GAO/HEHS-98-65, March 31, 1998). Supplemental Security Income: Opportunities Exist for Improving Payment Accuracy (GAO/HEHS-98-75, March 27, 1998). Food Stamp Program: Information on Trafficking Food Stamp Benefits (GAO/RCED-98-77, March 26, 1998). Medicare Home Health Benefit: Congressional and HCFA Actions Begin to Address Chronic Oversight Weaknesses (GAO/T-HEHS-98-117, March 19, 1998). CFO Act Financial Audits: Programmatic and Budgetary Implications of Navy Financial Data Deficiencies (GAO/AIMD-98-56, March 16, 1998). Medicare: HCFA Can Improve Methods for Revising Physician Practice Expense Payments (GAO/HEHS-98-79, February 27, 1998). Financial Audit: Examination of IRS’ Fiscal Year 1997 Custodial Financial Statements (GAO/AIMD-98-77, February 26, 1998). Medicaid: Early Implications of Welfare Reform for Beneficiaries and States (GAO/HEHS-98-62, February 24, 1998). Food Stamp Overpayments: Thousands of Deceased Individuals Are Being Counted as Household Members (GAO/RCED-98-53, February 11, 1998). Financial Management: Seven DOD Initiatives That Affect the Contract Payment Process (GAO/AIMD-98-40, January 30, 1998). 
Managing for Results: The Statutory Framework for Performance-Based Management and Accountability (GAO/GGD/AIMD-98-52, January 28, 1998). Illegal Aliens: Extent of Welfare Benefits Received on Behalf of U.S. Citizen Children (GAO/HEHS-98-30, November 19, 1997). Inspectors General: Contracting Actions By Treasury Office of Inspector General (GAO/OSI-98-1, October 31, 1997). Food Assistance: Reducing Food Stamp Benefit Overpayments and Trafficking (GAO/T-RCED-98-37, October 30, 1997). DOD Procurement: Funds Returned by Defense Contractors (GAO/NSIAD- 98-46R, October 28, 1997). Medicare Home Health: Differences in Service Use by HMO and Fee-for- Service Providers (GAO/HEHS-98-8, October 21, 1997). Disaster Assistance: Guidance Needed for FEMA’s “Fast Track” Housing Assistance Process (GAO/RCED-98-1, October 17, 1997). VA Medical Care: Increasing Recoveries From Private Health Insurers Will Prove Difficult (GAO/HEHS-98-4, October 17, 1997). Medicare: Recent Legislation to Minimize Fraud and Abuse Requires Effective Implementation (GAO/T-HEHS-98-9, October 9, 1997). Budget Issues: Budgeting For Federal Insurance Programs (GAO/AIMD- 97-16, September 30, 1997). Medicare Automated Systems: Weaknesses in Managing Information Technology Hinder Fight Against Fraud and Abuse (GAO/T-AIMD-97-176, September 29, 1997). Food Assistance: A Variety of Practices May Lower the Costs of WIC (GAO/RCED-97-225, September 17, 1997). Medicare: Control Over Fraud and Abuse Remains Elusive (GAO/T-HEHS- 97-165, June 26, 1997). Supplemental Security Income: Timely Data Could Prevent Millions in Overpayments to Nursing Home Residents (GAO/HEHS-97-62, June 3, 1997). DOD High-Risk Areas: Eliminating Underlying Causes Will Avoid Billions of Dollars in Waste (GAO/T-NSIAD/AIMD-97-143, May 1, 1997). Financial Management: Improved Reporting Needed for DOD Problem Disbursements (GAO/AIMD-97-59, May 1, 1997). 
Medicare HMOs: HCFA Can Promptly Eliminate Hundreds of Millions in Excess Payments (GAO/HEHS-97-16, April 25, 1997). Social Security Disability: SSA Actions to Reduce Backlogs and Achieve More Consistent Decisions Deserve High Priority (GAO/T-HEHS-97-118, April 24, 1997). Crop Insurance: Opportunities Exist to Reduce Government Costs for Private-Sector Delivery (GAO/RCED-97-70, April 17, 1997). Nursing Homes: Too Early to Assess New Efforts to Control Fraud and Abuse (GAO/T-HEHS-97-114, April 16, 1997). Contract Management: Fixing DOD’s Payment Problems Is Imperative (GAO/NSIAD-97-37, April 10, 1997). Financial Management: Improved Management Needed for DOD Disbursement Process Reforms (GAO/AIMD-97-45, March 31, 1997). Medicaid Fraud and Abuse: Stronger Action Needed to Remove Excluded Providers From Federal Health Programs (GAO/HEHS-97-63, March 31, 1997). Department of Veterans Affairs: Programmatic and Management Challenges Facing the Department (GAO/T-HEHS-97-97, March 18, 1997). Social Security: Disability Programs Lag in Promoting Return to Work (GAO/HEHS-97-46, March 17, 1997). Food Stamps: Substantial Overpayments Result From Prisoners Counted as Household Members (GAO/RCED-97-54, March 10, 1997). Farm Programs: Finality Rule Should Be Eliminated (GAO/RCED-97-46, March 7, 1997). High-Risk Areas: Benefits to Be Gained by Continued Emphasis on Addressing High-Risk Areas (GAO/T-AIMD-97-54, March 4, 1997). Supplemental Security Income: Long-Standing Problems Put Program at Risk for Fraud, Waste and Abuse (GAO/T-HEHS-97-88, March 4, 1997). Medicare HMOs: HCFA Could Promptly Reduce Excess Payments by Improving Accuracy of County Payment Rates (GAO-T-HEHS-97-82, February 27, 1997). Benefit Fraud With Post Office Boxes (GAO/HEHS-97-54R, February 21, 1997). Ex-Im Bank’s Retention Allowance Program (GAO/GGD-97-37R, February 19, 1997). High-Risk Series (GAO/GGD-97-37R, February 1997). 
SSA Disability Redesign: Focus Needed on Initiatives Most Crucial to Reducing Costs and Time (GAO/HEHS-97-20, December 20, 1996). Medicaid: States’ Efforts to Educate and Enroll Beneficiaries in Managed Care (GAO/HEHS-96-184, September 17, 1996). Supplemental Security Income: SSA Efforts Fall Short in Correcting Erroneous Payments to Prisoners (GAO/HEHS-96-152, August 30, 1996). Supplemental Security Income: Administrative and Program Savings Possible by Directly Accessing State Data (GAO/HEHS-96-163, August 29, 1996). Unemployment Insurance: Millions in Benefits Overpaid to Military Reservists (GAO/HEHS-96-101, August 5, 1996). Department of Education: Status of Actions to Improve the Management of Student Financial Aid (GAO/HEHS-96-143, July 12, 1996). Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996). Health Care Fraud: Information-Sharing Proposals to Improve Enforcement Efforts (GAO/GGD-96-101, May 1, 1996). SSA Overpayment Recovery (GAO/HEHS-96-104R, April 30, 1996). Social Security: Issues Involving Benefit Equity for Working Women (GAO/HEHS-96-55, April 10, 1996). Fraud and Abuse: Providers Target Medicare Patients in Nursing Facilities (GAO/HEHS-96-18, January 24, 1996). Unsubstantiated DOE Travel Payments (GAO/RCED-96-58R, December 28, 1995). Financial Management: Challenges Facing DOD in Meeting the Goals of the Chief Financial Officers Act (GAO/T-AIMD-96-1, November 14, 1995). Fraud and Abuse: Medicare Continues to Be Vulnerable to Exploitation by Unscrupulous Providers (GAO/T-HEHS-96-7, November 2, 1995). DOD Procurement: Millions in Contract Payment Errors Not Detected and Resolved Promptly (GAO/NSIAD-96-8, October 6, 1995). Medicare: Excessive Payments for Medical Supplies Continue Despite Improvements (GAO/T-HEHS-96-5, October 2, 1995). 
Pursuant to a congressional request, GAO provided information on improper payments in light of the projected future growth of federal expenditures, focusing on the: (1) amounts reported by agencies as improper payments in their fiscal year (FY) 1998 financial statements prepared pursuant to the Chief Financial Officers (CFO) Act of 1990; (2) types of federal programs at risk of disbursing improper payments; (3) reported causes of improper payments across the federal government; and (4) extent to which agencies are addressing improper payments in their performance plans under the Government Performance and Results Act of 1993. GAO noted that: (1) in their FY 1998 financial reports, nine agencies collectively reported improper payment estimates of $19.1 billion; (2) these improper payment estimates relate to 17 major programs that expended approximately $870 billion; (3) the programs and related improper payment estimates include: (a) Medicare Fee-for-Service ($12.6 billion); (b) Supplemental Security Income ($1,648 million); (c) Food Stamps ($1,425 million); (d) Old Age and Survivors Insurance ($1,154 million); (e) disability insurance ($941 million); (f) housing subsidies ($857 million); and (g) veterans benefits, unemployment insurance, and others ($514 million); (4) also included are the Agency for International Development (AID), Medicaid, and the Federal Crop Insurance Corporation; (5) AID and the agencies administering these programs acknowledged making improper payments in their FY 1998 financial statements, but did not disclose specific dollar amounts; (6) improper payments are much greater than have been disclosed thus far in agency financial statement reports, as shown by GAO's prior audits and those of agency inspectors general; (7) agencies are not performing comprehensive quality control reviews--internal studies or reviews--for certain programs to determine the propriety of program expenditures; (8) as a result, the full extent of the problem--and 
possible solutions to it--is unknown; (9) comprehensive quality control reviews could also identify the causes of improper payments, which range from inadvertent errors to fraud and abuse; (10) working with the Office of Management and Budget (OMB), some agencies are taking steps to mitigate this risk by focusing attention on identifying, reporting, and reducing improper payments through the discipline of annual audited financial statements and the development of performance goals; (11) however, agencies responsible for 13 of the 17 programs that made improper payments--many of which GAO identified in its High-Risk and Performance and Accountability series issued earlier this year--did not include specific performance goals or strategies to comprehensively address these payments in their FY 2000 performance plans under the Results Act; and (12) as the federal budget grows, more taxpayer dollars are placed at risk, thus increasing the urgency for identifying and preventing these types of payments and providing complete accountability to taxpayers.
Background SSA’s mission is to advance the nation’s economic security through compassionate and vigilant leadership in shaping and managing America’s Social Security programs. This includes one of the nation’s largest entitlement programs––federal Old-Age, Survivors, and Disability Insurance benefits––commonly referred to as Social Security. The program provides monthly benefits to retired and disabled workers, their spouses and children, and the survivors of insured workers. SSA also administers Supplemental Security Income, a needs-based program for the aged, blind, and disabled that pays monthly benefits to individuals. Over 54 million people, one-sixth of the total U.S. population, receive monthly Social Security or Supplemental Security Income benefit payments. The agency’s estimated 2008 budget of about $657 billion includes an administrative budget of $9.7 billion to support these programs, including about $1 billion for IT. Organizationally, SSA is headed by the Commissioner, who is assisted by a deputy commissioner and various other executive officials, including the Deputy Commissioner, Budget, Finance and Management; Chief Information Officer (CIO); Chief Strategic Officer; and nine deputy commissioners responsible for the agency’s various business components. The organizational structure of the agency is depicted in figure 1. The Commissioner is supported by approximately 60,000 employees located at headquarters and throughout a decentralized network of over 1,400 offices that include regional offices, field offices, teleservice centers, processing centers, state Disability Determination Services, program service centers, and hearing offices. Of these employees, approximately 3,300 IT staff and contractors are assigned to the Office of Deputy Commissioner, Systems. According to SSA, its organizational structure is designed to provide timely, accurate, and responsive service to the American public. 
SSA Relies on IT to Deliver Services The agency relies extensively on information technology to administer its programs and to support related administrative needs. In this regard, IT is used to, among other things: evaluate evidence and make determinations of eligibility for benefits; issue new and replacement Social Security cards; process earnings items for crediting to workers’ earnings records; handle millions of transactions on SSA’s toll-free telephone number; issue Social Security statements; process continuing disability reviews; and process nondisability Supplemental Security Income redeterminations. The agency’s IT budget for fiscal year 2008 is approximately $1 billion. Of this amount, $400 million is for work year support of software development projects in the Office of Deputy Commissioner, Systems, and about $610 million is for acquisition of IT-related products and services. The agency expects to spend about 80 percent of its acquisition budget on infrastructure. Investment Management Is Critical to Effective Use of IT A corporate approach to IT investment management is characteristic of successful public and private organizations. Recognizing this, Congress enacted the Clinger-Cohen Act of 1996, which requires the Office of Management and Budget (OMB) to establish processes to analyze, track, and evaluate the risks and results of major capital investments in IT systems made by executive agencies. In implementing the Clinger-Cohen Act and other statutes, OMB has developed policy and issued guidance for the planning, budgeting, acquisition, and management of federal capital assets. We have also issued guidance in this area that defines institutional structures, such as investment boards; processes for developing information on investments (such as cost/benefit); and practices to inform management decisions (such as whether a given investment is aligned with an enterprise architecture). 
IT Investment Management: A Brief Description IT investment management is a process for linking IT investment decisions to an organization’s strategic objectives and business plans. Consistent with this, the federal approach to IT investment management focuses on selecting, controlling, and evaluating investments in a manner that minimizes risks while maximizing the return on investment. During the selection phase, the organization (1) identifies and analyzes each project’s risks and returns before committing significant funds to any project and (2) selects those IT projects that will best support its mission needs. During the control phase, the organization ensures that projects, as they develop and investment expenditures continue, meet mission needs at the expected levels of cost and risk. If the project is not meeting expectations or if problems arise, steps are quickly taken to address the deficiencies. During the evaluation phase, expected results are compared with actual results after a project has been fully implemented. This comparison is done to (1) assess the project’s impact on mission performance, (2) identify any changes or modifications to the project that may be needed, and (3) revise the investment management process based on lessons learned. Overview of GAO’s ITIM Maturity Framework Our ITIM framework consists of five progressive stages of maturity for any given agency relative to selecting, controlling, and evaluating its investment management capabilities. (See fig. 2 for the five ITIM stages of maturity.) This framework is grounded in our research of IT investment management practices of leading private and public sector organizations. The framework can be used to assess the maturity of an agency’s investment management processes and as a tool for organizational improvement. 
The overriding purpose of the framework is to encourage investment processes that increase business value and mission performance, reduce risk, and increase accountability and transparency in the decision process. We have used the framework in many of our evaluations, and a number of agencies have adopted it. ITIM’s five maturity stages represent steps toward achieving stable and mature processes for managing IT investments. Each stage builds on the lower stages and the successful attainment of each stage leads to improvement in the organization’s ability to manage its investments. With the exception of Stage 1, each maturity stage is composed of “critical processes” that must be implemented and institutionalized in order for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities that an organization should be performing to successfully implement each critical process. It is not unusual for an organization to perform key practices from more than one maturity stage at the same time. However, our research has shown that agency efforts to improve investment management capabilities should focus on implementing all lower stage practices before addressing the higher stage practices. Figure 2 provides an overview of the five ITIM stages of maturity and the critical processes associated with each stage. In the ITIM framework, Stage 2 critical processes lay the foundation for sound IT investment management by helping the agency to attain successful, predictable, and repeatable investment management processes at the project level. Specifically, Stage 2 encompasses building a sound investment management foundation by establishing basic capabilities for selecting new IT projects. 
This stage also involves developing the capability to control projects so that they finish predictably within established cost and schedule expectations and developing the capability to identify potential exposures to risk and put in place strategies to mitigate that risk. It also involves instituting an IT investment board, which includes defining its membership, guidance policies, operations, roles, responsibilities, and authorities. The basic selection processes established in Stage 2 lay the foundation for more mature management capabilities in Stage 3, which represents a major step forward in maturity, in which the agency moves from project-centric processes to an agencywide portfolio approach. Stage 3 requires that an organization continually assess both proposed and ongoing projects as part of a complete investment portfolio—an integrated and competing set of investment options. It focuses on establishing a consistent, well-defined perspective on the IT investment portfolio and maintaining mature, integrated selection (and reselection), control, and evaluation processes. This portfolio perspective allows decision makers to consider the interaction among investments and the contributions to organizational mission goals and strategies that could be made by alternative portfolio selections, rather than focusing exclusively on the balance between the costs and benefits of individual investments. Organizations that have implemented Stage 2 and 3 practices have capabilities in place that assist in establishing selection, control, and evaluation structures, policies, procedures, and practices that are required by the investment management provisions of the Clinger-Cohen Act. Stages 4 and 5 require the use of evaluation techniques to continuously improve both the investment portfolio and the investment processes in order to better achieve strategic outcomes. 
At Stage 4, an organization has the capacity to conduct IT succession activities and, therefore, can plan and implement the deselection of obsolete, high-risk, or low-value IT investments. An organization with Stage 5 maturity conducts proactive monitoring for breakthrough information technologies that will enable it to change and improve its business performance. SSA’s Current Investment Management Approach SSA’s investment management process is intended to meet the objectives of the Clinger-Cohen Act by providing a framework for selecting, controlling, and evaluating investments that helps to ensure it meets the strategic and business objectives of the agency. The investment management process is documented in the agency’s Capital Planning and Investment Control (CPIC) Guide. The CPIC Guide assigns the responsibility for the investment management process to SSA executive-level managers. In this regard, the Information Technology Advisory Board (ITAB) is responsible for assigning resources to projects reported in the 2-year Agency IT Plan, which specifies which projects and systems the agency will build and operate. The board, which meets quarterly, is comprised of the deputy commissioners and other senior executives, such as the general counsel and the Deputy Commissioner, Budget, Finance and Management, and it is chaired by the CIO. The CIO is the key decision maker in the CPIC process. He provides advice to the Commissioner and Deputy Commissioner of Social Security to ensure that IT is acquired and information resources are managed in a manner that is consistent with the policies and procedures of the Clinger-Cohen Act. The CIO is the chairman of the investment board and makes final IT budget recommendations to the Commissioner. The Deputy Commissioner, Systems is responsible for monitoring all development and operations projects included in the Agency IT Plan. 
Each deputy commissioner responsible for a portfolio has a portfolio manager and portfolio team to assist in the day-to-day management of the corresponding investment portfolio within each business component. Table 1 identifies the key participants that have a role in the agency’s investment management process and their responsibilities. SSA uses its established CPIC process to manage the work years associated with its in-house software development projects. (The acquisition budget is managed by a separate process discussed later in this report.) The CPIC process is as follows: During the investment selection phase, new projects are proposed by a sponsor––either from a business unit for mission-related projects or from the Deputy Commissioner, Systems’ organization for supporting acquisitions, such as telephone systems—and are assigned to 1 of 11 portfolios. Proposals that identify business needs are developed based on the Commissioner’s priorities or gap analyses performed by each portfolio team that identify future business needs. The ITAB issues guidelines to the portfolio teams on the number of work years that each portfolio will have available for projects. In response, each portfolio team develops a prioritized list of proposed and ongoing projects within their work year allocations. Prioritization is based on a vote by portfolio team representatives. According to SSA’s documented procedures, prioritization criteria can include relative benefits, costs, and risks. However, portfolio teams have discretion in how they weigh these and any other criteria. Next, the prioritized lists are combined into a proposed Agency IT Plan for approval by the ITAB. The plan is comprised of proposed investments for the next 2 fiscal years, and provides information on work year requirements. In addition, expected benefits and return on investment are included for new development projects. 
The ITAB approves or modifies the proposed plan once a year, including allocating work years to the portfolios. At this point, the selection phase of the annual cycle is basically complete, though portfolio teams can propose additional projects that arise in the middle of a cycle. During the control phase, the Deputy Commissioner, Systems holds monthly meetings with his staff who are assigned to monitor projects in development. During these meetings, projects that are not meeting cost and schedule expectations are identified, and corrective actions are initiated. According to SSA guidance, the objective of the Deputy Commissioner, Systems’ meetings with his staff is to resolve problems related to underperforming projects without elevating them to the ITAB. During the months in which ITAB quarterly meetings are scheduled, the Deputy Commissioner, Systems meets with his staff prior to these meetings to prepare to address concerns about investments that may be raised during the meetings. If concerns are raised at the meeting, the Deputy Commissioner, Systems provides information about these investments. In addition, the ITAB receives investment profiles on the status of each of the agency’s major IT investments. These profiles include reports on actual and expended work years, cost, schedule, and any variances. During the evaluation phase, the CPIC Guide calls for the CIO to conduct postimplementation reviews on projects that have been completed and deployed for at least 3 months. The purpose of these reviews is to compare actual project results against planned results in order to assess performance and identify areas where future decision making can be improved. Figure 3 illustrates SSA’s current investment management process as specified in agency guidance. 
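The control-phase monitoring described above — comparing each project's cost and schedule against plan and identifying variances that warrant corrective action — reduces to a threshold check on variance. The sketch below assumes a 10 percent tolerance and made-up project figures; neither reflects SSA's actual thresholds or data.

```python
# Hypothetical control-phase check: flag projects whose cost or schedule
# variance exceeds a tolerance, as a monthly monitoring review might.
# The threshold and project figures are illustrative assumptions.

THRESHOLD = 0.10  # flag variances above 10 percent of plan (assumed)

def variance(planned, actual):
    """Fractional overrun relative to plan (positive means over plan)."""
    return (actual - planned) / planned

projects = [
    # (name, planned cost $M, actual cost $M, planned months, elapsed months)
    ("Project A", 4.0, 4.2, 12, 12),
    ("Project B", 2.5, 3.1, 10, 13),
]

for name, plan_cost, act_cost, plan_mo, act_mo in projects:
    flags = []
    if variance(plan_cost, act_cost) > THRESHOLD:
        flags.append(f"cost over by {variance(plan_cost, act_cost):.0%}")
    if variance(plan_mo, act_mo) > THRESHOLD:
        flags.append(f"schedule over by {variance(plan_mo, act_mo):.0%}")
    print(f"{name}: {'; '.join(flags) if flags else 'within tolerance'}")
```

A check of this kind only surfaces problems; the report's finding is that what happens next — documenting and tracking the corrective action — is where the agency's process falls short.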
SSA Has Taken Key Steps to Manage Investments, but Gaps Remain in Oversight and in Defining Policies and Procedures SSA has executed a majority of the key practices—82 percent––needed to effectively manage its IT projects as investments, but it has not fully implemented many of the related oversight responsibilities and procedures that our ITIM framework outlines. Of the five Stage 2 critical processes specified by the ITIM, it has (1) established most of the key practices needed for instituting the investment board, (2) developed procedures for ensuring that projects meet business and user needs, (3) established a process for selecting an investment, and (4) developed tools for capturing investment information. However, the critical process of providing oversight is not being fully executed. Also, the agency has made progress in establishing the critical processes and key practices for managing IT investments as a portfolio. It is executing 18 out of 27 key practices from this stage of the ITIM. However, it has not established enterprisewide portfolio selection criteria and has executed few key practices for evaluating the portfolio. In addition, its postimplementation reviews are not achieving key objectives. Further, a gap exists in the agency’s management of its IT in that more than half of its budget—its acquisition budget—is not overseen as part of the agency’s current investment management process. While SSA has taken key steps for managing its investments, until key practices are fully implemented and coverage of its management processes is extended to all investments, it will not be fully postured to ensure that its investments achieve their intended results and address the strategic goals, objectives, and mission of the organization. 
SSA Has Established Most of the Foundation for Managing IT Investments, but It Has Not Established Some Processes and Procedures At the ITIM Stage 2 level of maturity, an organization has attained repeatable, successful IT project-level investment control and basic selection processes. Through these processes, the organization can identify expectation gaps early and take the appropriate steps to address them. According to ITIM, critical processes at Stage 2 include (1) defining IT investment board operations, (2) identifying the business needs for each IT investment, (3) developing a basic process for selecting new IT proposals and reselecting ongoing investments, (4) developing project-level investment control processes, and (5) collecting information about existing investments to inform investment management decisions. Table 2 describes the purpose of each of these Stage 2 critical processes. Within these 5 critical processes are 38 key practices for effective project-level management. SSA has implemented 31 of these practices. Specifically, the agency has satisfied all the key practices associated with meeting business needs and capturing investment information and most of those associated with instituting an investment board and selecting an investment. However, the agency has not executed most of the key practices related to providing investment oversight. Moreover, the agency has not developed some policies and procedures required for the critical process areas, including providing investment oversight. Table 3 summarizes the status of SSA’s Stage 2 critical processes, showing the number of associated practices that have been implemented, as they apply to the agency’s management of its IT work year budget for in-house projects. The establishment of decision-making bodies or boards is a key component of the IT investment management process. 
At the Stage 2 level of maturity, organizations define one or more boards, provide resources to support their operations, and appoint members who have expertise in both operational and technical aspects of proposed investments. The board operates according to a written IT investment process guide that is tailored to the organization’s unique characteristics, thus ensuring that consistent and effective management practices are implemented across the organization. Once board members are selected, the organization ensures that they are knowledgeable about policies and procedures for managing investments. Organizations at the Stage 2 level of maturity also take steps to ensure that executives and line managers support and carry out the decisions of the IT investment board. An IT investment management process guide should be an authoritative document that the organization uses to initiate and manage IT investment processes and should provide a comprehensive foundation for the policies and procedures that are developed for all of the other related processes. (The complete list of key practices is provided in table 4.) SSA has executed seven of the eight key practices for instituting the investment board. In particular, it has established the ITAB as its investment board. As previously discussed, the board is chaired by the CIO, and includes deputy commissioners and other agency senior executives, such as the Deputy Commissioner, Budget, Finance and Management. Further, the agency has a documented investment governance process and provides resources for the board. Management controls have been established for ensuring that the investment board’s decisions are carried out. However, the agency is not executing one of the key practices associated with this process. The board is not implementing one of the three stages of the IT investment governance process based on the Clinger-Cohen Act. 
Specifically, it is not evaluating IT investments, including performing postimplementation reviews. Rather, the CIO alone is assigned this responsibility, and the investment board does not receive the results of these reviews. Until all relevant IT governance becomes the responsibility of the ITAB, SSA may have insufficient high-level executive involvement in its investment management process and will not benefit from the contributions of those executives who are in the best position to make the full range of decisions needed for the agency to carry out its mission most effectively. Further, although SSA has established its investment board, the policies and procedures that define and implement the investment governance process have not been fully established for all of the key practices. For example, procedures for elevating underperforming investments to the board have not been established. Further, although the CIO and Deputy Commissioner, Systems agree that the CPIC Guide and other guidance they provided are official agency documents, these documents had not been officially approved by SSA’s management. Without policy guidance that is agreed to and approved by all the appropriate levels of the organization, consistent and repeatable investment management practices cannot be assured. Table 4 summarizes our findings relative to SSA’s execution of the eight key practices for instituting the investment board. Defining business needs for each IT project helps to ensure that projects and systems support the organization’s business needs and meet users’ needs. 
According to ITIM, effectively meeting business needs requires, among other things, (1) documenting business needs with stated goals and objectives; (2) identifying specific users and other beneficiaries of IT projects and systems; (3) providing adequate resources to ensure that projects and systems support the organization’s business needs and meet users’ needs; and (4) periodically evaluating the alignment of IT projects and systems with the organization’s strategic goals and objectives. (The complete list of key practices is provided in table 5). SSA has in place all seven key practices for meeting business needs. The agency’s CPIC Guide and IT Planning Training Package require that sponsors identify the current and future business needs for proposed and ongoing projects and systems. Business needs are to be aligned with the SSA Strategic Plan. Resources for ensuring that IT projects and systems support the organization’s business needs and meet users’ needs include the ITAB, project sponsors and reviewers, the Systems Planning and Reporting System (which documents business needs information on proposed and ongoing projects), and the project scope agreement (which documents the business needs that the developer agrees will meet user needs). In reviewing selected agency projects as part of our study, we verified that the new and ongoing projects had these scope agreements. Table 5 shows the analysis for each key practice of the critical process for meeting business needs and summarizes the supporting evidence. Selecting new IT proposals and reselecting ongoing investments requires a well-defined and disciplined process to provide the agency’s investment boards, business units, and developers with a common understanding of the process and the cost, benefit, schedule, and risk criteria that will be used both to select new projects and to reselect ongoing projects for continued funding. 
According to ITIM, this critical process requires, among other things, (1) making funding decisions for new proposals according to an established process; (2) providing adequate resources for investment selection activities; (3) using a defined selection process to select new investments and reselect ongoing investments; (4) establishing criteria for analyzing, prioritizing, and selecting new IT investments and for reselecting ongoing investments; and (5) creating a process for ensuring that the criteria change as organizational objectives change. (The complete list of key practices is provided in table 6.) SSA has in place 9 of 10 key practices for selecting investments. For example, the agency has established policies and procedures for integrating funding with the process of selecting an investment; the IT Planning Training Package states that the ITAB is to specify the resources available to each portfolio team for its investments. According to SSA officials, resources are provided for selecting investments, including managerial attention and tracking systems. Criteria have been established for selecting and reselecting investments, including return on investment, the business value of the investment, and investment cost. The agency ensures that selection criteria reflect organizational goals by aligning project selection with the organizational priorities set by the Commissioner each year. The IT Planning Training Package and ITAB meeting notes document the predefined selection criteria and process for selection of new investments. We verified that the three case study projects we reviewed were selected using the predefined selection process and criteria, and that these funding decisions were based on the selection information for the projects. However, SSA is not fully executing the key practice requiring policies and procedures for selecting new IT investment proposals. 
While the CPIC Guide has policies for identifying and evaluating new IT proposals, the IT Planning Training Package does not have documented procedures for prioritizing investment proposals. SSA officials said they do not have documented procedures because predefined criteria might result in not selecting a proposal that a portfolio team determines is required for operations. However, without predefined criteria for prioritizing investments consistently in each portfolio, SSA risks having less critical investments selected over investments that are more critical to accomplishing the portfolio’s objective. Table 6 shows the rating for each key practice required to implement the critical process for selecting investments at the Stage 2 level of maturity and summarizes the evidence that supports these ratings. An organization should provide effective oversight for its IT projects throughout all phases of their life cycles. Its investment board should maintain adequate oversight and observe each project’s performance and progress toward predefined cost and schedule expectations as well as each project’s anticipated benefits and risk exposure. The investment board should also employ early warning systems that enable it to take corrective action at the first sign of cost, schedule, or performance slippages. This board has ultimate responsibility for the activities within this critical process. According to ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for management oversight; (2) developing and maintaining an approved project management plan for each IT project; (3) providing adequate resources for supporting the investment board; (4) having regular reviews by each investment board of each project’s performance against stated expectations; and (5) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved. 
The agency is executing two of seven key practices for providing oversight. The agency provides resources for oversight, and the board reviews summary reports on projects’ cost and schedule performance. Also, the agency maintains project plans, including cost and schedule milestones for its investments. However, the agency is not executing the remaining five key practices related to providing oversight of IT projects. Although SSA provides the investment board with summary data on projects’ performance related to cost, schedule, and benefits, the board does not receive information on projects’ risks. Also, the board does not regularly track the implementation of corrective actions for each underperforming project. The board’s meeting agenda allows individual deputy commissioners to raise concerns about project performance at quarterly meetings, but, based on our analysis of the ITAB meeting minutes, this opportunity is infrequently exercised. Specifically, during 2007, the meeting minutes showed that underperforming investments were discussed at only one of the quarterly meetings. Also, SSA officials have not specified the criteria for terminating projects that are underperforming. The Deputy Commissioner, Systems told us that he takes corrective actions to address underperforming projects but does not document these actions or report them to the ITAB. Table 7 shows the status of each key practice required to provide investment oversight at the project level and summarizes the supporting evidence. To make good IT investment decisions, an organization must be able to acquire pertinent information about each investment and store that information in a retrievable format. During this critical process, an organization identifies its IT assets and creates a comprehensive repository of investment information. 
This repository provides information to investment decision makers to help them evaluate the potential impacts and opportunities created by proposed or continuing investments. The repository can take many forms and need not be centrally located, but the collection method should, at a minimum, identify each IT investment and its associated components. According to ITIM, effectively managing this repository requires, among other things, (1) developing written policies and procedures for identifying and collecting the information; (2) assigning responsibilities for ensuring that the information being collected meets the needs of the investment management process; (3) identifying IT projects and systems and collecting relevant information to support decisions about them; and (4) making the information easily accessible to decision makers and others. (The complete list of key practices is provided in table 8.) SSA has in place all six key practices associated with capturing investment information. For example, the agency’s Project Resource Guide documents policies and procedures for submitting, updating, and maintaining relevant project information. One policy document, the Office of Systems Project Management Directive, identifies project management activities and work products for all projects approved by the investment board. SSA’s Systems Process Improvement team is responsible for developing and maintaining the monthly health reports on project performance that are provided to the Deputy Commissioner, Systems to track actual project work years. In addition, projects must be recorded in the Systems Planning and Reporting System and each item to be considered by the investment board must be documented, including project dollar and work year estimates. The automated project status reports provide comprehensive status information for all development projects, including activities completed, activities in progress, and activities planned. 
We verified that information for three of the agency’s IT projects we examined was collected in the Systems Planning and Reporting System and that all three had a project scope agreement, which described the business, user, customer, and systems functions required. Also, project performance was reported in the monthly IT project health reports for all three projects. Table 8 summarizes the status of the six key practices for capturing investment information. SSA Has Established Processes for Managing Investments as an Enterprisewide Portfolio, but Some Key Practices Remain Unexecuted Once an agency has attained Stage 2 maturity, it needs to implement critical processes for managing its investments as an enterprisewide portfolio (Stage 3). An investment portfolio is an integrated, agencywide collection of investments that are assessed and managed collectively based on common criteria. Managing investments as a portfolio is a conscious, continuous, and proactive approach to allocating limited resources among an organization’s competing initiatives in light of the relative benefits expected from these investments. Taking an agencywide perspective enables an organization to consider its investments comprehensively, so that collectively the investments optimally address the organization’s mission, strategic goals, and objectives. Managing IT investments as a portfolio also enables an organization to determine its priorities and make decisions about which projects to fund and continue to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operations. Although investments may initially be organized into separate portfolios—based on, for example, business lines or life-cycle stages—and managed by subordinate investment boards, they should ultimately be aggregated into this enterprise-level portfolio. 
According to ITIM, Stage 3 maturity includes (1) defining the portfolio, (2) creating the portfolio criteria, (3) evaluating the portfolio, and (4) conducting postimplementation reviews. Table 9 summarizes the purpose of each critical process in Stage 3. Within these 4 critical processes are 27 key practices associated with portfolio-level management. For the work year budget managed by its investment review board, SSA has executed 18 of the 27 key practices. SSA has executed all of the key practices for creating the portfolio and most of those for defining the criteria and conducting postimplementation reviews. However, the agency has not executed nine key practices, including establishing enterprisewide selection criteria and managing all of its investments as an enterprisewide portfolio. SSA has implemented postrelease reviews of its investments, but these reviews do not include evaluations of quantitative data and analyses, such as the investments’ contributions toward achieving both the strategy and the objectives of the organization’s IT strategic plan. Table 10 summarizes the status of SSA’s Stage 3 critical processes and key practices. Developing an IT investment portfolio involves defining appropriate investment cost, benefit, schedule, and risk criteria to ensure that the organization’s strategic goals, objectives, and mission will be satisfied by the selected investments. Portfolio selection criteria reflect the strategic and enterprisewide focus of the organization and build on the criteria that are used to select individual projects. When IT projects are not considered in the context of a portfolio, criteria based on narrow, lower-level requirements may dominate enterprisewide selection criteria. SSA is executing five of seven key practices associated with defining the portfolio criteria, including assigning responsibility to the ITAB for developing and modifying portfolio guidance and providing thresholds for selecting investments to the portfolio teams. 
According to SSA officials, the agency also has adequate resources for portfolio selection activities, including people and tools. Further, project management personnel are aware of the portfolio selection criteria. However, SSA is not executing two key practices. The agency has not fully documented policies and procedures, such as key procedures for creating and modifying IT portfolio selection criteria. Further, the investment board approved the core criteria for selection, but it has delegated the weighting of core criteria to the portfolio teams. This delegated approach conflicts with the need articulated in the ITIM framework to manage investments in a strategic, enterprisewide manner so that the investments address not only the objectives of individual programs, or lines of business, but also the impact that projects have on one another and the IT portfolio’s overall benefit to the organization. Lacking complete enterprisewide portfolio criteria, SSA risks optimizing individual business processes while producing stovepiped systems, as well as not maximizing overall benefits to the agency. Table 11 shows the status for each key practice required to implement the critical process for defining the portfolio criteria and summarizes the evidence that supports these ratings. At ITIM Stage 3, organizations create a portfolio of IT investments to ensure that (1) they are analyzed according to the organization’s portfolio selection criteria and (2) an optimal investment portfolio with manageable risks and returns is selected and funded. According to ITIM, creating the portfolio requires organizations to, among other things, document policies and procedures for analyzing, selecting, and maintaining the portfolio; provide adequate resources, including people, funding, and tools for creating the portfolio; and capture the information used to select, control, and evaluate the portfolio and maintain it for future reference. 
In creating the portfolio, the investment board should also (1) examine the mix of new and ongoing investments and their respective data and analyses and select investments for funding and (2) approve or modify the performance expectations for the IT investments they have selected. (The complete list of key practices is provided in table 12.) SSA is executing the seven key practices associated with creating the portfolio. For example, according to SSA officials, the agency has adequate resources for selecting the portfolio, including the ITAB executives, other supporting staff, and a system that tracks proposal information. The ITAB also considers a list of proposed IT investments and assigns IT staffing resources to the investment portfolios. Table 12 shows the status for each key practice required to implement the critical process for creating the portfolio and summarizes the evidence that supports these ratings. This critical process builds upon the Stage 2 critical process related to providing investment oversight by adding the elements of portfolio performance to an organization’s investment control capacity. Compared to less mature organizations, Stage 3 organizations will have the foundation they need to control the risks faced by each investment and to deliver benefits that are linked to mission performance. In addition, a Stage 3 organization will have the benefit of good performance data generated by Stage 2 processes. Expanding this focus to the entire portfolio provides the organization with longer-term assurances that the IT investment portfolio will deliver mission value at acceptable cost. SSA has executed two of the seven key practices associated with this process: ensuring adequate resources, including staff and tools for reviewing the investment portfolio, and ensuring that the ITAB is familiar with the process for evaluating and improving investments. 
The remaining five key practices were not executed, partly because SSA has delegated portfolio management and partly because it is not executing the Stage 2 prerequisite critical process, providing investment oversight, which collects information on projects. As we have discussed, the ITAB does not receive information on nonperforming projects, because performance monitoring has been delegated to the Deputy Commissioner, Systems. SSA officials agreed that they were not evaluating the portfolio as a whole. Until SSA executes all the key practices associated with this critical process, senior executives will not have the information they need to determine whether the investments they have selected are delivering mission value at the expected cost and risk. Table 13 shows the status for each key practice required to implement the critical process for evaluating the portfolio and summarizes the evidence that supports these ratings. The purpose of a postimplementation review is to evaluate an investment after it has completed development in order to validate whether the estimated return on investment was actually achieved. Specifically, the review is conducted to (1) examine differences between estimated and actual investment costs and benefits and possible ramifications for unplanned funding needs in the future and (2) extract “lessons learned” about the investment selection and control processes that can be used as the basis for management improvements. Postimplementation reviews should also be conducted for investment projects that were terminated before completion to readily identify potential management and process improvements. SSA has executed four of the six key practices associated with this process: policies and procedures are defined, adequate resources are provided, individuals assigned to conduct postimplementation reviews are familiar with the processes, and projects for which reviews will be conducted are identified. 
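The comparison at the heart of a postimplementation review — estimated versus actual costs and benefits, yielding the return on investment that was actually realized — can be sketched as follows. All dollar figures are hypothetical illustrations, not data from SSA's reviews.

```python
# Hypothetical postimplementation review comparison: did the investment
# deliver the return estimated at selection time? Figures are illustrative.

def roi(benefit, cost):
    """Return on investment as net benefit over cost."""
    return (benefit - cost) / cost

# Estimated at selection time vs. measured after deployment ($ millions)
estimated = {"cost": 8.0, "benefit": 14.0}
actual = {"cost": 9.5, "benefit": 12.0}

est_roi = roi(estimated["benefit"], estimated["cost"])
act_roi = roi(actual["benefit"], actual["cost"])

print(f"Estimated ROI: {est_roi:.0%}")  # 75%
print(f"Actual ROI:    {act_roi:.0%}")
if act_roi < est_roi:
    print("Lesson learned: benefits were overestimated or costs underestimated.")
```

The report's point is that without the "actual" side of this comparison — the quantitative data SSA is not collecting — neither the shortfall nor the lesson learned can be identified.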
The remaining two key practices were not executed: quantitative investment data are not collected and analyzed, and lessons-learned analyses of the selection, control, and evaluation processes are not conducted. Without analyzing quantitative data on benefits achieved, SSA cannot determine whether a project has delivered its anticipated benefits. Further, without knowledge of what benefits are actually achieved from projects, the portfolio cannot be evaluated, and Stage 4 and 5 practices cannot be carried out effectively. Also, without developing lessons learned from postimplementation reviews to improve the CPIC’s select, control, and evaluate phases, the agency will be unable to use the reviews to improve its investment management processes. Table 14 shows the status for each key practice required to implement the critical process for conducting postimplementation reviews and summarizes the evidence that supports these ratings. More Than Half of SSA’s IT Budget Is Not Subject to Its Current Investment Management Process Even though SSA is executing most Stage 2 and Stage 3 key practices for the work year budget managed by its investment board, IT products and services acquired with the acquisition budget ($610 million in acquisitions in fiscal year 2008—58 percent of the IT budget) are not managed as investments under SSA’s CPIC process and are not reviewed by the ITAB. These products and services include, among other things, engineering support services, network infrastructure, mainframe capacity infrastructure, hardware maintenance, software maintenance, local telecom services, telephone systems maintenance, and an agencywide support service contract. These acquisition budget expenditures are under the overall direction of the Deputy Commissioner, Systems and are determined by funding requests from the business units and subsequent negotiations. 
Each deputy commissioner and the associate commissioners who report to the Deputy Commissioner, Systems submit requests for funds based on the unit’s acquisition needs. The Deputy Commissioner, Systems staff analyzes these requests, reconciles them with the available resources, and develops a budget, which the CIO reviews and signs. Although this process involves a large budget and important assets, it is not subject to the CPIC select, control, and evaluate phases. For example, acquisitions of IT products and services are not selected by a board in a disciplined fashion, such as by using the agency’s CPIC select and control procedures, but instead are largely selected by one individual––the Deputy Commissioner, Systems. While the ITAB is provided a list of proposed projects for the Agency IT Plan, the list does not include the acquisition budget expenses associated with projects. However, the investment board does receive a report summarizing the total amount of the funds expended. Agency officials gave several reasons why the acquisition budget is not managed by the investment board. Specifically, in SSA’s view, just as the other deputy commissioners have discretion to manage funding allocated to their portfolios, the Deputy Commissioner, Systems should have the same discretion to allocate funding in the infrastructure portfolio. Further, the officials stated that many items included in this budget are very technical and might not be well understood by senior business management; thus, review at this level is not thought to be effective. In addition, officials said that many items in the acquisition budget (such as telephones) are not optional but necessary to keep the agency running, and thus do not require a decision process. Given the large amount of funds involved, senior management involvement and oversight are essential to ensure effective management of and full accountability for acquisitions of IT products and services. 
Further, until the agency manages all of its investments from an enterprisewide perspective, it will be unable to consider its investments comprehensively and ensure that the investments optimally address the organization’s mission, strategic goals, and objectives. SSA Is Beginning Initiatives Intended to Address High-Level ITIM Processes Organizations that achieve the Stage 4 level of maturity evaluate their IT investment processes and portfolios to identify opportunities for improvement. At the same time, these organizations are able to maintain the mature control and selection processes that are characteristic of Stage 3 in the ITIM model. At Stage 4, organizations are capable of systematically planning for and implementing decisions to discontinue or deselect obsolete, high-cost, and low-value IT investments and planning for successor investments that better support strategic goals and business needs. Organizations acquire Stage 5 capabilities when they create opportunities to shape strategic outcomes by learning from other organizations and continuously improving the manner in which they use IT to support and improve business outcomes. Thus, organizations at Stage 5 benchmark their IT investment processes relative to other best-in-class organizations and conduct proactive monitoring for breakthrough information technologies that will allow them to significantly improve business performance. Table 15 shows the purpose of each critical process in Stages 4 and 5. Because the ITIM is cumulative, agencies cannot fully implement Stage 4 and 5 processes without first executing the Stage 2 and 3 processes. Nonetheless, SSA officials said they have begun two initiatives related to a Stage 4 objective (improving the investment process) and a Stage 5 objective (leveraging IT for strategic outcomes). The first initiative, Application Portfolio Management, was established to improve the agency’s information technology decision-making process. 
When fully implemented, the initiative is intended to address the Stage 4 critical process (managing the succession of information systems). The Application Portfolio Management review is used to analyze and quantify the health of existing software applications to determine whether they are eligible to be retired, renovated, or maintained. According to the agency, SSA has released version 1.0 of Application Portfolio Management and has begun identifying software applications that are eligible to be retired, renovated, or maintained. The second initiative, the Technology Infusion Process, is beginning to address the second Stage 5 critical process—using IT to drive strategic business change. The Technology Infusion Process was established to evaluate and implement new technologies or new uses of existing technologies that will facilitate SSA’s ability to achieve the agency’s strategic goals. SSA has begun to identify various technologies for research and has begun to review technology projects submitted by a component sponsor as candidates for the Technology Infusion Process. However, Application Portfolio Management has not identified hardware or infrastructure projects for retirement, renovation, or maintenance. Conclusions Given the importance of IT to SSA’s mission, it is vital that the agency manage its investments effectively. To its credit, SSA has established many of the basic practices needed to build the foundation for managing its projects as investments and for managing its investments as a portfolio. However, weaknesses remain. For example, although the agency has established an investment board as the decision-making body that defines and implements the investment governance process, key policies and procedures for the investment management process are not fully defined, and the investment board does not provide oversight of underperforming investments. Moreover, the agency does not track corrective actions for its underperforming projects. 
SSA has also taken the important step of creating an investment portfolio. However, it has not fully established the policies and procedures essential to managing the portfolio, such as for reviewing, evaluating, and improving the performance of the portfolio. Further, the agency’s postimplementation reviews do not evaluate whether the expected benefits were achieved or identify lessons learned for improving the investment management processes. Moreover, the agency’s IT acquisition budget, used to acquire IT-related products and services, is not allocated or overseen by the investment board and is not managed using investment governance processes. Failure to apply these processes to the acquisition budget makes it impossible for SSA executive management tasked with overseeing the agency’s investments to ensure that this portion of the budget is spent in the most efficient and effective manner. Recommendations for Executive Action To strengthen SSA’s investment management capability and address weaknesses discussed in this report, we recommend that the Commissioner of Social Security take the following actions: To fully implement the key practices for building the investment foundation (Stage 2) and support the success of current and future project-level IT investments, direct the Chief Information Officer to establish comprehensive policies and procedures for defining the investment governance process that specify (1) investment board operating procedures, (2) delegations of authority, and (3) criteria for prioritizing new and ongoing investments; strengthen and expand the board’s oversight responsibilities for underperforming projects and evaluations of projects; and establish a mechanism for tracking corrective actions for underperforming investments. 
To fully implement the key practices for developing a complete investment portfolio (Stage 3), direct the Chief Information Officer to establish policies and procedures for defining the portfolio criteria; establish portfolio-level performance evaluation policies and procedures and criteria for assessing portfolio performance; and evaluate quantitative measures during postimplementation reviews and develop lessons learned for improving the select, control, and evaluate processes. To ensure senior management involvement and full accountability for the agency’s investments, direct the Chief Information Officer to develop and implement policies and procedures to manage IT acquisitions as investments and manage them using the investment management framework. Agency Comments and Our Evaluation The Commissioner of Social Security provided written comments on a draft of this report (comments are reproduced in appendix II). In its comments, SSA agreed with six of our recommendations and disagreed with one. Regarding those recommendations with which it agreed, SSA stated that it had initiated actions to document existing investment management processes and that it plans to strengthen and expand the role of the investment board in the oversight of underperforming projects and in the evaluations of investments. The agency also stated that it plans to establish a mechanism for tracking corrective actions for underperforming investments. Further, to achieve a complete IT investment portfolio, SSA plans to establish procedures for defining the portfolio criteria within the context of the existing delegation of authority to the portfolio sponsors. In addition, regarding postimplementation reviews, the agency stated it plans to evaluate quantitative measures and lessons learned for improving the select, control, and evaluate processes. 
SSA disagreed with our recommendation that it develop policies and procedures for managing its IT acquisitions as investments and manage them using the investment board and investment management processes. The agency stated that its existing budget development process already treats these acquisitions as investments and maintains them by using an investment management framework, though not the one described in our ITIM framework. However, under SSA’s current process, these acquisitions are not subject to the agency’s investment management select, control, and evaluate processes and are not managed by its investment board. Given that these IT products and services make up the majority of SSA’s IT budget, the investment board’s involvement is essential to helping ensure effective management of and full accountability for acquisitions of IT products and services. As we previously noted, the agency’s failure to apply its investment management process to the acquisition budget limits the ability of SSA’s executive management tasked with overseeing the agency’s investments to ensure that this portion of the budget is spent in the most efficient and effective manner. SSA also provided technical and other comments, which we have incorporated as appropriate. Among the comments, the agency stated that it had pursued the adoption of industry best practices developed by institutions such as the Software Engineering Institute of Carnegie Mellon University and believed it had achieved comprehensive and mature IT management practices. SSA added that our assessment had provided an opportunity for the agency to think carefully about many aspects of its investment management processes, and had enabled it to better understand the strengths and weaknesses of its current approach to managing investments. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. 
At that time, we will send copies of the report to interested congressional committees, the Director of the Office of Management and Budget, and the Commissioner of Social Security. Copies of this report will be made available to other interested parties on request. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have questions on matters discussed in this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objective, Scope, and Methodology Our objective was to determine whether the Social Security Administration’s (SSA) investment management approach is consistent with leading investment management best practices. Our analysis was based on best practices contained in GAO’s Information Technology Investment Management (ITIM) framework and the framework’s associated evaluation methodology, and focused on the agency’s implementation of critical processes and key practices for managing its business systems investments. To address our objective, we asked the agency to complete a self-assessment of its investment management process and provide the supporting documentation. We then reviewed the results of the agency’s self-assessment of Stages 2 and 3 practices and compared them against our ITIM framework. We focused on Stages 2 and 3 because these stages represent the processes needed to meet the standards of the Clinger-Cohen Act and they establish the foundation for effective acquisition management. We also validated and updated the results of the self-assessment through document reviews and interviews with officials, such as the CIO, Deputy Commissioner, Systems, and other staff in these offices. 
In doing so, we reviewed written policies, procedures, and guidance that provided evidence of documented practices, including SSA’s IT Capital Planning and Investment Control (CPIC) Guide and IT Planning Training Package. We also reviewed the fiscal year 2008-2009 Agency IT Plan and the board’s meeting minutes and other documentation providing evidence of executed practices. We compared the evidence collected from our document reviews and interviews to the key practices in ITIM. We rated the key practices as “executed” on the basis of whether the agency demonstrated (by providing evidence of performance) that it had met the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of a practice during the review or when we determined that there were significant weaknesses in SSA’s execution of the key practice. In addition, SSA was provided with the opportunity to produce evidence for key practices rated as “not executed.” We did not assess investments made with SSA’s IT acquisition budget because SSA acknowledged that the acquisition budget is not managed using SSA’s investment management process. This budget includes items that are not projects, but are technology items that support projects, or general infrastructure such as mainframe computers, desktop computers, data storage, or telecommunications services. As part of our analysis, we selected three IT projects as case studies to verify whether certain critical processes and key practices were being applied. SSA officials participated in the selection of these case studies. We selected projects that (1) supported different SSA functional areas, (2) were in different life-cycle phases, and (3) involved different funding amounts. These three projects are described below. Ready Retirement is a project that automates the processing of retirement applications. It allows individuals to file for benefits using a Web interface. 
This investment is expected to increase online claims filing, minimize the number of recontacts required to complete an application, and provide progress indicators to inform applicants of where they are in the application process. Ready Retirement is intended to prepare the agency for the growing retirement workload expected as baby boomers become eligible for retirement by enabling applicants to prepare their own applications. According to the agency, this project is estimated to require about 27 staff years for fiscal year 2008, which corresponds to costs of about $3.1 million. Appeals Council Case Processing is a software development project that automates the handling of case files in appeals of disability determinations. It is intended to provide the capability to process all disability cases electronically at all adjudicative levels. Further, the system can obtain claims, medical evidence, and supporting documentation over the Internet in a secured environment. The users have the capability to complete all disability case-related actions electronically. This project is expected to eliminate backlogs, reduce reliance on paper folders, and increase decisional and documentation accuracy and decisional consistency. SSA estimates that this project will require about 56 staff years in fiscal year 2008, which corresponds to costs of about $6.4 million. Mainframe Architecture is a large infrastructure investment that involves both developmental and operations and maintenance components, and includes both software development and hardware. SSA’s mainframes are the hardware platform for many critical systems. The agency states that its objective is to provide 100 percent reliability and availability to mainframe users. 
Tasks for the project include enhancements to hardware and software technology, annual upgrades to the operating system, routine additions to mainframe capacity dictated by workload growth, and migration to the current software versions of over 100 vendor products. The agency estimates that this project will require about 54 staff years for developmental projects and about 28 staff years for operations and maintenance work in fiscal year 2008, which corresponds to costs of about $9.5 million. In addition, the project is expected to require about $84 million from the acquisition budget for a total cost of about $94 million. For these projects, we reviewed project management documentation, such as project proposals, project plans, and performance reports on costs and benefits. We also conducted interviews with the agency’s CIO and Deputy Commissioner, Systems, as well as other managers responsible for the agency’s investment management processes. We conducted our work at SSA headquarters in Baltimore, Maryland, from October 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. Appendix II: Comments from the Social Security Administration Appendix III: GAO Contact and Staff Acknowledgments In addition to the contact person named above, key contributors to this report were Cynthia Scott, Assistant Director; Faiza Baluch; Rebecca LaPaze; Sabine Paul; Tomás Ramirez; Glenn Spiegel; Niti Tandon; and Daniel Wexler.
The Social Security Administration (SSA) spends about $1 billion annually to support its information technology (IT) needs. Given the size and significance of the agency's ongoing and future investments in IT, it is crucial that the agency manage these investments wisely. Accordingly, GAO was requested to determine whether SSA's investment management approach is consistent with leading investment management best practices. To accomplish this, GAO used its IT investment management framework and associated methodology, with a focus on the framework's Stages 2 and 3, which are based on the investment management provisions of the Clinger-Cohen Act of 1996. SSA's investment management approach is largely consistent with leading investment management practices. It has established most of the practices needed to manage its projects as investments and is making progress towards managing IT investments as a portfolio; however, it is not applying its investment management process to all of its investments. Specifically: (1) The agency is executing a majority of the key practices needed to build the foundation for managing its IT projects as investments. Of the 5 processes and their 38 associated key practices, SSA is executing 31 practices. However, the agency's investment board, which should provide executive oversight of investments, is not adequately monitoring the performance of IT projects. (2) SSA has made progress in establishing the key practices for managing investments as a portfolio--it is executing 18 out of 27 key practices. The agency has made important progress in defining and creating the investment portfolio, but it has not developed enterprisewide portfolio selection criteria. The agency also has not established procedures for evaluating the portfolio, and its postimplementation reviews do not determine whether projects meet the agency's strategic goals. (3) SSA is not applying its investment management process to a major portion of its IT budget. 
Specifically, IT products and services acquired with its acquisition budget ($610 million of the $1 billion IT budget for fiscal year 2008) are not managed by the board as investments. SSA's executive-level review board is not responsible for overseeing the acquisition budget. Consequently, executive management has limited insight into investments acquired with these funds, and the agency has limited ability to ensure that the budget is spent in the most efficient and effective manner. Until it establishes oversight of all investments and fully defines policies and procedures for overseeing both individual projects and an agencywide portfolio, SSA risks not being able to select and control these investments consistently and completely, thus increasing the chance that investments will not meet mission needs in the most cost-effective and efficient manner.
Introduction If not properly managed, agricultural production on the nation’s 382 million acres of cropland can adversely affect water and air quality, long-term soil productivity, and the availability of wildlife habitat. The Conservation Reserve Program (CRP), first enacted in 1985, was designed in part to address these problems. Under the CRP, the U.S. Department of Agriculture (USDA) entered into 10- to 15-year voluntary contracts with farmers to remove highly erodible cropland from production and establish a cover crop on it in return for annual federal rental payments. From 1986 to 1992, 36.4 million acres—almost 10 percent of the nation’s cropland—were removed from production under 375,000 CRP contracts at an estimated total outlay of $19.5 billion through 2002. In October 1995, contracts for the first 2 million acres enrolled in the CRP will expire. Contracts on approximately 22 million additional acres will expire in 1996 and 1997; the remaining contracts will expire by the end of 2002. The prospect of the return of these lands to crop production has raised several concerns, especially the loss of environmental protection afforded by the CRP. The CRP’s Goals Have Evolved The CRP’s goals have changed in response to the nation’s environmental concerns. The Congress initially authorized the CRP in the Food Security Act of 1985 and mandated USDA to retire 40 million to 45 million acres of highly erodible cropland from production by 1990 to improve the environment (focusing on reducing soil erosion), reduce excess supplies of commodities, and support farm income. On the basis of these 1985 goals, USDA enrolled nearly 34 million acres by 1990, principally in the Great Plains and Mountain states. These acres are subject primarily to erosion caused by wind rather than by water. Although both forms of erosion can result in reduced agricultural productivity, water-caused erosion generally results in greater off-site damages to water quality, recreation, and wildlife. 
To improve the environmental benefits achieved by the CRP, the Congress emphasized the program’s water quality goals when it reauthorized the CRP in the Food, Agriculture, Conservation and Trade Act of 1990. Consequently, the last 2.5 million acres that were enrolled—between 1990 and 1992—were concentrated in the Corn Belt and Lake states, where cropland is subject primarily to water-caused erosion. How the CRP Operates USDA’s Agricultural Stabilization and Conservation Service (ASCS) administers the CRP in cooperation with the Department’s Soil Conservation Service (SCS) and Extension Service, state forestry agencies, and local soil and water conservation districts. Acres enrolled in the CRP must meet certain criteria established in the legislation and through regulation. ASCS held periodic signup periods during which farmers could offer the number of acres they wished to voluntarily enroll in the CRP for a period of 10 to 15 years and their desired rental payment. CRP Enrollment Criteria Initially, only highly erodible land that had been planted for 2 of 5 years during a specified period prior to enrollment met the enrollment criteria for this program. Generally, two-thirds of the field had to be highly erodible in order for the whole field to be enrolled. The normal practice was to enroll the entire field instead of just the portion of the field that was highly erodible. Beginning with the sixth signup period in 1988, ASCS expanded enrollment criteria to allow partial-field enrollment of grass or tree strips 66 to 99 feet wide bordering waterways, without regard to erodibility. ASCS also allowed other types of partial-field enrollments, such as public water wellhead areas, beginning with the tenth signup period in 1991. Although USDA allowed some partial-field enrollments, the vast majority of CRP enrollments continued to be whole-field enrollments. 
In return for keeping land out of production, farmers receive federal rental payments on the CRP acreage and reimbursement for 50 percent of the cost they incur to establish a permanent cover crop, such as grass, on those acres. The rental payment amount is determined by the landowner’s “bid”—the amount of money the farmer is willing to accept to retire the land—provided that the bid is within established ASCS payment limits and other enrollment criteria are met. CRP Bid Process ASCS has used two different methods to evaluate bids for CRP enrollment. From 1986 to 1990, for nine signups, ASCS evaluated bids using the “maximum acceptable rental rate” method. Under this method, if the farmer’s bid was at or below the rate that ASCS established for that area and the land met the enrollment criteria described above, the acreage was enrolled. This approach met with criticism because (1) it did not target a broad range of environmentally sensitive land and (2) after the first few signups, farmers were able to determine the maximum acceptable rental rate and often submitted bids that were more than the land’s actual market rental rate but less than USDA’s maximum rate for that region. In 1990, the Congress reauthorized the CRP and directed USDA to give priority to future CRP enrollments in areas where crop production is most likely to impact water quality. This instruction caused ASCS to turn to another method for evaluating bids. ASCS compared all bids in signup periods 10 through 12 (1991 and 1992) to the market rental rate for comparable land in the same region. Bids that were less than or equal to this rate were then evaluated using a measure of environmental benefits developed by USDA known as the Environmental Benefits Index (EBI). 
This index weighs seven potential environmental and other factors associated with the land on which farmers were offering CRP bids—surface water quality, groundwater quality, soil productivity, conservation compliance assistance, tree planting, Water Quality Initiative areas, and conservation priority areas—against the federal costs of enrolling that land. Although USDA officials and environmental groups generally support the EBI as an improvement in the CRP enrollment process, they also agree that the EBI could include more environmental benefits, such as air quality and wildlife habitat. USDA is currently considering revisions to the EBI for any future land retirement program. Current Status of the CRP Currently, 36.4 million acres are enrolled in the CRP. As figure 1.1 shows, most of the CRP acres—22 million—are enrolled in the Great Plains and Mountain states. Table 1.1 shows, by region, the number of acres enrolled in the CRP and rental payments. The 22 million acres in the Great Plains and Mountain states account for 60 percent of all CRP land but only 51 percent of all CRP payments. This difference reflects the generally lower rental rates in these areas. Through 2002, the federal government will have spent an estimated $19.5 billion for the CRP—approximately $18.1 billion in rental payments and $1.4 billion in cost-share payments to establish a cover crop on CRP land. The government’s cost for the CRP is partially offset by a reduction in commodity payments that USDA would have otherwise paid on wheat, corn, barley, and other commodity acres enrolled in the CRP. A 1990 GAO report found that estimates of this offset vary depending on the assumptions made, such as the productivity of CRP land and how other acreage set-aside programs might have operated in the absence of the CRP. USDA has estimated the offsetting commodity program savings to be about 50 percent of total CRP outlays. Farmers can choose a variety of cover crops for their CRP acreage. 
Approximately 82 percent of CRP acres—30 million—have been planted in grass. These acres could be converted to crop production once the contracts expire. Another 2.4 million acres have been planted to trees, about one-half of the CRP’s tree-planting goal. These acres are less likely to return to cultivated crop production. The remaining 4 million CRP acres are devoted to other conservation practices, including wildlife ponds and food plots, landscape structures such as grassed waterways, filter strips, and windbreaks. Contracts for the first 2 million acres will expire in October 1995. However, the majority of CRP acres—22 million—will be eligible to return to production in 1996 and 1997. Figure 1.2 shows the scheduled expiration dates for the contracts by acreage and major commodity. The CRP Provides Benefits, but GAO Has Questioned Its Cost-Effectiveness The CRP has reportedly achieved substantial environmental benefits. For example, the Department of the Interior estimated that the CRP will provide a total of $13.4 billion in environmental benefits over the program’s life: $3.1 billion for water quality, $400 million for air quality, $1.3 billion in preserved soil productivity, $3.1 billion for small game hunting, $4.1 billion for nonconsumptive wildlife, and $1.4 billion for waterfowl hunting. These estimates, however, are based on a 1992 USDA estimate of soil erosion reductions on CRP land of almost 700 million tons per year. More recent USDA estimates derived from National Resources Inventory (NRI) data indicate that soil erosion has been reduced by only one-half of this estimate—about 370 million tons annually. Our reports have found that the CRP could have been more cost-effective for environmental benefits. For example, these evaluations point out that the CRP could have provided more environmental benefits for the same amount of federal expenditure if USDA had emphasized the program’s water quality goals. 
These evaluations note that USDA focused primarily on meeting mandated acreage goals that were established for each signup, to the detriment of the program’s environmental goals. Related USDA Programs In addition to the CRP, more than 20 USDA programs address the environmental impacts of crop production. About one-half of these programs were introduced in 1985 and 1990. Most of these programs are voluntary and provide technical assistance, cost-share payments, and/or incentive payments to encourage conservation practices. Both farmers who receive USDA farm program benefits and those who do not can use these programs. Appendix I lists these programs and describes their basic environmental provisions. Objectives, Scope, and Methodology Concerned about the potential adverse environmental impact from crop production on expiring CRP acres and on other cropland acres, the Chairman and Ranking Minority Member of the Senate Committee on Agriculture, Nutrition, and Forestry asked us to (1) estimate the amount, and identify the location of, CRP land and other cropland that is environmentally sensitive and should be permanently removed from crop production to achieve environmental benefits; (2) identify ways to modify the CRP to more effectively remove this land from production; and (3) identify CRP land and other cropland that is environmentally sensitive but can be protected by conservation practices and stay in production. We were also asked to describe ways the federal government can encourage the use of these practices. 
In response to the first objective, we (1) reviewed literature on agriculture’s effect on environmental quality and interviewed officials at USDA, the Environmental Protection Agency, the Fish and Wildlife Service in the Department of the Interior, representatives of farmers’ organizations, soil scientists, wildlife biologists, and agricultural economists to identify what factors determine environmental sensitivity, and (2) analyzed USDA’s NRI and CRP contract data bases, in cooperation with the SCS, to estimate the amount and identify the location of cropland that met USDA definitions of environmentally sensitive land for each factor. The NRI—a natural resource inventory sample compiled at 5-year intervals—is the federal government’s principal source of information on the status, condition, and trends of soil, water, and related resources for nonfederal lands and links this information to CRP land. Although we did not perform a reliability assessment of the NRI data base, we did review the methods used by SCS to ensure the accuracy and completeness of the data. We determined the data are reliable for our purposes. Appendix II contains confidence intervals for estimates presented in the text of this report and a statement of reliability for confidence intervals for the maps. Confidence intervals for individual hydrologic unit areas for each map have been prepared and are available upon request. To respond to the second objective, we (1) reviewed relevant literature, including contract-holder surveys; (2) interviewed USDA staff, agricultural economists, soil scientists, and representatives of farm, conservation, environmental, and wildlife organizations; and (3) analyzed the recommendations of a USDA CRP task force. We also applied the above methodologies to respond to the third objective. 
We conducted our work at USDA, the Environmental Protection Agency, and the Fish and Wildlife Service headquarters in Washington, D.C.; several USDA state and county offices; conservation and wildlife organizations' offices; several universities; and an on-farm demonstration project. We conducted our work from June 1993 through November 1994 in accordance with generally accepted government auditing standards. We obtained written agency comments on a draft of this report. USDA's comments and our evaluation of them appear in appendix IV.

Using Buffer Zones Can Reduce the Amount of Cropland Needed for Land Retirement

No comprehensive data are available to specifically identify the amount and location of CRP land and other environmentally sensitive cropland that should be removed from production for environmental benefits. However, depending on the environmental objectives established, only a small portion of CRP acres and other cropland may need to be removed from production. This is because the use of "buffer zones," as well as other conservation practices such as reduced tillage, can mitigate the environmental degradation caused by crop production. Buffer zones are small portions of land that provide a buffer between fields in crop production and the surrounding environment. For example, if buffer zones were used on CRP land and other cropland to protect surface water—one of the five environmental sensitivity factors—only about 6 million acres would need to be removed from crop production. These acres are primarily located in the Corn Belt, Lake, Delta, and Appalachian states. The amount of buffer zone acres needed for groundwater, air, and soil protection would likely be less than the amount necessary for surface water and wetlands. Buffer zones alone, however, cannot meet the needs of some wildlife species that require large blocks of their native landscape.
Therefore, if wildlife habitat enhancement is established as a major objective of a future CRP, much more land may be necessary than the amount needed for buffer zones. While the Congress is considering reauthorizing the CRP, it could consider three modifications to the CRP that would provide longer-term environmental benefits at less cost. These modifications are (1) focusing the program more on creating buffer zones rather than on retiring whole fields of cropland; (2) allowing CRP participants to generate revenues by using CRP land in ways that do not impair the environment, such as restricted haying or grazing; and (3) purchasing easements that would restrict activities on the land for a substantial period, such as 30 years or longer, for approximately the same cost to the federal government as the current 10-year contracts.

Buffer Zones Can Mitigate the Effects of Crop Production on Environmentally Sensitive Land

Land on which crop production can result in significant off-site and on-site environmental damages is considered environmentally sensitive. In identifying cropland that is environmentally sensitive, USDA and environmental group officials we spoke with agree that five factors should be examined: surface water, groundwater, air, soil, and wildlife habitat. No comprehensive data are available to examine the effect of crop production on all factors simultaneously. Therefore, we used USDA data to estimate the amount of environmentally sensitive land nationwide for each factor. These estimates cannot be totaled because they are not mutually exclusive: the same land may be sensitive to several of the factors, such as surface water, groundwater, and wildlife habitat.
Recent research by USDA, the Environmental Protection Agency, and the National Research Council shows that dedicating small portions of fields to create buffer zones—relatively small plots of land that provide a buffer between fields in crop production and the surrounding environment—can provide substantial environmental benefits without removing whole fields from production. Buffer zones include (1) filter strips—typically 100-foot-wide strips of grass and trees around rivers, streams, lakes, and wetlands that border cropland; these strips prevent the majority of agricultural pollutants from reaching the water; (2) plots of grass surrounding public water wellheads to prevent chemicals from leaching into groundwater; (3) strips of trees and bushes that decrease wind velocity to reduce wind erosion; and (4) strips of vegetative cover—"wildlife corridors"—that connect already existing wildlife habitat areas. To be most effective, buffer zones should be used in tandem with other conservation practices, such as reduced tillage, on cropland in production. For example, the National Research Council Board on Agriculture recently recommended the use of buffer zones as one component in soil and water quality improvement. In addition, USDA officials agree that removing whole fields from crop production may be justified in some limited cases when buffer zones and other conservation practices are not sufficient to mitigate the environmental effects of crop production on the field.

Buffer Zones Require That Only a Small Amount of CRP Land and Other Cropland Be Removed From Production

By using the buffer-zone approach to protect surface water and wetlands, only about 6 million acres nationwide—255,000 CRP acres and 5.5 million other cropland acres—would need to be removed from crop production. These acres would be placed in filter strips adjacent to surface water and wetlands.
Filter strips can improve the quality of (1) surface water and wetlands by removing sediment and chemicals from agricultural runoff, (2) groundwater by improving the quality of surface water that recharges groundwater aquifers, and (3) wildlife habitat for some species. For example, a USDA study found that filter strips reduce the amount of phosphorus and nitrogen that reaches surface water by 80 percent. In addition, improvements to surface water and wetlands often extend to maintaining groundwater quality because groundwater is frequently replenished by surface water and wetlands. Filter strips also provide habitat for wildlife that live near water and improve water quality for fish and other aquatic species. Figure 2.1 shows that these 6 million acres are concentrated in the Corn Belt, Lake, Delta, and Appalachian states. While the 6 million acres in filter strips to protect surface water and wetlands are relatively easy to identify, buffer zones could also be used to protect groundwater, air quality, and two types of wildlife habitat. However, while nationwide data are not available to estimate the amount of buffer zone acres necessary to protect the environment from these perspectives, USDA officials and environmental experts agree that the amount of acres needed is likely to be less than the 6 million acres needed to protect surface water because buffer zones are more appropriate for protecting surface water than for the other factors. The following describes how buffer zones could be used for the remaining environmental factors:

Groundwater. Grass buffer zones can protect areas where groundwater approaches the surface, such as where wells have been drilled for a public water supply or where groundwater is replenished with water from the surface through highly porous soils, by filtering water as it leaches from the surface into groundwater aquifers.

Air. Tree and bush buffer zones—windbreaks—can protect air quality by decreasing wind velocity, thereby reducing wind erosion.

Wildlife Habitat. Buffer zones can be used to protect the habitat of two types of wildlife. Wildlife that live near or in water would benefit from the buffer zones that improve surface water and wetlands quality. Buffer zones would also provide habitat for wildlife such as pheasants that need small, separate plots of habitat adjacent to cropland. However, buffer zones would not offer sufficient habitat for species that require large, unbroken blocks of their native landscape, such as grassland species like the prairie chicken. These species would require whole-field enrollments to provide sufficient habitat. Therefore, a mix of buffer zones and whole-field enrollments may be appropriate for a future CRP to provide benefits for a wider range of wildlife species. In this connection, three reports issued in February 1995 suggest that, if wildlife habitat enhancement is established as a major goal of a future CRP, the acreage required for whole-field enrollments could be substantial. For example, the Wildlife Management Institute recently estimated that 27 million acres of grassland are needed in the Great Plains and eastern Mountain states to achieve regional goals of stabilizing and restoring wildlife populations. These acres would provide habitat for game birds and nongame birds. The National Audubon Society report recommends that future CRP enrollments, principally in these regions, should be targeted to areas that have the highest value to wildlife, such as acres adjacent to existing wildlife resource areas.
In estimating the number of CRP acres that have the highest value to wildlife, a report by the Center for Agricultural and Rural Development states that very large wildlife benefits would likely result from converting some grassland from cropping uses but that, beyond some point, enrolling additional grassland is likely to yield significantly lower benefits. Wildlife biologists have also suggested that the CRP—using buffer zones or whole-field land retirement—could be targeted to provide habitat for threatened or endangered species. According to USDA, the habitat for 319 wildlife species is threatened or endangered because of agricultural development. These habitats are concentrated in the Southwest, Florida, southern Appalachia, and the northern Great Plains.

Modifying the CRP Could Provide Environmental Benefits for a Longer Term and at Less Cost

Changes to the CRP—focusing on using buffer zones, allowing alternative economic uses, and purchasing long-term easements restricting certain activities on the land—could make the program less costly to the federal government while providing longer-term environmental benefits.

Buffer Zones Could Provide Environmental Benefits at Less Cost

USDA officials and other agricultural and environmental experts have recommended the use of buffer zones as one method to protect the environment while reducing the costs of the CRP. That is, the program could be modified to focus primarily on removing buffer zones from production, rather than whole fields. This program would be smaller—and therefore less costly to the federal government—than the current program. For example, if the approximately 6 million acres identified as appropriate for filter strips to protect surface water and wetlands were enrolled in the CRP at the current average rental rate of $66 per acre for the regions where this acreage is located, total rental payments would be $396 million per year rather than $1.8 billion for the current 36.4 million CRP acres.
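The rental-cost arithmetic above can be checked with a short calculation. The acreage, rental rate, and program totals are the report's figures; the implied nationwide average rate is our own back-of-the-envelope derivation, not a GAO number.

```python
# Sketch checking the report's rental-cost comparison. The implied
# nationwide average rate (total CRP rent / total CRP acres) is our
# derivation; the $66/acre figure applies only to the affected regions.

FILTER_STRIP_ACRES = 6_000_000       # acres needed for filter strips
REGIONAL_RATE = 66                   # $/acre/year in those regions
CURRENT_CRP_ACRES = 36_400_000       # total current CRP enrollment
CURRENT_ANNUAL_COST = 1_800_000_000  # current annual rental payments, $

buffer_zone_cost = FILTER_STRIP_ACRES * REGIONAL_RATE
implied_nationwide_rate = CURRENT_ANNUAL_COST / CURRENT_CRP_ACRES

print(f"Buffer-zone program: ${buffer_zone_cost / 1e6:.0f} million per year")
print(f"Implied nationwide average rate: ${implied_nationwide_rate:.0f}/acre")
```

The calculation reproduces the report's $396 million figure; the implied nationwide rate (about $49 per acre) is lower than the $66 regional rate because rental rates vary by region.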
At the same time, more land would be available for production.

Alternative Economic Uses Would Reduce the CRP's Cost Per Acre

USDA could reduce the federal per-acre cost of the CRP by allowing CRP participants to generate revenues on CRP land in ways that do not impair the environment, such as harvesting hay at certain times of the year in exchange for reduced CRP payments. Currently, CRP participants are allowed to cut hay or graze cattle on CRP land only during emergency periods as declared by the Secretary of Agriculture. A House bill (H.R. 3894) introduced in February 1994 proposed allowing limited uses—haying, grazing, producing seeds, and harvesting grass or trees for biomass fuel—in exchange for a 20-percent or greater reduction in current rental rates. Limitations would be placed on these activities to ensure that environmental problems are minimized. While some of these activities could be conducted on buffer zones, others—such as grazing—would generally require larger plots of land. In addition, USDA officials noted in comments on a draft of this report that allowing alternative economic uses may meet opposition from producer groups because it would negatively affect the livestock and forage markets if a large number of CRP participants chose this option. This proposal encourages CRP participants to convert their CRP land to uses other than cropping. The Congress is considering offering current contract holders this option to save money on current contracts and to encourage them to experiment with new uses of the land.

Easements Offer Long-Term Protection

Through the purchase of easements from farmers, who agree to restrictions on the use of their land, the government can ensure that land will stay out of production for longer than 10 years. Easements offer a better guarantee of long-term protection because they are an interest in the land itself and typically are for a substantial duration (such as from 10 years to in perpetuity).
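As a rough check on the contract-versus-easement cost comparison that follows, the sketch below discounts 30 years of rental payments and compares the result with a one-time easement purchase. The 5 percent real discount rate is our assumption (the report does not state the rate behind its 1994-dollar figures), and the acreage is rounded to 6 million, so the totals only approximate the report's $5.9 billion and $3.1 billion estimates.

```python
# Hedged present-value sketch of easements versus 10-year contracts.
# Acreage and per-acre prices are from the report; the 5% real discount
# rate is an assumption, so totals approximate the report's figures.

ACRES = 6_000_000          # acres identified for filter strips
RENT = 66                  # average annual rental rate, $/acre
EASEMENT_PRICE = 620       # one-time easement payment, $/acre
RATE = 0.05                # assumed real discount rate
YEARS = 30                 # three successive 10-year contracts

# Present value of a $1-per-year annuity over YEARS at RATE
annuity = (1 - (1 + RATE) ** -YEARS) / RATE
contract_pv = ACRES * RENT * annuity    # discounted rental stream
easement_cost = ACRES * EASEMENT_PRICE  # paid once, up front

print(f"Contracts (PV of 30 years of rent): ${contract_pv / 1e9:.1f} billion")
print(f"Easements (one-time purchase):      ${easement_cost / 1e9:.1f} billion")
```

Under these assumptions the one-time easement purchase costs roughly three-fifths of the discounted contract stream, consistent with the report's conclusion that easements are the cheaper route to 30 years of protection.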
Because easements are recorded on the title to the land and are binding on subsequent owners, they can ensure that the restrictions on the land will be honored even if the land is sold. In addition, easements can cost the government less than three 10-year contract renewals. For example, if the approximately 6 million acres identified as appropriate for filter strips were enrolled in easements at $620 per acre, total program costs would be $3.1 billion in 1994 dollars, compared to current program costs of $18.1 billion for rental payments. Alternatively, if that land were enrolled in 10-year contracts at the current average rental rate of $66 per acre for the regions where these acres are primarily located, total program costs for 30 years would be $5.9 billion in 1994 dollars. Beginning with the tenth signup period in 1991, CRP participants had the option of either contracts or easements and overwhelmingly opted for contracts because they were reluctant to restrict the use of their land for a long term. Approximately 10,000 acres—less than 0.5 percent of the CRP enrollment in signups 10 through 12—were enrolled as easements. In contrast, USDA offered easements, but not contracts, to farmers through the Wetlands Reserve Program, and farmers more willingly accepted easements. For example, in the Wetlands Reserve Program pilot in 1992, farmers submitted bids for nearly 250,000 acres even though USDA could accept only 50,000 acres. Given the attractiveness of contracts over easements when both are offered, USDA officials believe that easements are viable only if contracts are not offered simultaneously.

Conclusions

Only a small amount of total cropland nationwide may need to be removed from crop production to protect the environment. Environmental degradation on this small amount of cropland can be managed by establishing buffer zones instead of removing entire fields from production.
Under the buffer-zone approach, only 6 million acres of cropland would need to be removed from production and placed in buffer zones to protect surface water and wetlands. The buffer-zone approach can also be used to protect groundwater, air, and some wildlife habitat and is more efficient and less costly to the government because it allows more cropland to remain in production. However, this approach would probably not provide for the habitat needs of all wildlife species. Therefore, if wildlife habitat enhancement is established as a major objective, a future CRP could require more acreage than that needed for buffer zones. Also, a buffer-zone-oriented CRP would tend to put more land back in production and, depending on farm prices, could reduce farm income for CRP participants. Accordingly, this approach would not help achieve the current CRP's supply control and farm income objectives. In addition, modifying the CRP could reduce federal costs and increase the amount of time the land is protected by allowing CRP participants to engage in limited uses of the CRP land for a reduced federal payment and by encouraging the use of long-term easements instead of 10-year contracts.

Matters for Congressional Consideration

As the Congress debates the reauthorization of the farm bill in 1995 and contemplates the future environmental objectives of the CRP, it could consider modifying the CRP to (1) focus more on creating buffer zones where appropriate instead of removing whole fields from crop production, (2) allow alternative economic uses on CRP land, and (3) use long-term easements instead of 10-year contracts for any new CRP enrollments.

Agency Comments and Our Response

In responding to a draft of this report, USDA said that the report focused on the CRP's environmental objective and did not address the CRP's supply control and farm income objectives.
We agree that this report focuses on the potential adverse environmental impact of CRP land returning to production because this was the issue the requesters asked us to address. However, we were not silent on other issues. Because we recognized that the CRP was also intended to reduce surplus crop production and support farm income, we summarized the results of six economic studies that estimate the impact of returning CRP land to production on these two objectives. (See ch. 3 and app. III.) These studies generally concluded that federal outlays for commodity program payments will increase but will not exceed current CRP payments. In addition, some studies concluded that farm program adjustments, as well as market adjustments, would mitigate the impact of lower farm prices. In addition, USDA said that the Secretary of Agriculture’s December 1994 announcement of planned CRP modifications will address many of the issues discussed in the report. USDA’s actions include modifying and extending existing contracts to target environmentally sensitive land, adjusting rental rates to more accurately reflect local prevailing rental rates, and encouraging the establishment of long-term easements on CRP land. USDA also stated that a future CRP should include a mix of buffer zones and whole-field enrollments to ensure flexibility. We agree that these modifications are steps in the right direction and will improve the environmental benefits and the cost-effectiveness achieved from new CRP enrollments. These steps will not, however, make the program as cost-effective as possible because USDA will still allow current CRP land that could return to crop production without harming the environment to remain in the program. As discussed in chapter 3, most CRP land can return to production with minimal impact on water, air, and soil quality if farmers use appropriate conservation practices. USDA also made three additional comments related to our matters for congressional consideration. 
USDA asserted that (1) long-term easements are more costly to the federal government than 10-year contracts, (2) easements will be less attractive to farmers than 10-year contracts, and (3) allowing alternative economic uses on CRP land may meet strong opposition from certain producer groups. Regarding the first issue, USDA focused on an example of easements in our report and asserted that the easement price was too low. In our draft report, we recognized that easement prices are likely to vary between geographic regions and soil types. In preparing our cost analysis, we compared expected CRP costs to an estimate of what easement prices might be. Our easement price estimate was based on the Wetlands Reserve Program—the only large-scale USDA land retirement program that purchases both partial and whole-field easements, rather than 10-year contracts. This price—$620 per acre—is actually higher than the average expected easement price of $583 quoted in USDA's comments. Even using the higher price estimate, our example shows that if 6 million CRP acres were enrolled in 30-year easements rather than three 10-year contracts, total program costs would be $3.1 billion—53 percent of the cost of contracts. Regarding the second issue, we found that easements are generally less attractive to farmers when 10-year contracts are offered simultaneously. Not surprisingly, when farmers are given a choice between higher government payments through 10-year contracts and lower payments through easements, they choose to receive the higher payments. When only easements are offered, farmer acceptance is much better. For example, our draft report cited the Wetlands Reserve Program pilot in which farmers submitted bids for five times the amount of acreage that was authorized, even though easements were the only option available to farmers.
Concerning the final issue, we agree that some producer groups may oppose allowing alternative economic uses on CRP land because they believe that it would negatively affect the livestock and forage markets if a large number of CRP participants chose this option. However, because of the potential federal cost savings and the sensitivity of this issue, we believe that it deserves congressional consideration during the 1995 farm bill deliberations. We made minor revisions to our final report to address USDA's comments. None of the revisions changed the message of the report or our matters for congressional consideration. USDA's comments and our evaluation of them are included as appendix IV.

Appropriate Conservation Practices for Cropland in Production Can Be Pursued Through Current Programs or New Proposals

Except for buffer zones, most CRP land and other environmentally sensitive cropland can generally be in agricultural production without seriously harming water, air, and soil quality if farmers use appropriate conservation practices such as correct chemical application, reduced tillage, and periodic rotations to cover crops. Our analysis focused on the most environmentally sensitive cropland. In the absence of appropriate conservation practices, production on such land could result in serious environmental degradation. Appropriate conservation practices can often be achieved through USDA's regulatory and voluntary programs, which cost the federal government less per acre than the current CRP. With or without modifications, these programs should ensure that cropland in production will not return to pre-CRP conditions of environmental degradation. In addition, new proposals called green payments could be used to promote greater use of appropriate conservation practices.
Although this report is focused on the CRP's environmental objective, we recognize that if CRP land returns to production it may affect the program's other two objectives—reducing surplus crop production and supporting farm income. Therefore, we examined six economic studies that estimate the impact on these objectives. (See app. III.) Most studies found that, in the short term, CRP acres returning to production may increase crop supplies, thereby causing lower farm prices and income. However, the studies also found that these effects are likely to be mitigated by adjustments in federal programs and the market.

Millions of Environmentally Sensitive Acres Can Be Farmed Under Conservation Practices

Millions of CRP acres and other cropland acres nationwide that are environmentally sensitive can be in production with the use of appropriate conservation practices, such as reduced tillage, appropriate chemical application, and periodic rotations to cover crops. The following presents our estimate of the amount and location of these acres for each environmental sensitivity factor and describes USDA-recommended conservation practices to mitigate the impact of agricultural production on these environmentally sensitive acres. These estimates cannot be totaled because they are not mutually exclusive (the same land may be sensitive to several of the five factors).

Surface Water and Wetlands. Approximately 10 million CRP and other cropland acres—primarily in the Corn Belt and Appalachian states—are extremely erodible and between 100 and 500 feet from surface water or wetlands. (See fig. 3.1.) According to USDA, these acres have the highest potential to contaminate surface water and wetlands through erosion caused by rainwater and the resulting runoff of sediment and chemicals. Approximately 1 million of these acres are currently in the CRP.
Conservation practices that can mitigate this erosion include reduced or no tillage, periodic rotation to cover crops, and conservation structures, such as terraces.

Groundwater. Approximately 149 million acres of farmland nationwide—concentrated in the Corn Belt, Lake, and Eastern states—are most likely to contaminate groundwater because of the leaching of agricultural pesticides. (See fig. 3.2.) According to a USDA index of groundwater vulnerability, these acres have the highest potential for contaminating groundwater because they have highly leachable soils and/or are subject to chemical application. Of this national total, approximately 8 million are in the CRP. Proper nutrient, pesticide, and herbicide applications and crop rotations can significantly abate the potential for groundwater contamination.

Air. Approximately 19 million CRP acres and other cropland acres nationwide—concentrated in the Great Plains and Mountain states—have the highest potential to decrease air quality through wind erosion. (See fig. 3.3.) Approximately 6.7 million of these acres are enrolled in the CRP. Conservation practices such as crop rotations and reduced or no tillage can reduce the potential for wind erosion.

Soil. Approximately 50 million CRP acres and other cropland acres are least able to sustain soil productivity, according to USDA's erodibility index. This index, a commonly used measure of soil productivity, compares the amount of potential wind- or water-caused erosion with the amount of erosion the soil will tolerate. (See fig. 3.4.) About 8 million CRP acres are included in this estimate. Conservation practices that help sustain soil productivity include crop rotations, reduced tillage, and appropriate chemical application. A different soil indicator—the land capability class—measures a field's suitability for crop production on a scale of 1 through 8, with 8 being the least suitable for crop production.
The land capability class was one of the measures used to determine eligibility for CRP enrollment. Approximately 24 million CRP acres and other cropland acres nationwide, concentrated in the Great Plains and the Midwest, have the least suitable soil for crop production, according to this index. (See fig. 3.5.) Approximately 4 million of these acres are in the CRP. While proper soil management techniques, including multiyear cover crop rotations, can enable some of these acres to sustain crop production, other acres may be best suited for rangeland or pastureland rather than cropland. While the erodibility index and the land capability class are traditional USDA measures of soil productivity, soil scientists generally agree that more complete measures of a soil's overall quality are needed. In addition to productivity, soil quality measures would include texture, density, the ability to absorb chemicals, and the ability to retain water. USDA is currently developing soil quality measures that will examine the effects of long-term crop production on these characteristics.

Wildlife Habitat. Crop production improves the habitat for some wildlife species while adversely affecting others. Therefore, it is difficult to estimate the amount and location of environmentally sensitive cropland for this factor. Wildlife biologists agree that the effects of production on wildlife can be mitigated through the use of conservation practices such as periodic rotations of cover crops, proper cover crop management on yearly set-aside acres, and greater use of multiyear set-aside acres. However, of the five environmental factors, damage to wildlife habitat is the most difficult to mitigate while leaving the land in production.
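The erodibility index discussed above compares potential erosion with the amount of erosion the soil will tolerate (its "T" value). As a rough illustration, the sketch below computes that ratio for a hypothetical field; the numeric inputs are our assumptions, and USDA derives the actual values from soil-survey factors.

```python
# Illustrative sketch of USDA's erodibility index, the ratio of potential
# annual erosion to tolerable annual soil loss ("T"). The sample values
# below are hypothetical, not figures from the report.

def erodibility_index(potential_erosion_tons: float, tolerance_t_tons: float) -> float:
    """Ratio of potential annual erosion to tolerable annual soil loss."""
    return potential_erosion_tons / tolerance_t_tons

# Hypothetical field: 40 tons/acre/year potential erosion against a T of 5
ei = erodibility_index(40, 5)
print(f"Erodibility index: {ei:.0f}")  # higher values = less able to sustain productivity
```

A field whose potential erosion far exceeds its T value ranks among the acres "least able to sustain soil productivity" in the report's terms; one eroding at or below T would rank near the bottom of the index.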
USDA Programs Foster Conservation Practices and Could Be Strengthened

For those acres requiring conservation practices, USDA conservation programs that currently require or encourage the use of such practices can prevent a return to pre-CRP environmental conditions or could be strengthened to increase environmental benefits. For example, one program—conservation compliance—requires farmers who want to receive USDA program benefits to use appropriate erosion control practices. This program could cover nearly 65 percent of CRP land if farmers wish to return this land to production and receive program benefits. Alternatively, tightening the erosion control standards could further reduce erosion.

USDA Regulatory and Voluntary Conservation Programs Can Prevent a Return to Pre-CRP Environmental Conditions

USDA's regulatory conservation programs—the conservation compliance program and the swampbuster program—can ensure that environmental degradation from crop production will not return to pre-CRP levels if farmers wish to continue receiving USDA program benefits. The conservation compliance program—enacted in 1985—requires farmers to implement plans to reduce soil erosion on highly erodible cropland. These plans will be required on approximately 22 million CRP acres—65 percent of all CRP acres—if those acres return to crop production. The plans have already been implemented on over 100 million highly erodible acres currently in crop production. The swampbuster program—also enacted in 1985—prevents the conversion of wetlands to new cropland. Approximately 667,000 CRP acres are wetlands and could be subject to swampbuster; another 16 million acres of wetlands on other cropland were also subject to swampbuster as of March 1994. In addition, as shown in appendix I, 17 voluntary USDA environmental programs could reduce the impact of returning CRP land to production, at a lower cost per acre than the CRP.
These programs generally provide technical assistance, cost-sharing, and/or incentive payments to farmers to establish conservation structures or conservation practices. For example, the Water Quality Incentives Program provides incentive payments to farmers for 3 years to encourage the adoption of water quality management practices. Another program—the Agricultural Conservation Program—provides financial assistance for approved conservation and environmental protection practices. Currently, USDA expenditures for these programs, including expenditures for the conservation compliance and swampbuster programs, are less than expenditures for the CRP. (See fig. 3.6.)

Strengthening Some Current USDA Programs Could Improve Environmental Benefits

Strengthening the environmental requirements for some current USDA programs could provide greater environmental protection. While examining the entire spectrum of USDA programs could lead to potential improvements, policymakers, USDA officials, and environmental groups have discussed the following modifications:

Conservation Compliance. Tightening the soil erosion tolerance standard in conservation compliance plans could further reduce erosion. For example, some environmental groups and Environmental Protection Agency and USDA officials have suggested that farmers should be required to reduce erosion to "T"—the maximum soil erosion that can occur while maintaining soil productivity. In addition, broadening conservation compliance plans to more explicitly include water quality impacts could lessen the off-site impact of erosion. For example, a field may not have a soil erosion level high enough to fall under current compliance standards, yet may be polluting a nearby river even with a relatively low erosion level. The field, therefore, could be subject to appropriate erosion control practices.

Acreage Reduction Program and 0/50/85 Program.
Improving cover crop requirements for programs that idle a specified number of base acres annually—the Acreage Reduction Program and the 0/50/85 program—could improve environmental benefits on these acres. Currently, a cover crop is not always required on idled acres or, if required, falls short of potential environmental benefits. Requiring improved cover crop standards could reduce erosion and provide wildlife habitat and still leave the idled acreage in good condition for subsequent cropping. Additionally, encouraging the use of multiyear planning would keep the same acreage idled for more than 1 year, thereby improving environmental benefits on that acreage, particularly for wildlife.

Base Acres. Allowing farmers with environmentally sensitive base acres—acres for which they are entitled to receive USDA payments based on the amount of crops they produce—to sell base-acre rights to another farmer with less sensitive land could reduce the incentive to farm environmentally sensitive land. For example, under a House bill (H.R. 3894) introduced in February 1994, CRP contract holders would be allowed to offer their CRP base-acre rights for lease or sale to producers for use on cropland in the same or an adjacent county in exchange for maintaining the land in permanent cover. However, the USDA CRP task force believes that the administrative costs of such a program would be substantial.

Green Payments Can Encourage Farmers to Make Greater Use of Appropriate Conservation Practices

Recent proposals called green payments—incentives to farmers to adopt appropriate conservation practices—suggest that environmental benefits can be increased above the level of current conservation programs. These incentive payments would augment current price and income support programs that are primarily focused on production objectives.
While some current conservation programs, such as the Agricultural Conservation Program, could continue to assist farmers in meeting conservation goals, green payments would be available for a broader set of conservation practices, such as fencing off streams from livestock. For example, under one green payments approach, farmers could maximize federal support for agricultural production by participating in two programs. The first program would be similar to current price and income support programs that primarily pay farmers based on the amount of production. Farmers would be eligible for support payments at 80 to 90 percent of the current level as well as other USDA support programs in exchange for meeting minimum conservation compliance standards. Whether or not they participate in this program, farmers would also be eligible for a separate green payments program that pays farmers if they use additional conservation practices beyond the minimum conservation compliance standards. While the green payments concept is still in its formative stages, agriculture and environment researchers we spoke with agree that a green payments program should (1) consider the impact of crop production on the whole farm as well as the watershed, (2) allow state and local representatives to identify problems and allocate resources, (3) complement a regulatory approach, and (4) not be linked to participation in other USDA programs.

Consider the impact on the whole farm and watershed. Whole-farm planning involves identifying pollution sources and developing plans to implement appropriate conservation practices uniquely tailored to fit each farm’s topographical conditions and business practices. However, even with whole-farm planning, addressing conservation problems on a farm-by-farm basis does not sufficiently address the environmental problems within an entire watershed.
Through watershed planning, USDA can more efficiently set conservation priorities and target technical and financial assistance to the areas with the greatest need.

Allow state and local representatives to identify problems and allocate resources. Since environmental problems differ among regions, USDA officials and agriculture and environmental group representatives generally agree that local representatives may be in a better position to identify and set priorities on environmental issues and to develop site-specific plans for addressing them.

Complement a regulatory approach. The voluntary incentives should complement mandated conservation practices. According to a 1993 report by the National Research Council, the voluntary approach is most effective when the conservation practice to be implemented is also profitable to farmers. Regulatory requirements can be used to achieve a threshold level of environmental protection; voluntary incentive payments can then be used to assist farmers in achieving higher levels of protection.

Not be linked to participation in other USDA programs. Agricultural and environmental researchers we spoke with said that participation in a green payments program should not be linked to participation in other USDA programs because the most environmentally sensitive land may not be covered by these other programs. Current agricultural support programs—deficiency payments, crop insurance, disaster payments, and loans—are not necessarily targeted to areas with the greatest environmental problems. Therefore, conservation efforts that are linked only to current programs may not address critical environmental concerns.

CRP Acres Returning to Production May Impact Farm Prices and Income

We recognize that CRP acreage returning to production may result in surplus crop production and affect farm prices and income.
Therefore, we examined several economic studies that estimate this impact. (See app. III.) While the degree of impact depends on such assumptions as agronomic conditions, market conditions, and public policy decisions, most studies found that CRP acres returning to production may lower farm prices and increase federal commodity program payments. However, total government outlays for commodity program payments probably will be less than the level of current CRP payments, resulting in net government savings. Furthermore, some studies concluded that farm program adjustments, as well as market adjustments, would mitigate the impact of lower farm prices.

Conclusions

Except for acres in buffer zones, most CRP acres and other environmentally sensitive cropland can stay in production without significantly impairing the environment if farmers use appropriate conservation practices. USDA’s regulatory and voluntary conservation programs, as currently structured or with strengthened environmental objectives, encourage the use of these practices. Therefore, even if the Congress allows the CRP to expire, available programs may prevent a return to the environmental problems that existed before 1985. A green payments approach offers the potential to further emphasize conservation objectives in agricultural production.

Agency Comments and Our Response

In responding to a draft of this report, USDA said that, until the green payments concept is more fully developed, it is impossible to determine whether green payments would be a viable alternative in accomplishing those objectives currently being met by the CRP. We included a discussion of green payments because our requesters specifically asked us to provide this information. We agree that this concept needs to be more fully developed, and our report states that the green payments concept is still in its formative stages.
This concept is being explored to promote greater use of appropriate conservation practices that could include land retirement but would also include practices for lands in crop production.
Pursuant to a congressional request, GAO: (1) estimated the amount and locations of land enrolled in the Conservation Reserve Program (CRP) and other environmentally sensitive cropland that should be removed from production; and (2) provided information on alternatives for managing these lands. GAO found that: (1) it could not precisely identify the amount of CRP and other environmentally sensitive cropland that should be kept out of production; (2) there are about 36.4 million acres of land enrolled in CRP, but by using buffer zones and other conservation practices, this amount could be reduced substantially; (3) reducing the amount of land enrolled in CRP would reduce federal costs; (4) allowing farmers to earn revenue from environmentally compatible uses of CRP land would also reduce federal costs; (5) CRP benefits would last longer if the program used easements to restrict land use for longer periods than the 10-year contracts CRP presently uses; (6) except for buffer zones, most CRP and other environmentally sensitive cropland can be in production without serious environmental consequences if farmers practice appropriate conservation measures; and (7) environmental benefits could also be increased through incentive payments to farmers to encourage them to adopt conservation practices.
Background

The MAF is a data file that contains a list of all known living quarters in the United States and Puerto Rico. The Bureau uses the MAF to support the decennial census as well as the American Community Survey and other ongoing demographic surveys. The MAF contains address information, census geographic location codes, and source and history data. In conjunction with the MAF, the Topologically Integrated Geographic Encoding and Referencing (TIGER) database contains spatial geographic information that allows information from the MAF to be mapped. For the 2010 Census, the Bureau updated the MAF through a complete address canvassing that verified virtually every existing MAF address, added new addresses, and deleted those that no longer existed. While full address canvassing helped ensure the accuracy of the address list, we believe it was also very costly. According to Bureau decision documents leading up to the 2010 Census, when prioritizing its research agenda early in the decade given its funding levels, the Bureau canceled planned research on the feasibility of targeting its canvassing. As part of the Bureau’s effort to conduct the 2020 Census at a cost lower than the 2010 Census, the Bureau is researching the feasibility of conducting targeted address canvassing: verifying addresses in only select areas that are more likely to require updates to the address list. To support targeted address canvassing, the Bureau plans to increase its reliance on other previously used sources of updates, including U.S. Postal Service files, commercial database files, and significant input from state and local governments. For example, the Geographic Support System Initiative (GSS-I) is working to allow government agencies at all levels to more regularly share and update their address lists with the Bureau throughout the decade (rather than solely 2 years prior to the decennial, as had been the case in prior decennial censuses) so that fewer areas need to be fully canvassed. The life cycle for 2020 Census preparation is divided into five phases, as illustrated in figure 1.
The Bureau intends to use the early research and testing phase, through fiscal year 2015, to develop a proposal for conducting targeted address canvassing that considers both cost and quality implications. By the end of the early research and testing phase, the Bureau plans to complete decisions about preliminary operational designs rather than continuing critical research and testing until the end of the decade, as it did for the 2010 Census.

Because Schedules Are Not Reliable, Management Lacks Valid Information to Assess Progress and Manage Risk

The Bureau faces legally mandated deadlines for delivering census tabulations. Effective scheduling is critical for ensuring that the Bureau adheres to a timeline that meets these deadlines. The Bureau relies on schedules to help monitor the progress of its many interdependent activities. The schedules are essential to help manage the risks to preparing and implementing a successful decennial census. Certain dates within the schedule could be subject to change, or activities may be canceled, as a result of time or budget constraints. As dates change from the original schedule or significant changes are made to the work planned, risk can increase because the Bureau may have less time than originally planned to complete future activities in time to make decisions needed to execute the 2020 Census. We determined that a schedule not only provides a road map for systematic execution of a program but also provides a means by which to gauge progress, identify and address potential problems, and promote accountability. In the GAO Schedule Assessment Guide, we identified four characteristics of a reliable schedule. A schedule should be:

Comprehensive: The schedule should identify all activities and resources necessary to accomplish the project. The schedule should cover the scope of work to be performed so that the full picture is available to managers.
Well constructed: Activities should be logically sequenced, and critical activities that would affect the timelines of the schedule should be identified.

Credible: All schedules should be linked to a complete master schedule for managers to reference, and the schedule should be analyzed for how risks affect its outcome.

Controlled: There should be a documented process for changes to the schedule so that the integrity of the schedule is assured.

For a schedule to be reliable, it must substantially or fully meet all criteria for these four characteristics. These characteristics and their criteria are described in more detail in appendix II. We found that the Bureau’s 2020 Research and Testing and GSS-I schedules exhibit some of the characteristics of a reliable schedule, yet important weaknesses remain. Each of the schedules substantially met one of the four characteristics (controlled) and minimally or partially met the other three characteristics (comprehensive, well constructed, and credible) (see table 1). Examples of the extent to which these characteristics were met are provided below. For a more detailed explanation, see appendix III.

Comprehensive–Partially Met

The Bureau is using a work breakdown structure to guide the activities of the 2020 Research and Testing and GSS-I schedules. A work breakdown structure defines in detail the work necessary to accomplish a program’s objectives. However, not all activities listed in the work breakdown structure are included in the 2020 Research and Testing and GSS-I schedules. For example, in the GSS-I schedule, 20 of the 28 projects have very few activities in them, indicating a lack of detail. Additionally, two MAF-related projects in the 2020 Research and Testing schedule, the MAF Business Rules Improvement project and the Frame Extract Evaluation project, do not have activities assigned to them.
If research activities—or any other activities relevant to developing the MAF—are not listed in a schedule, managers may not be able to readily identify causes of delay. According to the Bureau, the schedules are still evolving, these two projects have not yet started or been staffed, and activities and detail will continue to be added to the schedule. For both schedules, the Bureau appeared to record reasonable durations for most activities, helping to ensure that managers can understand the time activities are expected to take and can hold staff who are executing these activities accountable for meeting deadlines. However, neither schedule included information about what levels of resources are required to complete the planned work. Information on resource needs and availability in each work period assists with forecasting the likelihood that activities will be completed as scheduled. Bureau officials stated that they hope to begin the exercise of identifying the resources needed for each activity in both schedules by early 2014 and are waiting for decisions and guidance from the Bureau’s effort to standardize cost estimation practices enterprise-wide. In 2012 we recommended, and the Bureau agreed, that the Bureau establish and communicate a timeline for all enterprise activity so that decennial managers can plan accordingly. However, the Bureau has not yet produced this timeline.

Well Constructed–Minimally Met

In both of the schedules, the Bureau logically linked many of the activities in a sequence. This helps staff identify next steps as they progress through MAF development activities and helps managers identify the impact of changes in one activity on subsequent activities. Yet in both schedules, the Bureau did not identify the preceding and following activity for a number of activities (20 percent for the GSS-I schedule and 9 percent for the 2020 Research and Testing schedule). Scheduling staff were unable to explain why this information was missing.
Without this logic, the effect of a change in one activity on future activities cannot be seen in the schedule. For example, in the GSS-I schedule, the “delivery of the targeted address canvassing recommendation report” to managers has no predecessor. According to the Bureau, this report is to outline the research findings, impacts, operational considerations, and benefits of conducting a targeted address canvassing. For those activities that lack predecessors in the schedule, the real effects of changes or delays in preceding activities would not be visible in the schedule, potentially resulting in unforeseen delays in the recommendation report. The Bureau used a large number of constraints which, if used inappropriately, can affect the reliability of the schedule. Activities for which constraints would be justified are Census Day and the delivery of the Apportionment Count, because they have legally mandated deadlines. But, for example, the Bureau also placed a constraint on the delivery of a draft targeted address canvassing report. Such an activity would likely not need a constraint because delays in preceding activities could affect the actual timing of the delivery of the draft report. Placing a constraint on this type of activity would mask in the schedule the effects of any changes or delays that would affect the true delivery date. While our schedule guide states that documenting the justification for constraints is important, the Bureau has not provided justifications in the schedule for its use of constraints. The Bureau told us that justifications are in meeting notes and e-mails, rather than the schedule. If this information is not included in the schedule, the justification for constraints remains unclear to those who did not have access to the meeting notes or e-mails. Also, leaving constraints within the schedule beyond when the schedule is being tested can make the schedule unreliable for other purposes. 
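The gaps described above, activities missing network logic and date constraints without documented justification, are the kind of defects that can be detected mechanically from a schedule export. The following is a minimal sketch of such a check; the four-activity schedule, its field names, and the "legal deadline" justification are entirely hypothetical, not the Bureau's actual data.

```python
# Sketch of an automated schedule check that flags the gaps discussed
# above: activities with no predecessor or successor (missing logic)
# and date constraints that lack a documented justification.
# The schedule below is hypothetical.

schedule = [
    {"id": "A", "preds": [],    "constraint": None,         "why": None},
    {"id": "B", "preds": ["A"], "constraint": None,         "why": None},
    {"id": "C", "preds": [],    "constraint": "2014-09-30", "why": None},
    {"id": "D", "preds": ["B"], "constraint": "2014-04-01", "why": "legal deadline"},
]

def audit(schedule, start="A", finish="D"):
    """Return (activity id, finding) pairs for logic and constraint gaps."""
    # Every id that appears in some predecessor list has a successor.
    has_successor = {p for act in schedule for p in act["preds"]}
    findings = []
    for act in schedule:
        if not act["preds"] and act["id"] != start:
            findings.append((act["id"], "no predecessor"))
        if act["id"] not in has_successor and act["id"] != finish:
            findings.append((act["id"], "no successor"))
        if act["constraint"] and not act["why"]:
            findings.append((act["id"], "unjustified constraint"))
    return findings

# Activity C dangles at both ends and carries an unexplained date constraint.
print(audit(schedule))
```

Real scheduling tools expose the same information through their exports; a report like this makes missing logic and undocumented constraints visible before they distort downstream calculations.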
Additionally, inappropriately used constraints make it difficult to identify the schedules’ “critical path”—the sequence of steps needed to achieve the end goal that, if they slip, could negatively affect the overall project completion date. The absence of a critical path or a poorly constructed one calls into question the reliability of the calculated schedule dates, such as estimates of when research results will be available. When certain constraints are placed on an activity, this can automatically trigger the schedule software to place an activity on the calculated critical path when it might otherwise not be. Because the Bureau used so many constraints and the schedule is missing logic about preceding and following activities, it is possible that the calculated critical path includes activities that are not necessarily germane to the true critical path. Eliminating the unnecessary constraints and including additional logic would provide a more accurate picture of the degree of criticality in the schedule. Until the Bureau can produce a true critical path, it will not be able to provide reliable timeline estimates of effects of schedule changes. This undermines the Bureau’s ability to focus on activities that will have detrimental effects on the progress of designing targeted address canvassing and other 2020 Census decisions. Finally, a critical path with so many activities appearing on it is not useful to managers in identifying what is truly necessary to develop the MAF in a timely manner. For example, within the 2020 Research and Testing schedule, 52 percent of activities not yet completed appear on the calculated critical path. Similarly, for the GSS-I schedule, 19 percent of the activities appear on the calculated critical path, almost half of which could be because constraints are placed on them. 
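The critical path itself comes from a standard computation over the schedule's network logic: a forward pass assigns each activity its earliest finish, a backward pass its latest finish, and activities with zero float between the two are critical. A minimal sketch, using a hypothetical four-activity network with illustrative durations (not the Bureau's schedule):

```python
# Minimal critical path method (CPM) sketch for a hypothetical
# four-activity network. Durations are in days.
durations = {"A": 5, "B": 10, "C": 3, "D": 4}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]          # topological order

# Forward pass: earliest finish of each activity.
early = {}
for act in order:
    start = max((early[p] for p in preds[act]), default=0)
    early[act] = start + durations[act]
project_finish = max(early.values())

# Backward pass: latest finish that does not delay the project.
succs = {a: [b for b in order if a in preds[b]] for a in order}
late = {}
for act in reversed(order):
    late[act] = min((late[s] - durations[s] for s in succs[act]),
                    default=project_finish)

# Zero total float (latest finish equals earliest finish) marks criticality.
critical = [a for a in order if late[a] == early[a]]
print(project_finish, critical)   # 19 days; A -> B -> D is critical
```

Here activity C carries 7 days of float and stays off the critical path. A hard date constraint pinned to C could zero out that float in scheduling software and make C appear critical, which illustrates how unnecessary constraints can distort the calculated path described above.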
Such a large share of activities appearing on the critical path can reduce the flexibility managers have to complete activities in parallel with each other or to reallocate resources when the same resource is needed for multiple activities on the path.

Credible–Minimally Met

The schedules have shortcomings in (1) their integration into management reporting and (2) their ability to update automatically as activities within the schedule change. First, management documents from the 2020 Research and Planning Office indicate the Bureau does not always derive information on milestones from the schedule. For example, two documents dated July 2013 that list major milestones should cite the same baseline dates from the schedule, but they indicate different dates for the same part of the research and testing schedule: one states that the research and testing milestones for the current phase will be complete in September 2014, while the other states that these milestones will be completed in September 2015. Bureau managers acknowledged that the planning milestones within the schedule had not been updated to reflect ongoing Bureau management decisions about reprioritizing research and testing plans in light of budget uncertainty during fiscal year 2013. Without keeping the schedule current and using the most recent information to derive information for management, such as schedule milestones, there are limited assurances that management is receiving reliable information. Second, we tested the schedules to determine how they changed when dates within the schedule were changed. In our test, the Research and Testing schedule responded automatically to changes in dates of activities, following best practices. However, the GSS-I schedule did not respond in the same way: when we adjusted the date of an activity, subsequent related activities that appeared necessary to achieve the milestone did not change, even though the ultimate milestone date changed based on the date shift.
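When network logic is complete, a date shift propagates through the schedule automatically, and that same propagation mechanism is what statistical simulation exploits to estimate the confidence of meeting a completion date: sample uncertain durations many times, push each sample through the network, and count how often a deadline is met. A minimal Monte Carlo sketch under assumed triangular duration ranges; the network, ranges, and milestone are all hypothetical.

```python
# Sketch of quantitative schedule risk analysis: sample uncertain
# activity durations repeatedly, run a forward pass through the
# schedule's network logic each time, and report how often a
# deadline is met. The network and duration ranges are hypothetical.
import random

preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]                   # topological order
# (optimistic, most likely, pessimistic) duration in days
ranges = {"A": (4, 5, 8), "B": (8, 10, 16), "C": (2, 3, 6), "D": (3, 4, 7)}

def sample_finish(rng):
    """One simulated project finish (forward pass with random durations)."""
    early = {}
    for act in order:
        lo, mode, hi = ranges[act]
        start = max((early[p] for p in preds[act]), default=0)
        # random.triangular takes (low, high, mode), in that order.
        early[act] = start + rng.triangular(lo, hi, mode)
    return early["D"]

def confidence(deadline, trials=20_000, seed=1):
    """Estimated probability of finishing by `deadline` (in days)."""
    rng = random.Random(seed)
    hits = sum(sample_finish(rng) <= deadline for _ in range(trials))
    return hits / trials
```

Plotting confidence over a range of deadlines yields the familiar S-curve; the gap between the deterministic finish date and, say, the 80-percent-confidence date is the time contingency such an analysis would suggest building into the schedule.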
More importantly, though, the Bureau is not in a position to carry out systematic quantitative risk analysis on its schedule. A quantitative risk analysis relies on statistical simulation to predict the level of confidence in meeting a program’s completion date. The Bureau has identified risks to MAF development efforts, but a quantitative risk analysis would have the advantage of illustrating the impact of risks on the schedule and how that would affect the Bureau’s ability to meet milestones, and it would provide a measure of how much time contingency should be built into the schedule to help manage certain risks. Bureau officials said they were waiting for decisions about scheduling software before making decisions about conducting a schedule risk analysis. Without a more credible schedule, the Bureau cannot determine the likelihood that information will be available in time to inform decisions about building the MAF; moreover, the Bureau may not be able to fully understand which risks could affect when information will be available to make decisions and the likelihood that the risks could occur.

Controlled–Substantially Met

Both schedules were baselined—creating a comparison schedule to measure, monitor, and report the project’s progress—in March 2013, and there is evidence the Bureau has a schedule management process in place and a method for logging changes to the schedule that is in line with best practices. By baselining the schedule, the Bureau helps provide some accountability and transparency to the measurement of the program’s progress. The Bureau has implemented a formal change control process, which helps ensure the measurement of meaningful progress through comparisons to past versions of the schedule. The Bureau clearly documented its criteria for justifying changes. A team of senior managers is to approve each change, and Bureau teams are to acknowledge a change’s effect if the schedule indicates they will be affected by it.
The Bureau provides narratives that go along with some schedule updates and includes these in monthly status reports, ensuring that management is informed of schedule changes on a regular basis in accordance with leading practices. This practice helps Bureau officials use their schedules to produce reports that can be used to identify work that should have started or finished by that time. Bureau managers acknowledged that not all changes reflecting Bureau decisions on dealing with budget uncertainty have been processed and reflected yet in the schedule. Yet with processes in place—and being used—that ensure the schedule is updated, management can be reasonably assured that it is looking at current data when examining the schedule, contingent upon the accuracy of the updates.

Scheduling Challenges Demonstrate Lack of Expertise among Staff

In conversations with Bureau officials responsible for managing the 2020 Research and Testing and GSS-I schedules, they said that they had not received training or certification in scheduling practices, although staff have received training in the software they are using for scheduling and many staff have been trained in project management. The scheduling managers referred to GAO’s Schedule Assessment Guide as a key resource for their efforts; however, staff answers to interview questions about leading practices demonstrated a lack of knowledge of the practices. For example, staff explained that the presence of the large number of constraints in the schedule they provided to us was related to their occasional “testing” of the schedule, but guidelines for a baselined schedule state that it represents the original configuration of the program plan and would, thus, not include temporary changes such as the staff described.
Both the 2020 Research and Planning Office and the Geography Division have contracted for scheduling support in recent years, and maintain that their contractors have a number of certifications in the advanced use of appropriate software and project management methods. Further, Bureau officials described high turnover and extended vacancies in the management team over the 2020 Research and Planning Office’s scheduling contractors and staff until shortly before we began our audit and obtained a copy of their schedule to review. After we completed our audit work at the Bureau, officials told us that, subject to the availability of funding, schedule team members will pursue professional certification to further develop and refine their project scheduling skills. Geography Division managers also stressed to us their commitment to schedule management. Our prior work has identified key principles for effective strategic workforce planning, which helps an organization align the staff and competencies it needs to achieve programmatic goals (GAO, Human Capital: Key Principles for Effective Strategic Workforce Planning, GAO-04-39 (Washington, D.C.: Dec. 11, 2003); we developed these key principles by reviewing documents from organizations with expertise in workforce planning models and federal agencies with promising workforce planning practices, as well as our past work). A key principle for strategic workforce planning is systematically identifying gaps in staff competencies with the goal of minimizing or eliminating these gaps. Our prior work has shown that organizations can use methods such as training, contracting, staff development, and hiring to help align skills in order to eliminate gaps in competencies needed for mission success. By conducting a workforce planning process that includes an analysis of the skills and training needed, such as what the Bureau describes for its scheduling staff in the future, and the identification of gaps to be addressed, the Bureau can better ensure that staff who manage the schedules understand the leading practices and the importance of adhering to them.
Thus, the Bureau can better ensure it has the capacity to develop schedules able to support key management decisions.

The Bureau Generally Documented Leading Practices for Collaboration in Its Master Address File Plans

Several divisions are involved in efforts to build the 2020 MAF, making collaboration critical to ensuring that participating divisions work together to achieve the Bureau’s goals. In our past work, we identified leading practices to foster collaborative relationships across organizational boundaries. We determined that four of these practices were directly relevant to the Bureau’s internal efforts to build its MAF. Table 2 identifies and describes these four practices and shows our assessment of the extent to which Bureau documentation demonstrates the Bureau engaged in these leading practices. The Bureau has documented its goals for building a more cost-effective MAF as part of its strategic plans. The Bureau’s 2020 Census Strategic Plan set forth Bureau-wide goals for the MAF and the 2020 Census. These goals provide a common rationale for Bureau teams to work across organizational boundaries. Specifically, the Bureau has documented its intention to improve the coverage and accuracy of the address list; continuously update the address list through the decade; and improve the cost-effectiveness of the address list. The Geography Division and the 2020 Research and Planning Office have incorporated these common outcomes as part of their individual efforts. Officials in these units indicated an understanding of these goals and communicated them to us. Each also documented these goals in their planning documents. For example, the Geography Division’s governance document for GSS-I connects its purpose to working towards building the 2020 MAF. Moreover, in the 2020 Research and Planning Office’s Research and Testing management plan, the Bureau sets goals for the division’s research associated with improving the accuracy and cost-effectiveness of the MAF.
The Bureau, through its strategic plan, has set goals for the 2020 MAF, communicating these to organizational units in a way that will help focus the work in support of upcoming design decisions.

Establish Mutually Reinforcing or Joint Strategies

Similar to its goals, the Bureau has established and documented joint strategies as part of its strategic plans. These strategies help outline how the Bureau will achieve the goals of an accurate, continuously updated, and cost-effective MAF. The Bureau’s 2013-2017 Strategic Plan and 2020 Census Strategic Plan specifically identify these strategies for achieving its goals for the MAF: defining components of error in the MAF; implementing targeted address canvassing; assessing how rules for using addresses contained in the MAF should change to accommodate new address sources; and identifying more effective approaches to incorporate addresses from state, local, and tribal governments. The Bureau is executing five research projects related to these strategies. The Bureau is also reinforcing implementation of these strategies by creating new coordination groups as well as placing staff from relevant units on MAF-related research projects. For example, the MAF error model research team has representation from the Geography Division, the Decennial Statistical Studies Division, and the Field Division, among others. The importance of having joint strategies clearly documented is underscored by the differences in perspectives that these divisions can bring to common challenges they may work on together, such as developing a proposal for targeted address canvassing. For example, the Geography Division is responsible for, among other things, administering geographic and cartographic activities needed for the 2020 Census. Its research when working with other teams will focus on geographic concepts, methods, and standards needed for the 2020 Census.
Meanwhile, the Field Division, with its responsibility for effectively deploying field personnel to support efficient field data collection, will focus on the “on the ground” feasibility and challenge of targeting certain types of housing units for address canvassing. In addition, coordination bodies are working to share information. In May 2013, relevant Bureau officials began to meet regularly to discuss issues related to implementing targeted address canvassing. Bureau officials involved in these meetings said that the team acts as a vehicle to provide status updates across organizational boundaries. Another team, working to identify models to predict where addresses were most in need of being updated, was chartered in May 2013. The charter indicated that membership was to include representation from staff in the Field Division and that the team would work with relevant research projects. By defining strategies and reinforcing the collaborative nature of these strategies through such actions as coordination groups and matrixed research teams, the Bureau is helping to align the activities and resources of various divisions to achieve the goals of the 2020 MAF.

Agree on Roles and Responsibilities and Participants

The Bureau’s 2013-2017 Census Strategic Plan identifies relevant divisions within the Bureau with responsibilities related to developing a more cost-effective 2020 MAF and implementing targeted address canvassing. The 2020 Research and Planning Office has identified the relevant divisions participating in active research projects and coordination groups through documents such as charters. For example, the Targeted Address Canvassing Research, Model, and Area Classification team—a coordination team headed by the Geography Division—was chartered in May 2013 and defines what is both in and out of the scope of its activities.
Members are responsible for analyzing potential datasets to be used for targeted address canvassing, but are not responsible for analyzing the costs of targeted address canvassing. In addition, the Bureau recently established memorandums of understanding between the 2020 Research and Planning Office and other relevant divisions, generally finalizing them in May 2013 and signing them in June and July 2013. These agreements are not limited to MAF building efforts, but they provide the broad framework for working together and defining coordination. The agreements define the responsibilities of the 2020 Research and Planning Office and the relevant divisions and include provisions for communication between the two organizational units, resource sharing, and modifying agreements as changes in work dictate. However, the Bureau has not taken advantage of some opportunities to use its schedules to reinforce roles or clarify responsibilities. Detailed schedules for 2020 Research and Testing and GSS-I do not completely reflect the roles and responsibilities of other divisions or organizational units, such as dependencies between activities or hand-offs from one unit to another. Information on dependencies between projects is available in the project plans for research projects, but such dependencies are not reflected in the schedules. Bureau officials said they would address this by directing project teams to more clearly identify dependencies on various divisions and by reviewing activities that should be flagged as having “external” dependencies within the Research and Testing schedule. Reinforce Collaborative Efforts through Performance Management Systems The Bureau reinforces individual accountability for collaborative activities through individual performance expectations, including both broad expectations and others specific to MAF development efforts. 
Bureau-wide, individuals are rated on their “customer service,” a work competency that includes their performance working in collaboration with those outside of their division to respond to internal and external needs. Managers we spoke with said that collaboration across units within the Bureau is assessed within this competency. Bureau officials also provided examples of performance management plans where staff were to be rated specifically on collaboration. For example, one staff member was expected to attend interdivisional coordination meetings and to implement new projects based on these meetings. The inclusion of specific performance expectations and metrics dependent on collaborative activity can reinforce synergy across organizational boundaries within the Bureau. This should help ensure that individuals with responsibility for developing the MAF have a vested interest in achieving the overall goals set by the Bureau. As the Bureau moves to testing and implementation, roles and responsibilities will change, and the respective roles of divisions may also change in prominence. Continued management attention to following leading practices for collaboration will help to ensure that collaboration across units is occurring as the Bureau strives to achieve its goal of a more cost-effective 2020 MAF and Census. Conclusions Planning efforts related to targeted address canvassing and building a more cost-effective MAF are important to the Bureau’s efforts to control the costs of the 2020 decennial. As key design decisions are to be made in the coming years, it is important that the Bureau have a reliable schedule in place upon which management can depend to make those decisions. Our analysis of two Bureau schedules key to MAF development efforts indicates that there are problems with the schedules’ reliability. 
It will be important to ensure that schedules are comprehensive in order for management to be reasonably assured that it has complete information to make decisions. Similarly, problems with the schedules’ construction mean that the progression of critical events could be unclear to management. Finally, the schedules lack credibility, meaning that risks, including those the Bureau has already identified, could affect the schedules in ways not yet considered. Some of the identified deficiencies indicate that staff and managers have not been available and prepared to sufficiently construct and maintain the schedules. Conducting a workforce planning process for staff working on MAF schedules could help the Bureau identify the scheduling skills needed and address related gaps. Without staff knowledge of the leading practices and the importance of adhering to them, the schedules may hamper decennial managers’ ability to assess progress, make decisions, identify future risks, or anticipate potential delays. With its planning documents, memorandums of understanding, and various charters, the Bureau has put in place a framework to support collaborative efforts following leading practices, particularly in recent months, which will aid these efforts. These methods could be bolstered by building collaboration into the schedule. By improving its practices for constructing a schedule, the Bureau can help address these gaps. As the Bureau continues its implementation efforts up to and beyond key decisions about how to build a cost-effective MAF, it is vital to ensure that the practices incorporated into Bureau planning documents and processes thus far are continued. Recommendations for Executive Action To help maintain a more thorough and insightful 2020 Census development schedule in order to better manage risks to a successful 2020 Census, the Secretary of Commerce and Under Secretary of Economic Affairs should direct the U.S. 
Census Bureau to improve its scheduling practices in three areas: the comprehensiveness of schedules, including ensuring that all relevant activities are included in the schedule; the construction of schedules, including ensuring complete logic is in place to identify the preceding and subsequent activities as well as a critical path that can be used to make decisions; and the credibility of schedules, including conducting a quantitative risk assessment. In addition, we recommend that the Director of the U.S. Census Bureau initiate a robust workforce planning process for those working on schedules related to the Master Address File, including actions such as an analysis of skills needed, to identify and address gaps in scheduling skills. Agency Comments and Our Evaluation We provided a draft of this report to the Department of Commerce and received the department’s written comments on November 5, 2013. The comments are reprinted in appendix IV. The Department of Commerce concurred with our findings and recommendations and provided several clarifications, which are reflected in this report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Commerce, the Under Secretary of Economic Affairs, the Director of the U.S. Census Bureau, and interested congressional committees. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you have any questions about this report please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff that made major contributions to this report are listed in appendix V. 
Appendix I: Objectives, Scope, and Methodology This report (1) assesses the reliability of the schedules for two key Master Address File (MAF) development programs, and (2) examines the extent to which the Census Bureau (Bureau) is following leading practices for collaboration for its MAF development work. To determine the extent to which the Bureau is following leading practices for scheduling as identified in the GAO Schedule Assessment Guide, we analyzed the Geographic Support System Initiative (GSS-I) schedule and the 2020 Research and Planning Office (Research and Testing) schedule. We scored each scheduling best practice on a five-point scale ranging from “not met” to “fully met.” To determine the extent to which the Bureau’s key efforts to build a cost-effective MAF/Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) incorporate leading practices for collaboration, we identified leading practices to apply to intra-agency collaborative efforts based on our past work on leading collaboration practices. We identified organizational units and activities relevant to building a cost-effective MAF in consultation with the Bureau. We also identified documentation of their research projects. We reviewed key management documents for content pertaining to collaboration, including the Bureau’s strategic plan for the decennial and current (2013-2017) strategic plan. In addition, we reviewed documents directly addressing coordination efforts, such as charters and meeting minutes from coordination groups and memorandums of understanding between divisions. We compared the documented plans and activities to best practices in order to rate the extent to which leading practices were incorporated or were intended to be incorporated into Bureau documents. We rated each practice on a three-point scale from “Not Documented” to “Generally Documented.” Not Documented: The Bureau provided no documentary evidence that satisfies any of the criteria. 
Partially Documented: The Bureau provided documentary evidence that satisfies a portion of the criteria. Generally Documented: The Bureau provided documentary evidence that satisfies all or nearly all of the criteria. We then interviewed Bureau officials in the Geography and 2020 Research and Planning Office divisions to discuss schedules and their collaboration efforts. Additionally, regarding scheduling and collaboration, we spoke with relevant officials in the Center for Administrative Records Research and Applications, Decennial Statistical Studies Division, and Field Division. These divisions are participating in some MAF development activities with the Geography and 2020 Research and Planning Office divisions. Our review of scheduling and collaboration practices was limited to 2020 Decennial Census activities, focused on MAF development activities, and cannot be generalized to other, non-decennial Bureau activities and operations. Appendix II: Description of Scheduling Best Practices Description A schedule should reflect all activities defined in the project’s work breakdown structure and include all activities to be performed by the government and contractor. The schedule should realistically reflect the resources (i.e., labor, material, and overhead) needed to do the work, whether all required resources will be available when needed, and whether any funding or time constraints exist. The schedule should reflect how long each activity will take to execute. The schedule should be planned so that all activities are logically sequenced in the order they are to be carried out. The schedule should identify the critical path, or those activities that, if delayed, will negatively impact the overall project completion date. The critical path enables analysis of the effect delays may have on the overall schedule. 
The schedule should identify float—the amount of time an activity can slip in the schedule before it affects other activities—so that flexibility in the schedule can be determined. As a general rule, activities along the critical path have the least amount of float. The detailed schedule should be horizontally traceable, meaning that it should link products and outcomes associated with other sequenced activities. The integrated master schedule should also be vertically traceable—that is, varying levels of activities and supporting subactivities can be traced. Such mapping or alignment of levels enables different groups to work to the same master schedule. The schedule should include a schedule risk analysis that uses statistical techniques to predict the probability of meeting a completion date. A schedule risk analysis can help management identify high-priority risks and opportunities. Progress updates and logic provide a realistic forecast of start and completion dates for program activities. Maintaining the integrity of the schedule logic at regular intervals is necessary to reflect the true status of the program. To ensure that the schedule is properly updated, people responsible for updating should be trained in critical path method scheduling. A baseline schedule represents the original configuration of the program plan and is the basis for managing the project scope, the time period for accomplishing it, and the required resources. Comparing the current status of the schedule to the baseline can help managers target areas for mitigation. Appendix III: Assessment of the Extent to Which the Bureau Followed Scheduling Best Practices The practices assessed for each schedule include assigning resources to all activities; establishing the durations of all activities; confirming that the critical path is valid; verifying that the schedule is traceable horizontally and vertically; conducting a schedule risk analysis; and updating the schedule with actual progress and logic. Legend: ● Fully Met: The Bureau provided complete evidence that satisfies the entire criteria. ◕ Substantially Met: The Bureau provided evidence that satisfies a large portion of the criteria. ◓ Partially Met: The Bureau provided evidence that satisfies about half of the criteria. ◔ Minimally Met: The Bureau provided evidence that satisfies a small portion of the criteria. ○ Not Met: The Bureau provided no evidence that satisfies any of the criteria. Appendix IV: Comments from the Department of Commerce Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments Other key contributors to this report include Ty Mitchell, Assistant Director; Tom Beall; Juaná Collymore; Rob Gebhart; David Hulett; Andrea Levine; Jeffrey Niblack; Karen Richey; and Timothy Wexler.
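The critical path and float concepts described in appendix II can be illustrated with a short sketch. The activity network, durations, and dependencies below are hypothetical, not drawn from Bureau schedules; this is a minimal illustration of the critical path method, not Bureau tooling.

```python
# Illustrative critical path method (CPM) calculation on a toy network.
# All activity names, durations, and dependencies are hypothetical.
durations = {"A": 5, "B": 3, "C": 8, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # topological order

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for act in order:
    es[act] = max((ef[p] for p in predecessors[act]), default=0)
    ef[act] = es[act] + durations[act]

project_finish = max(ef.values())

# Backward pass: latest start (ls) and latest finish (lf).
successors = {a: [s for s in order if a in predecessors[s]] for a in order}
ls, lf = {}, {}
for act in reversed(order):
    lf[act] = min((ls[s] for s in successors[act]), default=project_finish)
    ls[act] = lf[act] - durations[act]

# Total float: how far an activity can slip before delaying the project.
# Activities with zero float form the critical path.
total_float = {a: ls[a] - es[a] for a in order}
critical_path = [a for a in order if total_float[a] == 0]

print(project_finish)    # 15
print(critical_path)     # ['A', 'C', 'D']
print(total_float["B"])  # 5 (B can slip 5 days without delaying finish)
```

Missing predecessor or successor links, the problem GAO found in both Bureau schedules, would make the forward and backward passes above produce misleading dates, which is why complete logic is a prerequisite for a valid critical path.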
According to the Bureau, it is committed to limiting its per household cost for the 2020 Census to that of the 2010 Census, and believes that reducing the cost of updating the MAF can be of significant help. Because of tight deadlines and the involvement of several different Bureau units in this effort, effective scheduling and collaboration practices are important for the entire process to stay on track. GAO was asked to examine scheduling and collaboration in the Bureau's efforts to develop a more cost-effective MAF. GAO (1) assessed the reliability of the schedules for two key MAF development programs, and (2) examined the extent to which the Bureau is following leading practices for collaboration for its MAF development work. GAO analyzed the schedules for the two programs most relevant to developing the address list, and reviewed strategic plans and other documents establishing coordination mechanisms and compared them to leading practices for intra-agency collaborative efforts. The Census Bureau (Bureau) is not producing reliable schedules for the two programs most relevant to building the Master Address File (MAF)--the 2020 Research and Testing program and the Geographic Support System Initiative. The Bureau did not include all activities in either schedule. The schedules appeared to have reasonable durations for most activities, but they did not include information about required resources. For both schedules, the Bureau logically linked many activities in a sequence. Yet in both schedules the Bureau did not identify the preceding and following activity for a significant number of activities. Without this logic, the effect of a change in one activity on future activities cannot be seen in the schedule, potentially resulting in unforeseen delays. The Bureau is not in a position to carry out a quantitative risk analysis on the schedules. 
As a result of these issues, the schedules are producing inaccurate dates, which could lead Bureau managers to falsely conclude that all of the work is on schedule when it may not be. Without reliable schedule information, such as valid forecasted dates and the amount of flexibility remaining in the schedule, management faces challenges in assessing the progress of MAF development efforts and determining what activities most need attention. Staff managing the schedules said that they had not received thorough training or certification in scheduling best practices and that staff turnover contributed to the issues GAO identified. Workforce planning and training can help the Bureau have the skills in place to ensure that the characteristics of a reliable schedule are met to support key management decisions. The Bureau has documented collaboration activities that follow many leading practices for collaboration. Because several divisions are involved in efforts to develop the MAF, collaboration across these divisions is critical. In recent months, the Bureau has put in place a variety of mechanisms to aid coordination, such as crosscutting task teams. For example, research projects relevant to developing the MAF have representation from multiple divisions. The Bureau has also established memorandums of understanding across divisions to provide a broad framework for working together. Continued management attention to collaboration practices will help to ensure that collaboration across units is occurring as MAF development continues.
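The quantitative schedule risk analysis that the Bureau is not yet positioned to perform can be sketched in miniature. Assuming triangular duration estimates for two sequential activities (the figures are illustrative, not Bureau data), a Monte Carlo simulation estimates the probability of meeting a target completion date:

```python
# Hypothetical schedule risk analysis: Monte Carlo simulation of total
# duration for two sequential activities with triangular duration estimates
# (optimistic, most likely, pessimistic, in days). Figures are illustrative.
import random

random.seed(0)
activities = [(20, 30, 50), (10, 15, 30)]  # (optimistic, likely, pessimistic)
target = 50
trials = 100_000

hits = 0
for _ in range(trials):
    # random.triangular takes (low, high, mode).
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities)
    if total <= target:
        hits += 1

probability = hits / trials
print(f"Estimated probability of finishing by day {target}: {probability:.2f}")
```

On a real schedule, the same technique runs over the full activity network with correlated risks, which is why complete logic and credible duration estimates are prerequisites for the analysis.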
Background Food aid comprises all food-supported interventions by foreign donors to individuals or institutions within a country. It has helped to save millions of lives and improve the nutritional status of the most vulnerable groups, including women and children, in developing countries. Food aid is one element of a broader global strategy to enhance food security by reducing poverty and improving the availability, access to, and use of food in low-income, less-developed countries. Donors provide food aid both as a humanitarian response to address acute hunger in emergencies and as a development-focused response to address chronic hunger. Large-scale conflicts, poverty, weather calamities, and severe health-related problems are among the underlying causes of both acute and chronic hunger. Countries Provide Food Aid through In-Kind or Cash Donations, with the United States as the Largest Donor Countries provide food aid through either in-kind donations or cash donations for local procurement. In-kind food aid is food procured and delivered to vulnerable populations, while cash donations are given to implementing organizations for the purchase of food in local markets. U.S. food aid programs are all in-kind, and no cash donations are allowed under current legislation. However, the Administration has proposed legislation to allow up to 25 percent of appropriated food aid funds to be used for the purchase of commodities in locations closer to where they are needed. Other food aid donors have also recently shifted from in-kind donations toward partial or full cash donations for local, regional, or donor-market procurement. While there are ongoing debates as to which form of assistance is more effective and efficient, the largest international food aid organization, the World Food Program (WFP), continues to accept both. The United States is both the largest overall and in-kind provider of food aid, supplying over one-half of all global food aid. Most U.S. 
Food Aid Goes to Africa In fiscal year 2006, the United States delivered food aid to over 50 countries, with about 78 percent of its funding allocations for in-kind food donations going to Africa, 12 percent to Asia and the Near East, 9 percent to Latin America, and 1 percent to Eurasia. Of the 78 percent of the food aid funding going to Africa, 30 percent went to Sudan, 27 percent to the Horn of Africa, 17 percent to Southern Africa, 14 percent to West Africa, and 12 percent to Central Africa. Emergencies Represent an Increasing Share of U.S. Food Aid Food aid is used for both emergency and non-emergency purposes. Over the last several years, the majority of U.S. food aid has shifted from a non- emergency to an emergency focus. In fiscal year 2005, the United States directed approximately 80 percent or $1.6 billion of its $2.1 billion expenditure for international food aid programs to emergencies. In contrast, in fiscal year 2002, the United States directed approximately 40 percent or $678 million of its $1.7 billion food aid expenditure to emergency programs (see fig. 1). U.S. Food Aid Is Delivered Through Multiple Programs with Multiple Mandates U.S. food aid is funded under four program authorities and delivered through six programs administered by USAID and USDA, which serve a range of objectives including humanitarian goals, economic assistance, foreign policy, market development and international trade (see app. I). The largest program, Public Law (P.L.) 480 Title II, is managed by USAID and averaged approximately 74 percent of total in-kind food aid allocations over the past 4 years, most of which funded emergency programs (see fig. 2). In addition, P.L. 480, as amended, authorizes USAID to preposition food aid both domestically and abroad with a cap on storage expenses of $2 million per fiscal year. U.S. food aid programs also have multiple legislative and regulatory mandates that affect their operations. One mandate that governs U.S. 
food aid transportation is cargo preference, which is designed to support a U.S.-flag commercial fleet for national defense purposes. Cargo preference requires that 75 percent of the gross tonnage of all government-generated cargo be transported on U.S.-flag vessels. A second transportation mandate, known as the “Great Lakes Set Aside,” requires that up to 25 percent of Title II bagged food aid tonnage be allocated to Great Lakes ports each month. Other mandates require that a minimum of 2.5 million metric tons of food aid be provided through Title II programs, and that of this amount, a “sub-minimum” of 1.825 million metric tons be provided for non-emergency programs. (For a summary of congressional mandates for P.L. 480, see app. I.) Multiple U.S. Government Agencies and Stakeholders Participate in U.S. Food Aid Programs U.S. food aid programs involve multiple U.S. government agencies and stakeholders. For example, USAID and USDA administer the programs, USDA’s Kansas City Commodity Office (KCCO) manages the purchase of all commodities, and the U.S. Maritime Administration (MARAD) of DOT is involved in supporting their ocean transport on U.S. vessels. These and other government agencies coordinate food aid programs through the Food Assistance Policy Council, which oversees the Bill Emerson Humanitarian Trust, an emergency food reserve. Other stakeholders include donors, implementing organizations such as WFP and NGOs, agricultural commodity groups, and the maritime industry. Some of these stakeholders are members of the Food Aid Consultative Group, which is led by USAID’s Office of Food for Peace and addresses issues concerning the effectiveness of the regulations and procedures that govern food assistance programs. Multiple Challenges Hinder the Efficiency of Delivery of U.S. Food Aid Multiple challenges reduce the efficiency of U.S. 
food aid, including logistical constraints that impede food aid delivery and reduce the amount, timeliness, and quality of food provided. While agencies have tried to expedite food aid delivery in some cases, the majority of food aid program expenditures is on logistics, and the delivery of food from vendor to village is generally too time-consuming to be responsive in emergencies. Factors that increase logistical inefficiencies include uncertain funding and inadequate planning; transportation contracting practices that disproportionately increase risks for ocean carriers (who then factor those risks into freight rates); legal requirements; and inadequate coordination to systematically track and respond to logistical problems, such as food spoilage or contamination. While U.S. agencies are pursuing initiatives to improve food aid logistics, such as prepositioning food commodities, their long-term cost effectiveness has not yet been measured. Food Aid Procurement and Transportation are Costly and Time-Consuming Transportation costs represent a significant share of food aid expenditures. For the largest U.S. food aid program (Title II), approximately 65 percent of expenditures are on inland transportation (to the U.S. port for export), ocean transportation, in-country delivery, associated cargo handling costs, and administration. According to USAID, these non-commodity expenditures have been rising in part due to the increasing number of emergencies and the expensive nature of logistics in such situations. To examine procurement costs (expenditures on commodities and ocean transportation) for all U.S. food aid programs, we obtained KCCO procurement data for fiscal years 2002 through 2006. KCCO data also suggest that ocean transportation has been accounting for a larger share of procurement costs with average freight rates rising from $123 per metric ton in fiscal year 2002 to $171 per metric ton in fiscal year 2006 (see fig. 3). Further, U.S. 
food aid ocean transportation costs are relatively expensive compared with those of some other donors. WFP transports both U.S. and non-U.S. food aid worldwide at reported ocean freight costs averaging around $100 per metric ton—representing less than 20 percent of its total procurement costs. At current U.S. food aid budget levels, every $10 per metric ton reduction in freight rates could feed about 1.2 million more people during a typical hungry season. Delivering U.S. food aid from vendor to village is also a relatively time-consuming task, requiring on average 4 to 6 months. Food aid purchasing processes and example time frames are illustrated in figure 4. While KCCO purchases food aid on a monthly basis, it allows implementing partners’ orders to accumulate for 1 month prior to purchase in order to buy at scale. KCCO then purchases the commodities, receives transportation offers, and awards transportation contracts over the following month. Commodity vendors bag the food and ship it to a U.S. port for export during the next 1 to 2 months. After an additional 40 to 50 days for ocean transportation to Africa, for example, the food arrives at an overseas port, where it is moved by truck or rail to the final distribution location over the next few weeks. While agencies have tried to expedite food aid delivery in some cases, the entire logistics process often lacks the timeliness required to meet humanitarian needs in emergencies and may at times result in food spoilage. Additionally, the largest tonnages of U.S. food aid are purchased during the months of August and September. Average tonnages purchased during the fourth quarter of the last 5 fiscal years have exceeded those purchased during the second and third quarters by more than 40 percent. Given a 6-month delivery window, these tonnages do not arrive in country until the end of the peak hungry season (from October through January in southern Africa, for example) in most cases. 
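The feeding estimate above can be checked with back-of-the-envelope arithmetic. The delivered-cost, ration-size, and season-length figures below are our illustrative assumptions, not figures from the report; they are chosen only to show how a $10-per-ton freight saving translates into people fed.

```python
# Rough check of the claim that a $10-per-metric-ton freight reduction
# could feed about 1.2 million more people during a hungry season.
# All per-unit figures below are illustrative assumptions.
annual_tonnage = 2_500_000           # metric tons (Title II minimum mandate)
savings = 10 * annual_tonnage        # $25 million freed by a $10/ton cut

cost_per_ton_delivered = 350         # assumed commodity + logistics cost, $/ton
extra_tons = savings / cost_per_ton_delivered

ration_kg_per_day = 0.5              # assumed full daily ration
season_days = 120                    # assumed hungry-season length
kg_per_person = ration_kg_per_day * season_days  # 60 kg per person

people_fed = extra_tons * 1000 / kg_per_person   # tons -> kg
print(round(people_fed))             # 1190476, roughly 1.2 million
```

Under these assumptions the arithmetic lands near the report's 1.2 million figure; different ration or delivered-cost assumptions would shift the result proportionally.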
Various Factors Cause Inefficiencies in Food Aid Logistics Food aid logistics are costly and time-consuming for a variety of reasons. First, uncertain funding processes for emergencies can result in bunching of food aid purchases, which increases food and transportation costs and lengthens delivery time frames. Many experts, officials, and stakeholders emphasized the need for improved logistical planning. Second, transportation contracting practices—such as freight and payment terms, claims processes and time penalties—further increase ocean freight rates and contribute to delivery delays. A large percentage of the carriers we interviewed strongly recommended taking actions to address these contracting issues. Third, legal requirements such as cargo preference can increase delivery costs. Although food aid agencies are reimbursed by DOT for certain transportation expenditures, the sufficiency of reimbursement levels varies. Fourth, when food delivery problems arise, such as food spoilage or contamination, U.S. agencies and stakeholders lack adequately coordinated mechanisms to systematically track and respond to complaints. Funding and Planning Processes Increase Costs and Lengthen Time Frames Uncertain funding processes, combined with reactive and insufficiently planned procurement, increase food aid delivery costs and time frames. Food emergencies are increasingly common and now account for 80 percent of USAID program expenditures. To respond to sudden emergencies—such as Afghanistan in 2002, Iraq in 2003, Sudan, Eritrea, and Ethiopia in 2005, and Sudan and the Horn of Africa in 2006—U.S. agencies largely rely on supplemental appropriations and the Bill Emerson Humanitarian Trust (BEHT) to augment annual appropriations by up to a quarter of their budget. Figure 5, for example, illustrates that USAID supplemental appropriations have ranged from $270 million in fiscal year 2002 and $350 million in fiscal year 2006 to over $600 million in fiscal years 2003 and 2005. 
Agency officials and implementing partners told us that the uncertainty of whether, when, and at what levels supplemental appropriations would be forthcoming hampers their ability to plan both emergency and non-emergency food aid programs on a consistent, long-term basis and to purchase food at the best price. Although USAID and USDA instituted multi-year planning approaches in recent years, according to agency officials, uncertain supplemental funding has caused them to adjust or redirect funds from prior commitments. Agencies and implementing organizations also face uncertainty about the availability of Bill Emerson Humanitarian Trust funds. As of January 2007, the Emerson Trust held about $107.2 million in cash and about 915,350 metric tons of wheat valued at $133.9 million—a grain balance that could support about two major emergencies, based on an existing authority to release up to 500,000 metric tons per fiscal year plus another 500,000 metric tons of commodities that could have been, but were not, released from previous fiscal years. Although the Secretary of Agriculture and the USAID Administrator have agreed that the $341 million combined value of commodities and cash currently held in the trust is more than adequate to cover expected usage over the period of the current authorization, the authorization is scheduled to expire on September 30, 2007. Resources have been drawn from the Emerson Trust on 12 occasions since 1984. For example, in fiscal year 2005, $377 million from the trust was used to procure 700,000 metric tons of commodities for Ethiopia, Eritrea, and Sudan. However, experts and stakeholders with whom we met noted that the trust lacks an effective replenishment mechanism—withdrawals from the trust must be reimbursed by the procuring agency or by direct appropriations for reimbursement, and legislation establishing the Emerson Trust capped the annual replenishment at $20 million. 
Inadequately planned food and transportation procurement reflects the uncertainty of food aid funding. As previously discussed, KCCO purchases the largest share of food aid tonnage during the last quarter of each fiscal year. This “bunching” of procurement occurs in part because USDA requires 6 months to approve programs and/or because funds for both USDA and USAID programs may not be received until mid-fiscal year (after OMB has approved budget apportionments for the agencies) or through a supplemental appropriation. USAID officials stated that they have reduced procurement bunching through improved cash flow management. Although USAID has had more stable monthly purchases in fiscal years 2004 and 2005, food aid procurement in total has not been consistent enough to avoid the higher prices associated with bunching. Higher food and transportation prices result from procurement bunching as suppliers try to smooth earnings by charging higher prices during their peak seasons and as food aid contracts must compete with commercial demand that is seasonally high. According to KCCO data for fiscal years 2002 through 2006, average commodity and transportation prices were each $12 to $14 per metric ton higher in the fourth quarter than in the first quarter of each year. Procurement bunching also stresses KCCO operations and can result in costly and time-consuming congestion for ports, railways, and trucking companies. While agencies face challenges to improving procurement planning given the uncertain nature of supplemental funding in particular, stakeholders and experts emphasized the importance of such efforts. For example, 11 of the 14 ocean carriers we interviewed reported that reduced procurement bunching could greatly reduce transportation costs. When asked about bunching, agency officials, stakeholders and experts suggested the following potential improvements: Improved communication and coordination. 
KCCO and WFP representatives suggested that USAID and USDA improve coordination of purchases to reduce bunching. KCCO has also established a web-based system in which agencies and implementing organizations can enter up to several years’ worth of commodity requests; however, implementing organizations are currently entering only the next month’s purchases. Additionally, since the Food Aid Consultative Group (FACG) does not include transportation stakeholders, DOT officials and ocean carriers strongly recommended establishing a formal mechanism for improving coordination and transportation planning. Increased flexibility in procurement schedules. USAID expressed interest in an additional time slot each month for food aid purchases. Several ocean carriers expressed interest in shipping food according to cargo availability rather than through pre-set shipping windows that begin 4 and 6 weeks after each monthly purchase. Although KCCO established the shipping windows to avoid port congestion, DOT representatives believe that carriers should be able to manage their own schedules within required delivery time frames. Increased use of historical analysis. DOT representatives, experts, and stakeholders emphasized that USAID and USDA should make greater use of historical analysis and forecasting to improve procurement. USAID has examined historical trends to devise budget proposals prepared 2 years in advance, and it is now beginning to use this analysis to improve the timing of procurement. However, neither USAID nor USDA has used historical analysis to establish more efficient transportation practices, such as the long-term agreements commonly used by DOD. Furthermore, WFP is now using forecasting to improve purchasing patterns through advance financing but is unable to use this financing for U.S. food aid programs due to legal and administrative constraints. 
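The cost of bunching can be put in rough perspective with back-of-the-envelope arithmetic. In the sketch below, the $12-to-$14-per-ton fourth-quarter premiums come from the KCCO data cited earlier; the fourth-quarter tonnage figure is purely hypothetical, chosen only to illustrate the scale of the effect.

```python
# Back-of-the-envelope estimate of the extra cost of procurement
# "bunching," using the KCCO quarterly price figures cited earlier
# (fiscal years 2002-2006). The Q4 tonnage below is a hypothetical
# illustration, not a KCCO figure.

q4_premium_commodity = 13.0   # $/MT, midpoint of the $12-$14 range cited
q4_premium_transport = 13.0   # $/MT, midpoint of the $12-$14 range cited
q4_tonnage = 1_000_000        # MT bought in the fourth quarter (hypothetical)

extra_cost = q4_tonnage * (q4_premium_commodity + q4_premium_transport)
print(f"Extra cost of buying {q4_tonnage:,} MT in Q4: ${extra_cost:,.0f}")
```

Even under these rough assumptions, shifting a million tons of purchases into the peak quarter implies tens of millions of dollars in avoidable commodity and freight costs.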
Transportation Contracting Practices Increase Delivery Costs and Contribute to Delays Transportation contracting practices are a second factor contributing to higher food aid costs. DOT officials, experts, and ocean carriers emphasized that commercial transportation contracts share risk among buyers, sellers, and ocean carriers. In food aid transportation contracts, risks are disproportionately placed on ocean carriers, discouraging participation and resulting in expensive freight rates. Examples of costly contracting practices include: Non-commercial and non-standardized freight terms. Food aid contracts define freight terms differently than commercial contracts and place increased liability on ocean carriers. For example, food aid contracts hold ocean carriers responsible for logistical problems, such as improperly filled containers, that may occur at the load port before the vessel arrives. They also hold carriers responsible for logistical problems, such as truck delays or improper port documentation, that may occur at the discharge port after the vessel arrives. Further, several carriers reported that food aid contracts are not sufficiently standardized. Although USAID and USDA created a standard contract for non-bulk shipments, contracts for bulk shipments (which currently account for 63 percent of food aid tonnage delivered) have not yet been standardized. To account for risks that are unknown or outside their control, carriers told us that they charge higher freight rates. Impractical time requirements. Food aid contracts may include impractical time requirements, although agencies disagree on how frequently this occurs. Although USAID officials review contract time requirements and described them as reasonable, they also indicated that transportation delays commonly result from poor carrier performance and the diminishing number of ocean carriers participating in food aid programs. 
Several implementing organizations also complained about inadequate carrier performance. WFP representatives, for example, provided several examples of ocean shipments in 2005 and 2006 that were more than 20 days late. While acknowledging that transportation delays occur, DOT officials indicated that some contracts include time requirements that are impossible for carriers to meet. For example, one carrier complained about a contract that required the same delivery date for four different ports. When carriers do not meet time requirements, they must pay costly penalties. Carriers reported that they review contracts in advance and, where time requirements are deemed implausible, factor the anticipated penalty into the freight rate. While agencies do not systematically collect data on time requirements and penalties associated with food aid contracts, DOT officials examined a subset of contracts from December 2005 to September 2006 and estimated that 13 percent of them included impractical time requirements. Assuming that the anticipated penalties specified in the contracts analyzed were included in freight rates, food aid costs may have increased by almost $2 million (monies that could have been used to provide food to an additional 66,000 beneficiaries). Lengthy claims processes. Lengthy processes for resolving transportation disputes discourage both carriers and implementing organizations from filing claims. According to KCCO officials, obtaining needed documentation for a claim can require several years and disputed claims must be resolved by the Department of Justice. USAID’s Inspector General reported that inadequate and irregular review of claims by USAID and USDA has also contributed to delayed resolution. Currently, KCCO has over $6 million in open claims, some of which were filed prior to fiscal year 2001. For ocean carriers, the process is burdensome and encourages them to factor potential losses into freight rates rather than pursue claims. 
Incentives for most implementing organizations are even weaker given that monies recovered from claims reimburse the overall food aid budget rather than the organization that experienced the loss. According to KCCO and WFP officials, transportation claims are filed for less than 2 percent of cargo. However, several experts and implementing organizations suggested that actual losses are likely higher. In 2003, KCCO proposed a new administrative appeals process for ocean freight claims that would establish a hearing officer within USDA and a 285-day time frame. While DOT and some carriers agreed that a faster process was needed, DOT officials suggested that the claims review process should include hearing officers outside of USDA to ensure independent findings. To date, KCCO’s proposed process has not been implemented. Lengthy payment time frames and burdensome administration. Payment of food aid contracts is slow, and paperwork is insufficiently streamlined. When carriers are not paid for several months, they incur large interest costs that are factored into freight rates. While USDA now provides freight payments within a few weeks, several ocean carriers complained that USAID often requires 2 to 4 months to provide payment. USDA freight payments are timelier due to a new electronic payment system, but USAID officials said this system is too expensive, so they are considering other payment options. In addition, a few carriers suggested that paperwork in general needs streamlining and modernization. The 2002 Farm Bill required both USDA and USAID to pursue streamlining initiatives, which the agencies are in the process of implementing. KCCO officials indicated that they are updating food aid information technology systems (to be in place in fiscal year 2009). Through structured interviews, ocean carriers confirmed the cost impact of food aid transportation contracting practices. 
For example, 9 of the carriers (60 percent) reported that “inefficient claims processes” increase costs, and all 14 (100 percent) reported the same of “liabilities outside the carriers’ control.” To quantify the impact, two carriers estimated that non-standardized freight terms increase costs by 5 percent (about $8 per metric ton), while another carrier suggested that slow payment increases costs by 10 percent (about $15 per metric ton). Over 70 percent of the carriers strongly recommended actions to address contracting practices. Legal Requirements Can Increase Delivery Costs and Time Frames Legal requirements governing food aid procurement are a third factor that can increase delivery costs and time frames, with program impacts dependent on the sufficiency of associated reimbursements. In awarding contracts, KCCO must meet various procurement requirements, such as cargo preference and the Great Lakes Set Aside, each of which may result in higher commodity and freight costs. Cargo preference laws, for example, require 75 percent of food aid to be shipped on U.S.-flag carriers, which are generally more expensive than foreign-flag carriers by an amount known as the ocean freight differential (OFD). The total annual value of this cost differential between U.S.- and foreign-flag carriers averaged $134 million from fiscal years 2001 to 2005. Additionally, since only a relatively small percentage of cargo can be shipped on foreign-flag vessels, agency and port officials believe that cargo preference regulations discourage foreign-flag participation in the program and result in delays when a U.S.-flag carrier is not available. DOT officials emphasize that USAID and USDA receive reimbursements for most if not all of the total OFD cost—DOT reimbursements varied from $126 million in fiscal year 2002 to $153 million in fiscal year 2005. 
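The OFD is, in essence, the premium paid on the U.S.-flag share of shipments. A minimal sketch of the arithmetic, in which only the 75 percent cargo preference share comes from the text; the rates and annual tonnage are hypothetical values chosen for illustration:

```python
# Sketch of the ocean freight differential (OFD) arithmetic.
# Only the 75 percent U.S.-flag requirement comes from the text;
# rates and tonnage below are hypothetical.

total_tonnage = 2_000_000    # MT of food aid shipped in a year (hypothetical)
us_flag_share = 0.75         # cargo preference minimum
us_flag_rate = 130.0         # $/MT on U.S.-flag vessels (hypothetical)
foreign_flag_rate = 90.0     # $/MT on foreign-flag vessels (hypothetical)

us_flag_tonnage = total_tonnage * us_flag_share
ofd = us_flag_tonnage * (us_flag_rate - foreign_flag_rate)
print(f"OFD for the year: ${ofd:,.0f}")
```

Because the differential applies to three-quarters of all tonnage, even a modest per-ton rate gap compounds into an annual cost on the order of the $134 million average the text reports.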
However, USAID officials expressed concern that the OFD calculations do not fully account for the costs of cargo preference or the uncertainties regarding its application. For example, OFD reimbursements do not account for the additional costs of shipping on U.S.-flag vessels that are older than 24 years (approximately half of these vessels) or of shipments for which no foreign-flag vessel has submitted a bid. USAID officials estimate that the actual cost of cargo preference in fiscal year 2003 exceeded the total OFD cost by about $50 million due to these factors. Finally, USAID and DOT officials have not yet agreed on whether cargo preference applies to shipments from prepositioning sites. Inadequate Coordination Limits Agencies’ and Stakeholders’ Response to Food Delivery Problems U.S. agencies and stakeholders do not coordinate adequately to respond to food delivery problems when they arise. USAID and USDA lack a shared, coordinated system to systematically track and respond to food quality complaints, and food aid working groups and forums do not include all stakeholders. Food quality problems have been a long-standing concern of both food aid agencies and the U.S. Congress. In 2003, for example, USAID’s Inspector General reported that some Ethiopian warehouses were in poor condition, with rodent droppings near torn bags of corn soy blend (CSB), rainwater seepage, pigeons flying into one warehouse, and holes in the roof of another. Implementing organizations we spoke with also frequently complained about receiving heavily infested and contaminated cargo. For example, in Durban, South Africa, we saw 1,925 metric tons of heavily infested cornmeal that arrived late in port because it had mistakenly been shipped to other countries first. This food could have fed over 37,000 people. 
When food arrives heavily infested, NGOs hire a surveyor to determine how much is salvageable for human consumption or for use as animal feed, and they destroy what is deemed unfit. When such food delivery problems arise, U.S. agencies and food aid stakeholders face a variety of coordination challenges in addressing them. For example: KCCO, USDA, and USAID have disparate quality complaint tracking mechanisms that monitor different levels of information. As a result, they are unable to determine the total quantity of, and trends in, food quality problems. In addition, because implementing organizations track food quality concerns differently, if at all, they cannot coordinate to share concerns with each other and with U.S. government agencies. For example, since WFP—which accounts for 60 percent of U.S. food aid shipments—independently handles its own claims, KCCO officials are unable to track the quality of food aid delivery program-wide. Agencies and stakeholders have suggested that food quality tracking and coordination could be improved if USAID and USDA shared the same database and created an integrated food quality complaint reporting system. Agency country offices are often unclear about their roles in tracking food quality, creating gaps in monitoring and reporting. For example, USAID has found that some missions lack clarity about their responsibilities for independently verifying claims stemming from food spoilage, often relying on the implementing organization to research the circumstances surrounding losses. One USAID country office also noted that rather than tracking all reported food quality problems, it recorded and tracked only commodity losses for which an official claim had been filed. Further, in 2004, USAID’s Inspector General found that USAID country offices were not always adequately following up on commodity loss claims to ensure that they were reviewed and resolved in a timely manner. 
To improve food quality monitoring, agencies and stakeholders have suggested updating regulations to include separate guidance for complaints, as well as developing a secure website for all agencies and their country offices to use to track both complaints and claims. When food quality issues arise, there is no clear and coordinated process for seeking assistance, creating costly delays in response. For example, when WFP received 4,200 metric tons of maize in Angola in 2003 and found a large quantity to be wet and moldy, it did not receive a timely response from USAID officials on how to handle the problem. WFP incurred $176,000 in costs in determining the safety of the remaining cargo, but was later instructed by USAID to destroy the whole shipment. WFP claims it lost over $640,000 in this case, including destruction costs and the value of the commodity. Although KCCO established a hotline to provide assistance on food quality complaints, KCCO officials stated that it was discontinued because USDA and USAID officials wanted to receive complaints directly, rather than from KCCO. Nevertheless, agencies and stakeholders have suggested that providing a standard questionnaire to implementing organizations would ensure more consistent reporting on food quality issues. While Agencies Have Taken Steps to Improve Efficiency, Their Costs and Benefits Have Not Yet Been Measured To improve timeliness in food aid delivery, USAID has been prepositioning commodities in two locations and KCCO is implementing a new transportation bid process. Prepositioning enabled USAID to respond more rapidly to the 2005 Asian tsunami emergency than would have been otherwise possible. KCCO’s bid process is also expected to reduce delivery time frames and ocean freight rates. However, the long-term cost effectiveness of both initiatives has not yet been measured. 
Prepositioning and Transportation Procurement Could Improve Timeliness USAID has prepositioned food aid on a limited basis to improve timeliness in delivery. USAID has used warehouses in Lake Charles (Louisiana) since 2002 and Dubai (United Arab Emirates) since 2004 to stock commodities in preparation for food aid emergencies and it is now adding a third site in Djibouti, East Africa. USAID has used prepositioned food to respond to recent emergencies in Lebanon, Somalia, and Southeast Asia, among other areas. Prepositioning is beneficial because it allows USAID to bypass lengthy procurement processes and to reduce transportation timeframes. USAID officials told us that diverting food aid cargo to the site of an emergency before it reaches a prepositioning warehouse further reduces response time and eliminates storage costs. When the 2005 Asian tsunami struck, for example, USAID quickly provided 7,000 metric tons of food to victims by diverting the carrier at sea, before it reached the Dubai warehouse. According to USAID officials, prepositioning warehouses also offer the opportunity to improve logistics when USAID is able to begin the procurement process before an emergency occurs, or if it is able to implement long-term agreements with ocean carriers for tonnage levels that are more certain. Despite its potential for improved timeliness, prepositioning has not yet been studied in terms of its long-term cost effectiveness. Table 1 shows that over fiscal years 2005 and 2006, USAID purchased about 200,000 metric tons of processed food for prepositioning (about 3 percent of total food aid tonnage), diverted about 36,000 metric tons en route, and incurred contract costs of about $1.5 million for food that reached the warehouse (averaging around $10 per metric ton). 
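The roughly $10-per-metric-ton average contract cost cited above can be cross-checked from the other figures in the text. The sketch below uses the rounded tonnage and cost values reported for fiscal years 2005 and 2006; the arithmetic itself is ours.

```python
# Cross-check of the prepositioning contract-cost average cited above.
# Figures are the rounded values reported in the text for fiscal
# years 2005 and 2006; the derivation is illustrative.

purchased_mt = 200_000      # MT purchased for prepositioning
diverted_mt = 36_000        # MT diverted en route, never warehoused
contract_cost = 1_500_000   # $ contract costs for food reaching the warehouse

warehoused_mt = purchased_mt - diverted_mt
cost_per_mt = contract_cost / warehoused_mt
print(f"About ${cost_per_mt:.0f} per metric ton warehoused")
```

The result, roughly $9 per warehoused ton, is consistent with the “around $10 per metric ton” average reported above.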
In addition to contract costs, ocean carriers generally charge higher freight rates for prepositioned cargo to account for additional cargo loading or unloading, additional days at port, and additional risk of damage associated with cargo that has undergone extra handling. USAID officials have suggested that average freight rates for prepositioned cargo could be $20 per metric ton higher. In addition to costs of prepositioning, agencies face several challenges to their effective management of this program, including the following: Food aid experts and stakeholders expressed mixed views on the appropriateness of current prepositioning locations. Only 5 of the 14 ocean carriers we interviewed rated existing sites positively and most indicated interest in alternative sites. KCCO officials and experts also expressed concern with the quality of the Lake Charles warehouse and the lack of ocean carriers providing service to that location. For example, many carriers must move cargo by truck from Lake Charles to Houston before shipping it, which adds as much as an extra 21 days for delivery. Inadequate inventory management increases risk of cargo infestation. KCCO and port officials suggested that USAID had not consistently shipped older cargo out of the warehouses first. USAID officials emphasized that inventory management has been improving but that limited monitoring and evaluation funds constrain their oversight capacity. For example, the current USAID official responsible for overseeing the Lake Charles prepositioning stock was able to visit the site only once in fiscal year 2006—at his own expense. Agencies have had difficulties ensuring phytosanitary certification for prepositioned food because they do not know the country of final destination when they request phytosanitary certification from APHIS. According to USDA, since prepositioned food is not imported directly from a U.S. 
port, it requires either a U.S.-reissued phytosanitary certificate or a foreign-issued phytosanitary certificate for re-export. USDA officials told us they do not think that it is appropriate to reissue these certificates, as once a food aid shipment leaves the United States, they cannot make any statements about the phytosanitary status of the commodities, which may not meet the entry requirements of the country of destination. USDA officials are concerned that USAID will store commodities for a considerable period of time during which their status may change, thus making the certificate invalid. Although USDA and USAID officials are willing to let foreign government officials issue these certificates, U.S. inspection officials remain concerned that the foreign officials might not have the resources or be willing to recertify these commodities. Without phytosanitary certificates, food aid shipments could be rejected, turned away, or destroyed by recipient country governments. Certain regulations applicable to food aid create challenges for improving supply logistics. For example, food aid bags must include various markings reflecting contract information, when the commodity should be consumed, and whether the commodity is for sale or direct distribution. Marking requirements vary by country (some require markings in local language), making it difficult for USAID to divert cargo. Also, due to the small quantity of total food aid tonnage (about 3 percent) allocated for the prepositioning program, USAID is unable to use the program to consistently purchase large quantities of food aid earlier in the fiscal year. New Transportation Bid Process Could Reduce Procurement Time Frames In addition to prepositioning, KCCO is implementing a new transportation bid process to reduce procurement time frames and increase competition between ocean carriers. 
In the prior two-step system, commodity vendors bid on contracts during a first procurement round while ocean carriers indicated potential freight rates; carriers provided actual rate bids during a second procurement round, once the location of the commodity vendor had been determined. In the new one-step system, ocean carriers will bid at the same time as commodity vendors. KCCO expects the new system to cut 2 weeks from the procurement process and potentially provide average annual savings of $25 million in reduced transportation costs. KCCO also expects the new bid process to reduce cargo handling costs as cargo loading becomes more consolidated. When asked about the new system, many carriers reported uncertainty about its future impact, while several expressed concern that USDA’s testing of the system had not been sufficiently transparent. Various Challenges Prevent Effective Monitoring of Food Aid Despite the importance of ensuring the effective use of food aid to alleviate hunger, U.S. agencies’ efforts to monitor food aid programs are insufficient. Limited food aid resources make it important for donors and implementers to ensure that food aid reaches the most vulnerable populations, thereby enhancing its effectiveness. However, USAID and USDA do not sufficiently monitor food aid programs, particularly in recipient countries, due to limited staff, competing priorities, and legal restrictions on the use of food aid resources. U.S. Agencies Do Not Sufficiently Monitor Food Aid Programs Although USAID and USDA require implementing organizations to regularly monitor and report on the use of food aid, the agencies themselves have undertaken limited field-level monitoring of food aid programs. Agency inspectors general have reported that monitoring has not been regular and systematic and that in some cases intended recipients have not received food aid or the number of recipients could not be verified. 
Our audit work also indicates that monitoring has been insufficient due to various factors including limited staff, competing priorities, and restrictions in use of food aid resources. USAID and USDA require NGOs and WFP to conduct regular monitoring of food aid programs. USAID Title II guidance for multi-year programs requires implementing organizations to provide a monitoring plan, which includes information such as the percentage of the target population reached, as well as mid-term and final evaluations of program impact. USDA requires implementing organizations to report semi-annually on commodity logistics and the use of food. According to WFP’s agreement with the U.S. government, WFP field staff should undertake periodic monitoring at food distribution sites to ensure that commodities are distributed according to an agreed-upon plan. Additionally, WFP is to provide annual reports for each of its U.S.-funded programs. In addition to monitoring by implementing organizations, agency monitoring is important to ensure targeting of food aid is adjusted to changes in conditions as they occur, and to modify programs to improve their effectiveness, according to USAID officials. However, various USAID and USDA Inspectors General reports have cited problems with agencies’ monitoring of programs. For example, according to various USAID Inspector General reports on non-emergency programs in 2003, while food aid was generally delivered to intended recipients, USAID officials did not conduct regular and systematic monitoring. One such assessment of direct distribution programs in Madagascar, for example, noted that as a result of insufficient and ad hoc site visits, USAID officials were unable to detect an NGO reallocation of significant quantities of food aid to a different district that, combined with late arrival of U.S. food aid, resulted in severe shortages of food aid for recipients in a USAID-approved district. 
The Inspector General’s assessment of food aid programs in Ghana stated that, over a 3-year period, the USAID mission’s annual report included data, such as numbers of recipients, that were reported directly by implementing organizations without any procedures to review the information’s completeness and accuracy. As a result, the Inspector General concluded, the mission had no assurance as to the quality and accuracy of these data. Limited Staff Constrain Monitoring of Food Aid Programs in Recipient Countries Limited staff and other demands in USAID missions and regional offices have constrained their field-level monitoring of food aid programs. In fiscal year 2006, although USAID had some non-Title II staff assigned to monitoring, it had only 23 Title II-funded staff assigned to missions and regional offices in just 10 countries to monitor programs costing about $1.7 billion in 55 countries. For example, USAID’s Zambia mission had only one Title II-funded foreign-national staff member and one U.S.-national staff member to oversee $4.6 million in U.S. food aid funding in fiscal year 2006. Moreover, the U.S.-national staff member spent only about one-third of his time on food aid activities and two-thirds on the President’s Emergency Plan for AIDS Relief program. USAID regional offices’ monitoring of food aid programs has also been limited. These offices oversee programs in multiple countries, especially where USAID missions lack human-resource capacity. For example, USAID’s East Africa regional office, which is located in Kenya, is responsible for oversight in 13 countries in East and Central Africa, of which 6 had limited or no capacity to monitor food aid activities, according to USAID officials. This regional office, rather than USAID’s Kenya mission, provided monitoring staff to oversee about $100 million in U.S. food aid to Kenya in fiscal year 2006. 
While officials from the regional office reported that their program officers monitor food aid programs, according to an implementing organization official we interviewed, USAID officials visited the project site only 3 times in 1 year. USAID officials told us that they may have multiple project sites in a country and may monitor selected sites based on factors such as severity of need and level of funding. In another case, monitoring food aid programs in the Democratic Republic of Congo (DRC) from the USAID regional office had been difficult due to poor transportation and communication infrastructure, according to USAID officials. Therefore, USAID decided to station one full-time employee in the capital of the DRC to monitor U.S. food aid programs that cost about $51 million in fiscal year 2006. Limited Resources and Restrictions in Their Use Further Constrain Monitoring Efforts Field-level monitoring is also constrained by limited resources and restrictions in their use. Title II resources provide only part of the funding for USAID’s food aid monitoring activities and there are legal restrictions on the use of these funds for non-emergency programs. Other funds, such as from the agency’s overall operations expense and development assistance accounts, are also to be used for food aid activities such as monitoring. However, these additional resources are limited due to competing priorities and their use is based on agency-wide allocation decisions, according to USAID officials. As a result, resources available to hire food aid monitors are limited. For example, about 5 U.S.-national and 5 foreign-national staff are responsible for monitoring all food aid programs in 7 countries in the Southern Africa region, according to a USAID food aid regional coordinator. 
Moreover, because its operations expense budget is limited and Title II funding allows food monitors only for emergency programs, USAID relies significantly on Personal Services Contractors (PSCs)—both U.S.-national and foreign-national hires—to monitor and manage food aid programs in the field. For example, while PSCs can use food aid project funds for travel, USAID’s General Schedule staff cannot. Restrictions on the use of Title II resources for monitoring non-emergency programs further reduce USAID’s monitoring of these programs. USDA administers a smaller proportion of food aid programs than USAID, and its field-level monitoring is more limited than that of USAID-funded programs. In March 2006, USDA’s Inspector General reported that USDA’s Foreign Agricultural Service (FAS) had not implemented a number of recommendations made in a March 1999 report on NGO monitoring. Furthermore, several NGOs informed GAO that the quality of USDA oversight from Washington, D.C., is generally limited in comparison to oversight by USAID. USDA has fewer overseas staff, and they usually focus on monitoring agricultural trade issues and foreign market development. For example, the agency assigns a field attaché—with multiple responsibilities in addition to food aid monitoring—to U.S. missions in some countries. However, FAS officials informed us that, in response to past USDA Inspector General and GAO recommendations, a new monitoring and evaluation unit has recently been established with an increased staffing level to review the semiannual reports, conduct site visits, and evaluate programs. Without adequate monitoring from U.S. agencies, food aid programs are vulnerable to not effectively directing limited food aid resources to the populations most in need. As a result, agencies may not be accomplishing their goal of getting the right food to the right people at the right time. 
Objectives, Scope, and Methodology To address these objectives, we analyzed food aid procurement and transportation data provided by USDA’s KCCO and food aid budget data provided by USDA, USAID, and WFP. We determined that the food aid data obtained were sufficiently reliable for our purposes. We reviewed economic literature on the implications of food aid for local markets, as well as recent reports, studies, and papers issued on U.S. and international food aid programs. We conducted structured interviews of the 14 U.S.- and foreign-flag ocean carriers that transport over 80 percent of U.S. food aid tonnage. We supplemented our structured interview evidence with information from other ocean carriers and shipping experts. In Washington, D.C., we interviewed officials from USAID, USDA, the Departments of State (State), DOD, DOT, and the Office of Management and Budget (OMB). We also met with a number of officials representing NGOs that serve as implementing partners to USAID and USDA in carrying out U.S. food aid programs overseas; freight forwarding companies; and agricultural commodity groups. In Rome, we met with officials from the U.S. Mission to the UN Agencies for Food and Agriculture, the UN World Food Program headquarters, and FAO. We also conducted fieldwork in three countries that are recipients of food aid—Ethiopia, Kenya, and Zambia—and met with officials from U.S. missions, implementing organizations, and relevant host government agencies in these countries and in South Africa. We visited a port in Texas from which food is shipped; two food destination ports, in South Africa and Kenya; and two sites, in Louisiana and Dubai, where U.S. food may be stocked prior to shipment to destination ports. For the countries we visited, we also reviewed numerous documents on U.S. 
food aid, including all the proposals that USDA approved from 2002 to 2006 for the food aid programs it administers, and approximately half of the proposals that USAID approved from 2002 to 2006 for the food aid programs it administers. Finally, in January 2007, we convened a roundtable of 15 experts and practitioners including representatives from academia, think tanks, implementing organizations, the maritime industry, and agricultural commodity groups to further delineate, based on GAO’s initial work, some key challenges to the efficient delivery and effective use of U.S. food aid and to explore options for improvement. We took the roundtable participants’ views into account as we finalized our analysis of these challenges and options. We conducted our work between April 2006 and March 2007 in accordance with generally accepted U.S. government auditing standards. Conclusions U.S. international food aid programs have helped hundreds of millions of people around the world survive and recover from crises since the Agricultural Trade Development and Assistance Act (P.L. 480) was signed into law in 1954. Nevertheless, in an environment of increasing emergencies, tight budget constraints, and rising transportation and business costs, U.S. agencies must explore ways to optimize the delivery and use of food aid. U.S. agencies have taken some measures to enhance their ability to respond to emergencies and streamline the myriad processes involved in delivering food aid. However, opportunities for further improvement in such areas as logistical planning and transportation contracting remain. Moreover, inadequate coordination among food aid stakeholders has hampered ongoing efforts to address some of these logistical challenges. Finally, U.S. agencies’ lack of monitoring leaves U.S. food aid programs vulnerable to wasting increasingly limited resources, not putting them to their most effective use, or not reaching the most vulnerable populations on a timely basis. 
In a draft report that is under review by U.S. agencies, we recommend that, to improve the efficiency of U.S. food aid—in terms of amount, timeliness, and quality—USDA, USAID, and DOT work together and with stakeholders to (1) improve food aid logistical planning through cost-benefit analysis of supply-management options, such as long-term transportation agreements and prepositioning, including consideration of alternative methods, such as those used by WFP; (2) modernize transportation contracting procedures to include, to the extent possible, commercial principles of shared risks, streamlined administration, and expedited payment and claims resolution; (3) seek to minimize the cost impact of cargo preference regulations on food aid transportation expenditures by updating implementation and reimbursement methodologies to account for new supply practices, such as prepositioning, and potential costs associated with older vessels or limited foreign-flag participation; and (4) establish a coordinated system for tracking and resolving food quality complaints. To optimize the effectiveness of food aid, we recommend that USAID and USDA improve monitoring of food aid programs to ensure proper management and implementation. Agency Comments and Our Evaluation USAID, USDA, and DOT provided oral comments on a draft of this statement, and we incorporated them as appropriate. We also provided DOD, State, FAO, and WFP an opportunity to offer technical comments, which we have incorporated as appropriate. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have. GAO Contact and Staff Acknowledgments Should you have any questions about this testimony, please contact Thomas Melito, Director, at (202) 512-9601 or MelitoT@gao.gov. 
Other major contributors to this testimony were Phillip Thomas (Assistant Director), Carol Bray, Ming Chen, Debbie Chung, Martin De Alteriis, Leah DeWolf, Mark Dowling, Etana Finkler, Kristy Kennedy, Joy Labez, Kendall Schaefer, and Mona Sehgal. Appendix I: Program Authorities and Congressional Mandates The United States has principally employed six programs to deliver food aid: P.L. 480 Titles I, II, and III; Food for Progress; McGovern-Dole Food for Education and Child Nutrition; and Section 416(b). Table 2 provides a summary of these food aid programs by program authority. In addition to these programs, resources for U.S. food aid can be provided through other sources, which include the following: International Disaster and Famine Assistance funds, designated for famine prevention and relief, as well as mitigation of the effects of famine by addressing its root causes. Over the past 3 years, USAID has programmed $73.8 million in famine prevention funds. Most of these funds have been programmed in the Horn of Africa, where USAID officials told us that famine is now persistent. According to USAID officials, experience thus far demonstrates that one of the advantages of these funds is that they enable USAID to combine emergency responses with development approaches to address the threat of famine. Approaches should be innovative and catalytic, while providing flexibility in assisting famine-prone countries or regions. Famine prevention assistance funds should generally be programmed for no more than 1 year and seek to achieve significant and measurable results during that time period. Funding decisions are made jointly by USAID's regional bureaus and the Bureau for Democracy, Conflict, and Humanitarian Assistance, and are subject to OMB concurrence and congressional consultations. 
In fiscal year 2006, USAID programmed $19.8 million to address the chronic failure of the pastoralist livelihood system in the Mandera Triangle—a large, arid region encompassing parts of Ethiopia, Somalia, and Kenya that was the epicenter of that year's hunger crisis in the Horn of Africa. In fiscal year 2005, USAID received $34.2 million in famine prevention funds for activities in Ethiopia and six Great Lakes countries. The activities in Ethiopia enabled USAID to intervene early enough in the 2005 drought cycle to protect the livelihoods—as well as the lives—of pastoralist populations in the Somali Region, which were not yet protected by Ethiopia's Productive Safety Net program. In fiscal year 2004, the USAID mission in Ethiopia received $19.8 million in famine prevention funds to enhance and diversify the livelihoods of the chronically food insecure. State's Bureau of Population, Refugees, and Migration (PRM) provides limited amounts of cash to WFP to purchase food locally and globally in order to remedy shortages caused by breaks in refugee feeding pipelines. In these situations, PRM generally provides about 1 month's worth of refugee feeding needs—PRM will not usually provide funds unless USAID's resources have been exhausted. Funding from year to year varies. In fiscal year 2006, PRM's cash assistance to WFP to fund operations in 14 countries totaled about $15 million, including $1.45 million for humanitarian air service. In addition, PRM also funds food aid and food security programs for Burmese refugees in Thailand. In fiscal year 2006, PRM provided $7 million in emergency supplemental funding to the Thailand-Burma Border Consortium, most of which supported food-related programs. PRM officials told us that they coordinate efforts with USAID as needed. Table 3 lists congressional mandates for the P.L. 480 food aid programs and the target for fiscal year 2006. 
Related GAO Products Darfur Crisis: Progress in Aid and Peace Monitoring Threatened by Ongoing Violence and Operational Challenges, GAO-07-9. (Washington, D.C.: Nov. 9, 2006). Darfur Crisis: Death Estimates Demonstrate Severity of Crisis, but Their Accuracy and Credibility Could Be Enhanced, GAO-07-24. (Washington, D.C.: Nov. 9, 2006). Maritime Security Fleet: Many Factors Determine Impact of Potential Limits on Food Aid Shipments, GAO-04-1065. (Washington, D.C.: Sept. 13, 2004). Foreign Assistance: Lack of Strategic Focus and Obstacles to Agricultural Recovery Threaten Afghanistan's Stability, GAO-03-607. (Washington, D.C.: June 30, 2003). Foreign Assistance: Sustained Efforts Needed to Help Southern Africa Recover from Food Crisis, GAO-03-644. (Washington, D.C.: June 25, 2003). Food Aid: Experience of U.S. Programs Suggests Opportunities for Improvement, GAO-02-801T. (Washington, D.C.: June 4, 2002). Foreign Assistance: Global Food for Education Initiative Faces Challenges for Successful Implementation, GAO-02-328. (Washington, D.C.: Feb. 28, 2002). Foreign Assistance: U.S. Food Aid Program to Russia Had Weak Internal Controls, GAO/NSIAD/AIMD-00-329. (Washington, D.C.: Sept. 29, 2000). Foreign Assistance: U.S. Bilateral Food Assistance to North Korea Had Mixed Results, GAO/NSIAD-00-175. (Washington, D.C.: June 15, 2000). Foreign Assistance: Donation of U.S. Planting Seed to Russia in 1999 Had Weaknesses, GAO/NSIAD-00-91. (Washington, D.C.: Mar. 9, 2000). Foreign Assistance: North Korean Constraints Limit Food Aid Monitoring, GAO/T-NSIAD-00-47. (Washington, D.C.: Oct. 27, 1999). Foreign Assistance: North Korea Restricts Food Aid Monitoring, GAO/NSIAD-00-35. (Washington, D.C.: Oct. 8, 1999). Food Security: Preparations for the 1996 World Food Summit, GAO/NSIAD-97-44. (Washington, D.C.: Nov. 1996). Food Security: Factors That Could Affect Progress Toward Meeting World Food Summit Goals, GAO/NSIAD-99-15. (Washington, D.C.: Mar. 1999). International Relations: Food Security in Africa, GAO/T-NSIAD-96-217. 
(Washington, D.C.: July 31, 1996). Food Aid: Competing Goals and Requirements Hinder Title I Program Results, GAO/GGD-95-68. (Washington, D.C.: June 26, 1995). Foreign Aid: Actions Taken to Improve Food Aid Management, GAO/NSIAD-95-74. (Washington, D.C.: Mar. 23, 1995). Maritime Industry: Cargo Preference Laws, Estimated Costs and Effects, GAO/RCED-95-34. (Washington, D.C.: Nov. 30, 1994). Private Voluntary Organizations' Role in Distributing Food Aid, GAO/NSIAD-95-35. (Washington, D.C.: Nov. 23, 1994). Cargo Preference Requirements: Objectives Not Met When Applied to Food Aid Programs, GAO/GGD-94-215. (Washington, D.C.: Sept. 29, 1994). Public Law 480 Title I: Economic and Market Development Objectives Not Met, GAO/T-GGD-94-191. (Washington, D.C.: Aug. 3, 1994). Multilateral Assistance: Accountability for U.S. Contributions to the World Food Program, GAO/T-NSIAD-94-174. (Washington, D.C.: May 5, 1994). Foreign Assistance: Inadequate Accountability for U.S. Donations to the World Food Program, GAO/NSIAD-94-29. (Washington, D.C.: Jan. 28, 1994). Foreign Assistance: U.S. Participation in FAO's Technical Cooperation Program, GAO/NSIAD-94-32. (Washington, D.C.: Jan. 11, 1994). Food Aid: Management Improvements Are Needed to Achieve Program Objectives, GAO/NSIAD-93-168. (Washington, D.C.: July 23, 1993). Cargo Preference Requirements: Their Impact on U.S. Food Aid Programs and the U.S. Merchant Marine, GAO/NSIAD-90-174. (Washington, D.C.: June 19, 1990). Status Report on GAO's Reviews on P.L. 480 Food Aid Programs, GAO/T-NSIAD-90-23. (Washington, D.C.: Mar. 21, 1990). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States is the largest provider of food aid in the world, accounting for over half of all global food aid supplies intended to alleviate hunger. Since the 2002 reauthorization of the Farm Bill, Congress has appropriated an average of $2 billion per year for U.S. food aid programs, which delivered an average of 4 million metric tons of agricultural commodities per year. Despite growing demand for food aid, rising business and transportation costs have contributed to a 43-percent decline in average tonnages delivered over the last 5 years. For the largest U.S. food aid program, these costs represent approximately 65 percent of total food aid expenditures, highlighting the need to maximize the efficiency and effectiveness of food aid. To inform Congress as it reauthorizes the 2007 Farm Bill, GAO examined some key challenges to the (1) efficiency of delivery and (2) effective monitoring of U.S. food aid. Multiple challenges combine to hinder the efficiency of delivery of U.S. food aid by reducing the amount, quality, and timeliness of food provided. These challenges include (1) funding and planning processes that increase delivery costs and lengthen time frames; (2) transportation contracting practices that create high levels of risk for ocean carriers, resulting in increased rates; (3) legal requirements that can result in the awarding of food aid contracts to more expensive service providers; and (4) inadequate coordination between U.S. agencies and food aid stakeholders in systematically addressing food delivery problems, such as spoilage. U.S. agencies have taken some steps to address timeliness concerns. USAID has been stocking or prepositioning food commodities domestically and abroad and USDA has implemented a new transportation bid process, but the long-term cost effectiveness of these initiatives has not yet been measured. 
Given limited food aid resources and increasing emergencies, ensuring that food reaches the most vulnerable populations—such as poor women who are pregnant or children who are malnourished—is critical to enhancing its effectiveness. However, USAID and USDA do not sufficiently monitor the effectiveness of food aid programs, particularly in recipient countries, due to limited staff, competing priorities, and restrictions in the use of food aid resources. For example, although USAID has some non-Title II-funded staff assigned to monitoring, it had only 23 Title II-funded staff assigned to missions and regional offices in just 10 countries to monitor programs costing about $1.7 billion in 55 countries in fiscal year 2006. As a result of such limitations, U.S. agencies may not be sufficiently accomplishing their goals of getting the right food to the right people at the right time.
Background NIH’s Organization As the central office for NIH, OD establishes NIH policy and broad themes for the agency to pursue, such as ensuring a sustainable scientific workforce, based on national needs and scientific opportunities. In addition, OD is responsible for coordinating the programs and activities that span NIH components, particularly research initiatives and issues involving more than 1 of the 27 ICs. OD is also responsible for ensuring that scientifically based strategic planning is implemented in support of research priorities, and that NIH’s resources are sufficiently allocated for research projects identified in strategic plans. NIH conducts and sponsors biomedical research through its ICs, each of which is charged with a specific mission. ICs’ missions generally focus on a specific disease, a particular organ, or a stage in life, such as childhood or old age. The ICs support, plan, and manage their own research programs in keeping with OD policy and priorities. Within an IC, there can be a number of offices, centers, and divisions that focus on specific aspects of the IC’s mission. For example, NCI has a Division of Cancer Epidemiology and Genetics, as well as a Division of Cancer Treatment and Diagnosis. Extramural and Intramural Research Supported by NIH ICs accomplish their missions chiefly through extramural and intramural research. Extramural research accounted for more than 80 percent of NIH’s budget in fiscal year 2012. This research is conducted at 2,500 universities, medical schools, research organizations, and companies that are awarded extramural research grants or extramural research and development contracts through NIH’s competitive process. 
Twenty-four of the 27 ICs fund extramural research projects, and these ICs make final decisions on which extramural research projects to fund following a standard peer review process. Most IC extramural funding is provided for what NIH considers “unsolicited” research and research training projects, for which applications are submitted in response to broad funding opportunity announcements that span the breadth of the NIH mission. In addition, to encourage and stimulate research and the submission of research applications in specific areas, many ICs will issue solicitations that are more narrow in scope in the form of program announcements, requests for applications, and requests for proposals. When reviewing applications for extramural research projects, NIH follows a process of peer review, established by law. This peer review system has two sequential levels of peer review. According to NIH officials, the first level involves panels of experts who assess the scientific merit of the proposed science. The second level involves panels of experts and leaders of non-science fields, including patient advocates, that, in addition to scientific merit, also consider the IC’s mission and strategic plan goals, public health needs, scientific opportunities, and portfolio balance. After NIH’s peer review process is concluded, IC directors make extramural funding decisions. Intramural research, which accounts for approximately 10 percent of NIH’s budget, is conducted by NIH scientists in NIH laboratories. This includes about 5,300 scientists and technical support staff who are employees, and another 5,000 young scientists at various stages of research training who come to NIH for a few years to work as non-employee trainees, including 3,800 postdoctoral fellows. All but 3 of the 27 ICs have an intramural research program, but the size, structure, and activities of the programs vary greatly. 
NIH’s Research, Condition, and Disease Categorization System In January 2007, Congress directed NIH to establish an electronic system to categorize the research grants and activities of OD and all the ICs. In response, NIH created RCDC. Implemented in February 2008, RCDC uses a computer-based text-mining tool that recognizes words and phrases in project descriptions in order to assign NIH projects to at least one of 235 categories of diseases, conditions, and research areas that were developed for reporting to Congress and the public. NIH officials said that RCDC serves as NIH’s primary computerized reporting process to categorize its research funding. The system includes reporting tools that can be used to generate publically-available, web-based reports on total funding amounts for the research projects related to each RCDC category. Individual Institutes and Centers Set Their Own Research Priorities Using a Variety of Approaches, Including Strategic Planning, Annual Planning, and Review of Research Portfolios Individual ICs at NIH set their own research priorities, and we found that the five selected ICs we reviewed did so considering similar factors and using various priority-setting approaches. Officials at all five of the ICs stated that their mission and available appropriations inform priority setting approaches. Officials at one IC noted that their IC’s mission provides context related to why the IC was developed initially and insight into the emerging areas of research. Officials stated that an IC’s appropriations not only set funding parameters, but may also influence priority setting if the appropriations include mandated spending by Congress for a specific disease. Some IC officials noted that because the costs of potential research projects generally exceed the available appropriation, the ICs generally must prioritize among research projects. 
In priority setting, IC officials also reported taking into consideration scientific needs and opportunities, gaps in funded research, the burden of disease in a population, and public health need, such as an emerging public health threat that needs to be addressed, like influenza. While individual ICs used different approaches to priority setting, all five selected ICs we reviewed reported using some combination of strategic planning, annual planning, and periodic review and evaluation of their research portfolios as part of their approach to priority setting. Strategic planning: All five ICs we interviewed developed strategic plans—consistent with law and NIH policy—to set priorities and goals for research funding. According to officials at selected ICs, the development of these plans is guided by various processes at each IC. Although these processes vary by IC, they include an opportunity to solicit input from stakeholders, including the scientific community, as well as review by IC staff and leadership. Examples of IC strategic planning activities include the following: NIAID used a framework organized around various scientific areas that encompassed its research to develop the strategic plans it published in 2000, 2008, and 2013. When developing the plans, NIAID convened groups of NIAID subject matter experts for each scientific area and considered input offered by external organizations and scientists to guide the process. The experts deliberated priorities identified for each scientific area and suggested revisions to the draft plan. For example, officials stated that NIAID has seen a shift in the emphasis on research efforts related to human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) from studying single-drug therapy to the current focus on finding improved HIV treatments and tools for preventing infection. This new focus was then incorporated as part of the strategic plan published in 2013. 
Draft strategic plans were reviewed by the NIAID Advisory Council. After a final review by the director of NIAID, the finalized plans were published on NIAID’s website. NHLBI officials said that, as of November 2013, the IC was revising the strategic plan it published in 2007 as part of its strategic planning process under the direction of a new director appointed in 2012. As part of the revised process, NHLBI created the Strategic Investment and Management Steering Committee, which will help establish strategic goals as well as advise leadership about whether proposed activities are consistent with the IC’s strategic plan. Officials stated that the strategic planning process will engage both NHLBI scientific staff and the larger extramural community because they are knowledgeable about research gaps within their respective fields. Further, NHLBI officials stated that input provided by stakeholders also helps to inform development of disease-specific planning documents, such as one for asthma and another on sleep disorders. Annual planning activities: In addition to the development of strategic plans, some IC officials we interviewed told us that they conduct annual planning activities as part of their process to set priorities for research funding. They told us that these annual planning activities address ongoing and emerging needs. While the processes vary across the ICs in terms of the level of structure and formality, officials stated that during the annual planning process these ICs typically consider factors such as scientific needs, research gaps, scientific opportunities, public health needs, and the need to balance the various types of research conducted to address all areas of the mission. Examples of annual planning activities include the following: NIAID uses an annual planning process to develop and select initiatives that address special needs, gaps, and opportunities in relevant research areas. 
According to NIAID officials, these initiatives are developed through a structured process centered around two primary events—the annual Summer Policy Retreat and the annual Winter Program Review. NIAID staff also review their scientific areas for the latest information on burden of illness and state of scientific progress, which is used to inform the development of future initiatives. According to agency officials, NIAID staff use this information, as well as input from the scientific community, to prepare and present a concept of an initiative to NIAID’s National Advisory Council, which reviews, comments on, and decides whether to approve the initiative. Once an initiative has been approved, NIAID solicits research applications through a request for application, a program announcement, or a request for proposal. NIDDK uses an annual process that requires NIDDK scientific program staff within the IC’s divisions to review the research portfolio and to identify research opportunities. According to NIDDK, this effort is informed by a number of factors, such as ongoing input from the extramural research and health advocacy communities and recent research advances. They said that each division then presents initiative concepts built on these research opportunities at NIDDK senior leadership meetings. The NIDDK director prioritizes the initiative concepts and determines which will move forward based on consideration of merit and other factors. These concepts go through a clearance process, including review from the NIDDK National Advisory Council. NIDDK develops funding opportunity announcements based on these identified research opportunities, such as requests for application that address specific scientific questions or diseases. In addition, NIDDK conducts divisional planning and prioritization activities during monthly meetings, retreats, and other activities, including consideration of opportunities for funds to support targeted emphasis areas. 
Review and evaluation of research portfolio: Officials from the five selected ICs we interviewed stated they conduct reviews of their research portfolio to help ensure existing priorities reflect and align with current scientific opportunities, research gaps, and emerging science. This includes periodic formal program assessments of their research portfolio, which the ICs used to determine if the IC is meeting its overall priorities and goals, to maintain portfolio balance, and to make any changes to priorities over time as science evolves. Examples of portfolio review and evaluation activities include the following: NCI officials stated that the IC conducts portfolio analyses, which involve examining research areas NCI has funded to identify opportunities for research. Officials stated that the portfolio analysis is an iterative process that occurs continually throughout the year. One example of this portfolio analysis has involved publishing broad questions related to five themes for the research community on NCI’s website and using the responses received to inform development of requests for application and program announcements published by NCI. Officials said they aim to generate about four new questions each year and that the questions are generally very broad and do not usually address specific diseases. For example, one question recently asked how the level, type, or duration of physical activity influences cancer risk and prognosis. NIGMS conducts formal program evaluations to determine what benefits the program has produced overall, including scientific advances, new technologies, and cutting-edge paradigm shifts, such as changes in the understanding of human biology. In addition, officials told us that NIGMS divisions conduct an annual review, which is generally prepared for NIGMS’s director, and provides information about the scientific advances that can be attributed to the work conducted in each division. 
This includes items such as the number of grants funded, the number of new principal investigators, the types of research conducted, and an overview of any new science performed. While the individual ICs set their own priorities, according to NIH officials, the OD provides leadership to the ICs and helps coordinate priority-setting activities, especially for efforts that involve multiple ICs. For example, NIH officials reported that the director of NIH meets with all IC directors weekly to discuss research priorities, investments, and concerns that may affect an IC or NIH overall. In addition, the Advisory Committee to the Director assists the director of NIH with making major planning and policy decisions, including those related to research priorities and the allocation of funds for major, NIH-wide projects, and identifies areas that have the potential for significant improvements both within NIH and within the scientific community at large. For example, the NIH director charged the committee with examining NIH’s data needs to determine whether the agency is positioned to handle significant amounts of data and how to make these data accessible to researchers. In addition, NIH established a special division within the Office of the Director—the Division of Program Coordination, Planning, and Strategic Initiatives—to help manage large and complex research portfolios across NIH. The offices within this division organize formal discussions to obtain input from all of the IC directors as well as members of the scientific community to identify areas of significant scientific interest or need that typically span the interests of multiple ICs. NIH officials also stated that although each IC has at least one strategic plan in place, in some instances NIH has developed strategic plans for particular disease or topic areas that cross multiple ICs, such as one related to women’s health research. 
NIH Funding Levels Varied Widely For Selected Categories of Research, Conditions, and Diseases Corresponding to the Leading Causes of Death and Chronic Conditions In fiscal year 2012, NIH reported funding levels that ranged widely—from $13 million for projects in one RCDC category to more than $5.6 billion for another—for the 40 different RCDC categories we examined. (See table 1.) We determined that these RCDC categories were the best match for the most frequent causes of death in the United States, the most frequent causes of death globally, and the most prevalent chronic conditions in the United States. (See app. II for an explanation of how we identified these diseases and conditions and matched them with RCDC categories.) NIH officials said that RCDC is the agency’s official system for reporting research funding across the agency’s ICs and that it provides a method for reporting consistent information about NIH funding. As noted by the agency, RCDC produces standard reports across all ICs using a process that is reproducible. To facilitate this, all ICs are to use the same definitions of research, diseases, and conditions for the RCDC categories. Further, the system allows for detailed reports, including the total funding for a category as well as specific information about the projects under each category, such as the title of the project, the IC supporting the research, and the project number. Agency officials said that the system was not designed to estimate a total, non-duplicated amount of funding specific to a given disease or condition because RCDC categories are neither mutually exclusive nor exhaustive. Specifically: Projects may be reported in multiple categories. NIH officials said that the categories within the system are not mutually exclusive and therefore a project may be included in multiple categories. Officials told us that, on average, a single project may fall into five or six RCDC categories. 
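NIH's actual RCDC text-mining algorithm is not described in this report; as a rough illustration only, the multi-label assignment it describes, and the resulting overlap that makes category totals non-additive, can be sketched as follows. The categories, matching terms, and dollar figures below are invented for illustration and are not NIH data.

```python
# Hypothetical sketch of RCDC-style multi-label categorization. This is NOT
# NIH's actual algorithm; categories, terms, and funding amounts are invented
# to illustrate why per-category totals overlap and cannot simply be summed.
CATEGORY_TERMS = {
    "Cancer": {"cancer", "tumor"},
    "Breast Cancer": {"breast cancer"},
    "Diabetes": {"diabetes", "insulin"},
}

def categorize(description):
    """Assign a project to every category whose terms appear in its text."""
    text = description.lower()
    return {cat for cat, terms in CATEGORY_TERMS.items()
            if any(term in text for term in terms)}

# (project description, funding in $ millions)
projects = [
    ("Genomic markers in breast cancer tumors", 2.0),
    ("Insulin signaling pathways in diabetes", 1.5),
]

# A project's full cost counts toward EVERY matching category, so the
# category totals (2.0 Cancer + 2.0 Breast Cancer + 1.5 Diabetes = 5.5)
# exceed the 3.5 actually spent across the two projects.
totals = {cat: 0.0 for cat in CATEGORY_TERMS}
for description, cost in projects:
    for cat in categorize(description):
        totals[cat] += cost
```

The first project lands in both the "Cancer" and "Breast Cancer" categories, mirroring the report's point that a single project typically falls into several RCDC categories and that one category (breast cancer) can be wholly contained in another (cancer).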
Categories may be contained in another category. Some categories are inherently related and therefore an entire RCDC category can also be contained in another category. For example, the category of cancer includes other cancer categories, such as breast cancer, lung cancer, or prostate cancer, and the research funding reported in each of these categories will also be reported in the cancer research total. Categories do not exist for all diseases. RCDC also does not capture the funding for all diseases that NIH studies. As of November 2013, RCDC publicly reported on 235 different categories, but agency officials told us that there are an additional 30 to 40 categories used for congressional reporting only, and about 50 categories on a waiting list to be developed for inclusion in the system. Not all projects that NIH funds are included. The protocols that NIH develops for tracking funding include certain thresholds for inclusion. Therefore, a project that is only minimally related to a particular category may not be included in the funding for that category because the description of the project does not adequately match the terms defining that category. NIH officials told us that 3 to 5 percent of NIH funded research projects do not appear in any RCDC category. Some ICs have their own systems that track and report funding within their research portfolios. Of the five selected ICs we reviewed, two had their own systems for tracking their funding. NCI maintains a publicly available website that communicates funding decisions across the research supported by NCI. The system was developed in 1998 and categorizes research into more than 40 specific cancer types, as well as almost 50 research topics that are not disease-specific. Funding for individual projects may be separated and reported in the NCI system by the specific types of cancers being studied. 
According to NIH officials, the system is limited to those research projects funded by NCI and therefore does not include information about research studies addressing cancer that are funded solely by other ICs. NCI officials told us that they use the system to report data to stakeholders, including Congress, and that the analyses they conduct based on these data enhance NCI’s ability to plan and monitor its scientific investment. NIAID also has an internal coding system which, according to agency officials, was instituted in 1979 and codes each individual research project it funds by areas of study. Staff members manually assign codes. NIAID officials noted that projects may have multiple codes depending on the specific goals of the project, and these codes reflect the percentage of relevance to that specific goal, such that the total values add to 100 percent. Officials noted that the system helps NIAID respond to requests for funding information that is not included in RCDC. For example, when addressing a question about how much NIAID has invested in influenza research, officials said they examine research project funding for the various subtypes of influenza such as H1N1 and avian influenzas (including H5N1 and H7N9). These subcategories are included in the internal coding system, but are not covered by RCDC. Officials noted that NIAID uses RCDC for the official budget numbers that NIH reports to Congress. Agency Comments We provided a draft of this report to HHS. The Department provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Department of Health and Human Services, the Director of the National Institutes of Health, and other interested parties. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or kohnl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. Appendix I: Studies about the Relationship between Research Funding and Burden of Disease To provide information on studies about the relationship between research funding and burden of disease at the National Institutes of Health (NIH) and in other countries, we identified such studies relevant to NIH through on-line searches and we provide information on the two most recent of the studies. Additionally, for other countries, we identified published studies through a search of various databases, and we provide information from two of the more recent such studies. We did not identify research funding levels in other countries. Studies have been conducted that examine the relationship between research funding and burden of disease in the United States and other countries. Some researchers have reported that no single measure of disease burden captures the impact of various diseases and that different measures of burden may result in different conclusions about funding. Studies on the relationship between research funding by NIH and burden of disease found varied relationships depending upon the burden of disease measures used. For example, A 2013 study assessed the allocation of research funds across 107 diseases using NIH’s Research, Condition, and Disease Categorization system (RCDC) funding data and found a strong and statistically significant relationship between NIH funding and deaths and hospitalizations. As a result, according to the study, the data suggest that NIH funding is responsive to these two measures of disease burden. 
Further, the study noted that the data were consistent with the argument that it is more feasible for NIH to respond to disease burden considerations through directed or applied research funding, such as funding for clinical trials, than through investigator-initiated basic research.

A 2011 study to assess the correlation between NIH research funding and burden of disease using data from 2004 and 2006 found that current levels of NIH disease-specific research funding correlated modestly with U.S. disease burden. The study noted that there could be a number of reasons that funding and burden of disease were not better correlated, including that basic science research has consistently accounted for 55 percent of NIH spending and is difficult to attribute to a specific disease, contributing uncertainty to the analysis and reducing correlations. Measures of burden of disease for this study were incidence, prevalence, mortality, years of life lost to premature death (YLL), and disability-adjusted life years (DALY).

Studies of research funding in other countries have also found variations in relationships between funding and burden of disease. For example,

A 2012 study found that research expenditures by governmental agencies and charities for four of the foremost chronic diseases in the United Kingdom—cancer, coronary heart disease, stroke, and dementia—were not aligned with burden of disease. Specifically, research funding for dementia and stroke was disproportionately small in comparison with funding for cancer and coronary heart disease. In this study, burden of disease was measured by prevalence and DALYs.

A 2004 study found a significant relationship between research funding by the National Health and Medical Research Council in Australia and several measures of burden of disease, with variations over the 6-year span reviewed in the study depending on the measure used.
Measures of burden of disease for this study were incidence, mortality, YLLs, years of life lost to disability, and DALYs. The National Health and Medical Research Council is the main Australian body charged with the responsibility of supporting medical and public health research and training in Australia.

Appendix II: Leading Causes of Death and Chronic Conditions and their Corresponding National Institutes of Health Categories

To determine the leading causes of death in the United States, we reviewed data on causes of death in 2011 that were reported by the Centers for Disease Control and Prevention (CDC). First, we identified the 15 leading causes of death reported by CDC. Then, we identified subcategories of those leading causes of death. To do so, we used as the cutoff the number of deaths for the 15th leading cause of death—which was 18,090 deaths from pneumonitis due to solids and liquids. We included those subcategories of causes of death, such as pneumonia, where the number of deaths reported was greater than 18,090. See table 2 for the leading causes of death in the United States in 2011, including the subcategories with at least 18,090 related deaths in 2011, and the number of deaths attributed to these causes.

To determine the leading causes of death globally, we reviewed the analysis in the Global Burden of Disease Study. We identified the 15 leading causes of death globally in 2010 that were reported in this study. See table 3 for the leading causes of death globally.

To determine the most prevalent chronic diseases and conditions in the United States, CDC provided us with a list of the 13 most prevalent chronic diseases and conditions for adults, and four subcategories. See table 4 for a list of the most prevalent chronic diseases and conditions for adults identified by CDC.
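The subcategory selection described above amounts to a simple threshold filter. A minimal sketch of that rule follows; the 18,090 cutoff is the figure the report cites, but the subcategory names and death counts below are invented for illustration and are not CDC data.

```python
# Illustrative sketch of the report's subcategory cutoff rule.
# CUTOFF is the 2011 death count for the 15th leading cause of death
# (pneumonitis due to solids and liquids), as cited in the report.
CUTOFF = 18_090

# Hypothetical subcategory death counts (invented for illustration).
subcategory_deaths = {
    "pneumonia": 52_000,
    "hypertensive renal disease": 2_500,
}

# Keep only subcategories whose reported deaths exceed the cutoff.
included = {name: n for name, n in subcategory_deaths.items() if n > CUTOFF}
print(sorted(included))  # ['pneumonia']
```

Only subcategories above the threshold would appear alongside the 15 leading causes in a table built this way.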
To determine the corresponding categories from the National Institutes of Health’s (NIH) Research, Condition, and Disease Categorization system (RCDC) for the diseases and conditions that are the leading causes of death in the United States, the leading causes of death globally, and the most prevalent chronic diseases and conditions in the United States, we identified the International Classification of Diseases (ICD-10) codes associated with each of these diseases and conditions and, using their related descriptions, compared them with the RCDC category descriptions. We then provided our list of diseases, conditions, and ICD-10 codes with the corresponding RCDC categories to NIH for the agency’s review and concurrence. NIH officials confirmed most of our matches. For some of the diseases and conditions, officials noted that the RCDC category we identified was the closest match but was substantially broader or narrower than the disease or condition it was selected to represent. In other cases, NIH officials noted that there was not an RCDC category that was a close enough fit to the disease category we were trying to represent. See table 5 for the leading causes of death in the United States and globally, as well as the most prevalent chronic diseases and conditions, and their corresponding RCDC category or categories.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact Staff Acknowledgments

In addition to the contact name above, Karen Doran (Assistant Director), George Bogart, Adrienne Daniels, Carolyn Feis Korman, Cathy Hamann, Natalie Herzog, Amy Leone, and Andrea Richardson made key contributions to this report.
NIH is the nation's leader in sponsoring and conducting biomedical research. In fiscal year 2012, NIH had a budget of almost $31 billion, over 80 percent of which was used to fund extramural research that supports scientists and research personnel working at universities, medical schools, and other research institutions. Twenty-four of NIH's 27 ICs that support extramural research are focused on particular diseases, conditions, or research areas, and these ICs have their own appropriations. Decisions about which projects are funded are made by these individual ICs. NIH reports funding for 235 research, condition, and disease categories in RCDC.

GAO was asked to review NIH funding related to leading diseases and health conditions. GAO examined (1) how research priorities are set at NIH, and (2) NIH allocations of research funding across selected diseases and conditions. For five ICs—National Cancer Institute; National Heart, Lung, and Blood Institute; National Institute of Allergy and Infectious Diseases; National Institute of Diabetes and Digestive and Kidney Diseases; and National Institute of General Medical Sciences—GAO reviewed documents and interviewed IC officials about priority setting. GAO reviewed NIH fiscal year 2012 funding reported by RCDC for 40 research, condition, and disease categories related to the leading causes of death in the United States and globally, and the most prevalent chronic diseases and conditions in the United States.

Individual institutes and centers (ICs) at the National Institutes of Health (NIH) set their own research priorities, and GAO found that the five selected ICs it reviewed—those awarding the largest amounts of research funding—did so considering similar factors and using various priority-setting approaches. Agency officials stated that the ICs' missions and appropriations inform their priority-setting approaches.
Some IC officials noted that because the costs of potential research projects generally exceed the available appropriation, the ICs must prioritize among research projects. In priority setting, IC officials reported taking into consideration scientific needs and opportunities, gaps in funded research, the burden of disease in a population, and public health need, such as an emerging public health threat like influenza that needs to be addressed. While each IC GAO examined had its own approach for setting priorities, they all considered the input of stakeholders, including the scientific community, and used some similar strategies. All five ICs developed strategic plans, though the process varied by IC. Some ICs also used annual planning activities in various forms, which then guided funding opportunity announcements. All five ICs also conducted reviews and evaluations of their research portfolios to ensure that their priorities align with scientific opportunities, research gaps, and emerging science. In addition to these efforts at the IC level, agency officials told GAO that the NIH Office of the Director provides leadership and coordinates priority setting activities, especially for those activities that involve multiple ICs. NIH reported funding levels that varied widely for the 40 different Research, Condition, and Disease Categorization system (RCDC) categories GAO examined that correspond to the leading causes of death and the most prevalent chronic conditions. For example, NIH reported actual fiscal year 2012 funding levels ranging from $13 million for projects in the fibromyalgia category to more than $5.6 billion for projects in the cancer category. Although these categories are part of NIH's RCDC, which is used to categorize the research activities across the agency, agency officials said that the system cannot estimate a total, non-duplicated amount of funding that is specific to a given disease or condition. 
This is because RCDC categories are neither mutually exclusive nor exhaustive. For example, projects may be included in multiple RCDC categories, some categories are related to each other and therefore some categories may also be included within another, and funding for all diseases is not captured in the system. While RCDC is NIH's official system for reporting research funding across the ICs, two of the five ICs that GAO reviewed—the National Cancer Institute (NCI) and the National Institute of Allergy and Infectious Diseases—had their own systems for tracking their funding, which allowed them to provide more detailed information than that available from RCDC. For example, NCI has a publicly available website that specifies funding for more than 40 specific cancer types as well as almost 50 research topics that are not disease-specific. Funding for individual projects may be broken out by the specific cancer types studied. According to officials, the system enhances NCI's ability to plan and monitor its scientific investment. The Department of Health and Human Services provided technical comments, which GAO incorporated as appropriate.
Background

Federal Law and Policy Call for the Development of an ISE

Because of the information-sharing weaknesses among federal departments and agencies that became apparent after September 11, the Congress and the administration have called for a number of terrorism-related information-sharing initiatives, including the development of an ISE, as the following instances illustrate:

Section 1016 of the Intelligence Reform and Terrorism Prevention Act of 2004 (Intelligence Reform Act), enacted December 17, 2004, as amended by the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act), enacted August 3, 2007, requires the President to take action to facilitate the sharing of terrorism-related information by establishing an ISE. The Act required the President to, among other things, appoint a Program Manager to plan for, oversee implementation of, and manage the ISE, and established an ISC to assist the President and Program Manager in these duties. In addition, the Act required the President, with the assistance of the Program Manager, to submit to Congress a report containing an implementation plan for the ISE no later than 1 year after the date of enactment (enacted December 17, 2004) and specified 11 elements to be included in the plan. These elements include, among other things, the function, capabilities, resources, and concept for the design of the ISE; project plan; budget estimates; performance metrics and measures; and defined roles for all stakeholders. The Act also required annual performance management reports, beginning not later than 2 years after enactment, on the state of the ISE and of information sharing across the federal government.

On December 16, 2005, the President issued a memorandum to implement measures consistent with establishing and supporting the ISE.
The memorandum sets forth five information sharing guidelines: (a) defining common standards for how information is acquired, accessed, shared, and used within the ISE; (b) developing a common framework for sharing information between and among executive departments and agencies; state, local, and tribal governments; law enforcement agencies; and the private sector; (c) standardizing the procedures for sensitive but unclassified information; (d) facilitating the sharing of information between executive departments and agencies and foreign governments; and (e) protecting the information privacy rights and other legal rights of Americans. The memorandum also directs the heads of executive departments and agencies to actively work to promote a culture of information sharing within their respective agencies and to ensure that ongoing information-sharing efforts are leveraged in the development of the ISE.

In October 2007, the President issued a National Strategy for Information Sharing. The strategy is focused on improving the sharing of homeland security, terrorism, and law enforcement information related to terrorism within and among all levels of government and the private sector and articulates the administration’s vision on terrorism-related information sharing. The strategy notes guiding principles and efforts taken to improve information sharing across all levels of government, the private sector, and foreign partners to date. It also contains an appendix that elaborates on the roles of federal, state, local, and tribal authorities in information sharing and expands on the role of state and major urban area fusion centers.

Scope and Purpose of the ISE

The ISE is not bounded by a single federal agency or component. While the Program Manager has been placed within the Office of the Director of National Intelligence, from an operational perspective, the ISE is to reach across all levels of government as well as the private sector and foreign partners.
As such, the program is a broad-based coordination and collaboration effort among various stakeholders. In essence, the ISE can be viewed as a set of cross-cutting communication links—encompassing policies, processes, and technologies—among and between the various entities that gather, analyze, and share terrorism-related information. According to officials at the Office of the Program Manager, their focus is primarily to ensure that all appropriate terrorism-related information is made available to analysts and others who need it when they need it. The Program Manager is not responsible for the collection or analysis of terrorism-related information.

The ISE implementation plan, released by the Program Manager in November 2006, is to be the guiding document describing how the ISE is to be implemented. This plan addressed at a very general and preliminary level the ISE’s information-sharing strategy, roles, and needs. The document set out to include: (1) an operational concept; (2) the implementation overview; (3) a summary of desired operational capabilities; (4) means to develop an architecture and standards; (5) an approach to sharing with non-federal partners; (6) ISE enabling activities; (7) implementation management; (8) recommendations on a structure for expansion and future management; and (9) a summary of implementation actions. The plan also acknowledged numerous challenges to be addressed, including promoting a culture of information sharing, protecting information privacy, and handling terrorism-related information. Under the plan, the ISE comprises five “communities of interest,” encompassing intelligence, law enforcement, defense, homeland security, and foreign affairs. Each community may comprise multiple federal organizations and other stakeholders; information is to be shared across these communities.
Key ISE Players and Roles

ISE leadership lies with the presidentially appointed Program Manager, for whom the Intelligence Reform Act, as amended, lays out specific requirements. Pursuant to the Act, the Program Manager, in consultation with the head of any affected department or agency, has governmentwide authority over the sharing of terrorism-related information within the scope of the ISE and is required to plan for, oversee implementation of, and manage the ISE. For example, the Program Manager, in consultation with the ISC and consistent with the direction and policies issued by the President, the Director of National Intelligence, and the Director of the Office of Management and Budget, is to issue governmentwide procedures, guidelines, instructions, and functional standards, as appropriate, for the management, development, and proper operation of the ISE. In fulfilling this responsibility, the Program Manager must, among other things, take into account the varying missions and security requirements of agencies participating in the ISE and ensure the protection of privacy and civil liberties. The implementation plan further described areas of responsibility in broad terms for the Program Manager. The plan states, for example, that the Program Manager is to “act as the central agent to improve terrorism-related information sharing among ISE participants by working with them to remove barriers, facilitate change, and ensure that ISE implementation proceeds efficiently and effectively.” In interpreting these responsibilities, the Program Manager has exercised discretion by focusing on, for example, facilitating information sharing across the five ISE communities. To support the development of the ISE, as of June 2008, the Program Manager has a staff of about 11 government employees and 31 contractors organized into three divisions—technology, policy and planning, and business process.
Interagency support and advice to the Program Manager on the development of the ISE is provided through the ISC. The ISC is chaired by the Program Manager and is currently composed of 16 other members, each a designee of: the Secretaries of State, Treasury, Interior, Transportation, Health and Human Services, Commerce, Energy, and Homeland Security; the Department of Defense’s Office of the Secretary of Defense as well as the Joint Chiefs of Staff; the Attorney General; the Director of National Intelligence; the Director of the Central Intelligence Agency; the Director of the Office of Management and Budget; the Director of the FBI; and the Director of the National Counterterrorism Center. The ISC is an advisory body that, among other things, is expected to advise the President and the Program Manager on development of policies, procedures, guidelines, roles, and standards necessary to establish, implement, and maintain the ISE; work to ensure coordination among the federal agencies participating in the establishment, implementation, and maintenance of the ISE; and identify and recommend solutions to gaps between existing technologies, programs, and systems used by federal agencies for sharing information and the parameters of the proposed information-sharing environment. The ISC and Program Manager are supported by various task and working groups. For example, the Foreign Government Information Sharing Working Group, with coordination and assistance from the PM-ISE, helped develop a checklist of issues to be taken into account in negotiating international agreements. Similarly, an Alerts and Notifications Working Group was established to assist the PM-ISE and ISC members in their efforts to identify the alerts and notifications to be available to federal and non-federal ISE participants.
Another area of roles and responsibilities for the ISE lies with individual federal agencies (including those that belong to the ISC and those that do not), state and local governments, and private sector entities. In accordance with the Intelligence Reform Act, as amended, any federal department or agency using or possessing intelligence or terrorism-related information, operating a system in the ISE, or otherwise participating or expecting to participate in the ISE must fully comply with information-sharing policies, procedures, guidelines, rules, and standards established pursuant to the ISE. The departments and agencies must further ensure the provision of adequate resources for systems and activities supporting operation of and participation in the ISE, ensure full department or agency cooperation in the development of the ISE to implement governmentwide information sharing, and submit, as requested, any reports on the implementation of ISE requirements within the department or agency. State and local governments also play a role in the ISE through, for example, their law enforcement efforts to prevent crimes. As such, these governments are coordinated with and participate in implementing the ISE. Private sector organizations may share terrorism-related information on a voluntary basis through existing or newly developed ISE mechanisms as well. For example, the ISE leverages existing national plans such as the National Infrastructure Protection Plan, which established mechanisms for public and private sector organizations to share critical infrastructure information on 17 critical infrastructure sectors, such as banking and finance, energy, chemical, and transportation.
Initial Steps to Define a Structure and Approach to Implement the ISE Have Been Taken, but Work Remains to Define What the ISE Is to Include, to Design How It Will Operate, and to Outline Measurable Steps and Time Frames to Achieve Implementation and Desired Results

To guide the design and implementation of the ISE, the Program Manager has issued an implementation plan and completed a number of tasks contained in it, and other independent and ongoing information-sharing initiatives have been integrated into the ISE; however, the plan does not include some important elements needed to implement the ISE. The plan provides an initial structure and approach for ISE design and implementation, as well as describes a two-phased approach for implementing the ISE by June 2009. Completed activities include, among other things, development of proposed common terrorism information sharing standards (CTISS) for sharing terrorism-related information. In addition, other federal, state, and local initiatives to enhance information sharing across the government have been or are being incorporated into the ISE.

Based on existing federal guidance as well as our prior work and the work of others, standard practices in program and project management for defining, designing, and executing programs include (1) defining the program’s scope, roles and responsibilities, and specific results to be achieved, along with the individual projects needed to achieve these results, and (2) developing a road map, or program plan, to establish an order for executing specific projects needed to obtain defined programmatic results within a specified time frame and measuring progress and cost in doing so.
While efforts to date may represent the groundwork needed to facilitate terrorism-related information sharing in the future, work remains to define and communicate the scope and desired results to be achieved by the ISE, the specific milestones and time frames for achieving the results, and the individual projects and the sequence of projects needed to achieve these results. Without such elements, the Program Manager risks not being able to effectively manage and implement the ISE.

The Implementation Plan Provides an Initial Structure and Approach for Designing and Implementing the ISE

Issued in November 2006, the implementation plan provides an initial structure and approach for ISE design and implementation and incorporates Presidential Guidelines as well as ISE requirements spelled out in the Intelligence Reform Act. For example, the plan includes steps towards developing standardized procedures for managing, handling, and disseminating sensitive but unclassified information as well as protecting information privacy, as called for in the Presidential Guidelines. For the most part, the plan also maps out a timeline for further defining what information, business processes, and technologies are to be included in the ISE and exploring approaches for implementing the ISE. For example, the plan describes a two-phased approach to implementing the ISE by June 2009, with Phase 1, scheduled for the November 2006 to June 2007 time frame, generally covering set-up activities and building relationships among stakeholders, and Phase 2, beginning July 2007, covering design as well as implementation. This approach is intended to develop the ISE incrementally over a 3-year period. The two phases comprise 89 action items organized by priority areas. These priority areas address important aspects of the ISE, from defining information-sharing capabilities and technologies to protecting privacy and measuring performance (see table 1).
Forty-eight of the action items, all part of Phase 1, were to be completed by June 2007. Of these 48, 18 were completed on time and an additional 15 were completed by March 2008 (see app. I for details). Examples of completed activities covered by these action items include:

The development of proposed common terrorism information sharing standards—a set of standard operating procedures intended to govern how information is to be acquired, accessed, shared, and used within the ISE. According to the Program Manager, the proposed standards document the rules, conditions, guidelines, and characteristics of business processes, production methods, and products supporting terrorism-related information sharing. These standards are intended to address the Presidential Guideline that required the Director of National Intelligence—in coordination with the Secretaries of State, Defense, Homeland Security, and the Attorney General—to develop and issue such standards. These standards are an important early activity because of the structure they are intended to establish for sharing across all ISE stakeholders.

The development of procedures and markings for sensitive but unclassified information to facilitate the exchange of information among ISE participants. We reported in March 2006 that federal agencies use numerous sensitive but unclassified designations that govern how this information must be handled, protected, and controlled and that the confusion caused by these multiple designations creates information-sharing challenges. Therefore, we recommended the issuance of a policy that consolidates sensitive but unclassified designations where possible and addresses their consistent application across agencies.
Consistent with our recommendation, in May 2008 the Administration established controlled unclassified information (CUI) as the single categorical designation throughout the executive branch and established a corresponding CUI framework for designating, marking, safeguarding, and disseminating information designated as CUI. Once implemented, this effort could help improve access to information and improve information sharing.

Establishment of an initial operating capability for the Interagency Threat Assessment and Coordination Group (ITACG). The purpose of the ITACG is to support the efforts of the National Counterterrorism Center to produce federally coordinated terrorism-related information products intended for dissemination to state, local, tribal, and private sector partners through existing channels established by federal departments and agencies. This effort is expected to help address concerns that federally produced terrorism-related information that state, local, tribal, and private sector organizations need for law enforcement and homeland security purposes is sometimes conflicting or not getting to them.

The establishment of a Federal Fusion Center Coordination Group to identify federal resources to support the development and maintenance of a network of state-sponsored fusion centers. Most states and many local governments have created state and local fusion centers to address gaps in information sharing, such as those that occurred on 9/11. These centers are collaborative efforts to detect, prevent, investigate, and respond to criminal and terrorist activities. In October 2007, we issued a report on the characteristics of and challenges for fusion centers and stated that the centers were particularly concerned about sustaining their operations over the long term.
We recommended that this group, through the ISC and the Program Manager, determine and articulate the federal government’s role in, and whether it expects to provide resources to, fusion centers over the long term to help ensure their sustainability. According to ISE program management officials, work is ongoing to (1) complete a baseline capability assessment of designated state and major urban-area fusion centers and (2) develop a coordinated federal support plan that articulates resources being provided to the fusion centers.

The implementation of electronic directory services pages to help identify sources where terrorism information may be located within the federal government, as called for in the Intelligence Reform Act. In meeting this requirement, the electronic directory services are described as a collection of directories that enables ISE users to search for and locate information by accessing the appropriate people, organizations, data, and services related to the counterterrorism mission. The Program Manager expects to develop similar directories for state, local, and tribal stakeholders.

Furthermore, work has been done towards accomplishing some action items that are not yet complete. For example, agencies, with leadership from the PM-ISE, have been working to develop a core training module intended to provide an introduction to the ISE and to further promote the development of a culture of information sharing. The incomplete action items are generally those that require a greater level of stakeholder involvement and, according to officials at the Office of the Program Manager, are taking longer than anticipated to complete, but they will not delay work on Phase 2 items. However, the action items do not address all the activities that must be completed to implement the ISE, according to officials at the Office of the Program Manager, and several activities identified in the implementation plan will not be implemented as identified in the plan.
For example, one activity identified in the plan included the implementation of an electronic directory of services containing green pages in the unclassified domain. As identified in the plan, the green pages were to provide a searchable listing of counterterrorism-related information-sharing resources, systems, and data repositories to support users searching for specific data and capabilities. Further, the pages were to provide system descriptions and technical and operational contact information for gaining access. However, according to officials at the Office of the Program Manager, once aggregated, the information for the green pages could no longer be posted in an unclassified domain. Therefore, the green pages will not be completed for the sensitive but unclassified security domain. Appendix I provides further detail on the status of each Phase 1 action item.

Federal, State, and Local Agency Initiatives Are Being Leveraged to Enhance Information Sharing and Guide Implementation of the ISE

Federal, state, and local agencies have their own initiatives to enhance information sharing across the government that are being leveraged in designing and implementing the ISE. Examples of these initiatives include:

The Director of National Intelligence (DNI) issued a 100-day plan in April 2007, followed by a 500-day plan in September 2007, that focused on integrating the intelligence agencies and their missions in a collaborative manner. One area of focus in these plans is improved information sharing. As a result of this effort, the DNI reported that an implementation plan was developed to standardize identity and access policies across agencies, networks, and systems.
The 100-day plan notes that as it is implemented, its results are intended to be leveraged by the Program Manager as part of the ISE because they are anticipated to improve communication within the intelligence community—one of the five communities that have been designated as critical to the ISE.

The National Counterterrorism Center (NCTC) was established in 2004 in response to recommendations from the 9/11 Commission to operate as a partnership of intelligence agencies so that they can analyze and disseminate national intelligence data. The center works to ensure that intelligence agencies have access to and receive all-source intelligence support needed to execute their counterterrorism plans or perform independent, alternative, and mission-oriented analysis.

As previously noted, in recognition of fusion centers as important mechanisms for information sharing, the federal government—including the Department of Homeland Security (DHS), the Department of Justice (DOJ), and the Program Manager—is taking steps to partner with these centers. Although the centers were created primarily to improve information sharing within a state or local area, the implementation plan identifies the creation of an integrated national network of fusion centers to promote two-way sharing with the federal government, as discussed earlier. Toward developing this network, the Program Manager and stakeholder agencies have sponsored fusion center conferences and provided staff, technical assistance, and funding to these centers.

The FBI’s Terrorist Screening Center (TSC)—established in September 2003—maintains the U.S. government’s consolidated watch list of known or suspected terrorists and sends records from the list to agencies to support terrorism-related screening.
The 9/11 Commission determined that agencies’ failures to share information they had on several of the terrorists were a major factor in the lead-up to the 9/11 attacks, and we recommended in a 2003 report that agencies develop such a consolidated database of terrorist records. In response, the TSC created its consolidated database, which was completed in 2004. The TSC receives the majority of its watch list records from the NCTC, which compiles the information on known or suspected international terrorists from federal agencies. The FBI provides information on known or suspected terrorists who operate within the United States. The TSC consolidates this information and sends it to federal agencies that use it for screening purposes, such as the screening of visa applicants and airline passengers. As noted in the annual report, the founding of the TSC is considered to be a key milestone in establishing the ISE. We and the Inspector General for the Department of Justice have also recommended ways in which agencies can enhance the watch list and agencies’ terrorist-screening processes, such as addressing vulnerabilities and creating an interagency governing entity.

Further Detailing What the ISE Is to Achieve and How It Will Operate Should Better Guide Implementation

The Program Manager, together with the ISE stakeholders, has followed standard practices in program and project management for defining, designing, and executing programs by identifying action items and strategic goals to be achieved in the implementation plan. However, work remains in, among other things, defining and communicating the scope and desired results to be achieved by the ISE, the specific milestones to be attained, and the individual projects—or initiatives—and execution sequence needed to achieve these results and implement the ISE.
Standard practices in program and project management include (1) defining the program scope, roles and responsibilities, and specific results to be achieved, along with the individual projects needed to achieve these results, and (2) developing a road map, or program plan, to establish an order for executing the specific projects needed to obtain defined programmatic results within a specified time frame and to measure progress and cost in doing so.

Further Defining and Communicating Key Elements of the ISE Will Help Address the Limitations of the ISE and Further Describe How the ISE Is to Operate

First, toward defining the scope of the ISE, the implementation plan restates the text of the Intelligence Reform Act, noting that the ISE encompasses “the sharing of terrorism information in a manner consistent with national security and with applicable legal standards relating to privacy and civil liberties” and that the ISE is defined as “an approach that facilitates the sharing of terrorism information.” This is a broad scope, requiring the Program Manager and stakeholders, such as members of the Information Sharing Council, to further define what the ISE, as a program, is to include as well as the scope of what it can address.
Fundamentally, the Program Manager and stakeholders are still trying to fully define the scope and design of the ISE and a more complete set of activities needed to achieve it than those included in the implementation plan, including, for example: all of the terrorism-related information that should be a part of the ISE; what types of terrorism-related information ISE participants have and where such information resides; how the information can be put into a “shared space” so that a cross-section of users can easily access and study information from different agencies; how this access can be provided while still protecting sensitive information and privacy interests; what information systems and networks will be integrated as part of the ISE and how; and methods for motivating agencies to invest in the ISE, holding them accountable for ensuring that all relevant information is made available to ISE stakeholders, and identifying and implementing the specific projects needed to ensure the ISE runs effectively. Further, the plan notes that the Intelligence Reform Act requires that the ISE ensure direct and continuous online electronic access to information and presents several action items intended to identify approaches for sharing information, including the use of technologies. However, the plan does not lay out a set of action items with related milestones for identifying, among other things, needed resources, such as all the information to be made available as part of the ISE, the source of that information, and what limitations exist in making this counterterrorism information available. In accordance with standard practices for program management, these are all elements critical for conveying the scope of what the ISE is to include, garnering an understanding among stakeholders of the needs to be met as part of implementing the ISE, and identifying restrictions in stakeholders’ abilities to do so.
We recognize that defining all of these elements is a complex undertaking, especially because of the numerous ISE stakeholders that need to coordinate and the many existing and often stovepiped or independent methods stakeholders use for meeting their information needs, which often were not developed with sharing in mind. Nevertheless, further defining and communicating key elements of the ISE, such as the scope and expected results, along with a road map for meeting needs in accordance with standard practices for program management, will help, among other things, communicate the breadth and limitations of the ISE as a program and further describe how the ISE is to operate. Second, the plan does not communicate the scope, or parameters, of stakeholder roles and responsibilities in such a way that stakeholders can understand what they will be held accountable for in implementing and operating the ISE. For example, the plan identifies the Program Manager’s role as responsible for information sharing across the government, overseeing the implementation of and managing the ISE, and working together with the ISC, but it does not articulate how the Program Manager has interpreted this role in contrast to that of other stakeholders. For instance, officials at the Office of the Program Manager noted: The Program Manager’s office works on developing or improving existing business processes that affect information sharing among two or more of the five ISE communities but does not focus on processes that are internal to ISE members unless they directly impact the wider ISE. Agencies, therefore, are to define ISE-related business processes and other requirements internal to their organizations, along with how the information will be used, and drive their own analytical efforts.
The Program Manager’s role focuses on determining whether a policy, business process, or legal or technical issue is preventing the sharing of information between two or more communities and on helping to resolve these types of issues, rather than issues that impact sharing within a single community, such as homeland security. This information on the parameters of the Program Manager’s role and responsibilities was not transparently communicated in the plan but is critical for stakeholders, the Congress, and other policy makers to clearly understand, to provide for accountability, and to ensure the ISE is effectively implemented. Without clearly understanding their roles and responsibilities, stakeholders may not adequately prepare for and provide each other the information and services needed to prevent terrorist attacks. According to officials at the Office of the PM-ISE, departments and agencies, not the Program Manager alone, are responsible for defining the ISE’s scope and expected end state. Accordingly, in November 2007 they held a first-time off-site meeting with ISC members to focus on ISE priorities, clarify responsibilities, and emphasize the importance of everyone’s active participation and leadership. Moreover, the meeting was held to rectify any misperceptions and reinforce that all ISE stakeholders are to define the ISE. However, according to officials at the Office of the Program Manager, problems in department and agency participation make it difficult for the ISC to function as an advisory body for ISE implementation. Among other things, officials noted that departments and agencies do not always provide representatives with the authority to speak on behalf of the agency and that inconsistent attendance by ISC representatives has been an issue. Since issuance of the plan, the National Strategy for Information Sharing, issued on October 31, 2007, has, in part, further communicated the scope of the ISE and stakeholder roles.
The strategy reaffirmed that stakeholders at all levels of government, the private sector, and foreign allies play a role in the ISE. The strategy also outlined some responsibilities for ISE stakeholders at the state, local, and tribal government levels. In addition, the strategy further defined the role of the Program Manager as also assisting in the development of ISE standards and practices. However, the strategy did not further clarify the parameters of the Program Manager’s role and what is within the scope of his responsibilities in “managing” the ISE and improving information sharing versus those of other ISE stakeholders. Third, the Program Manager and stakeholders are still in the process of defining the programmatic results to be achieved by the ISE, as well as the associated milestones and projects needed, as standard practices in program management suggest for effective program planning and performance measurement. Existing federal guidance, as well as our work and the work of others, indicates that programs should have overarching strategic goals that state the program’s aim or purpose, define how it will be carried out over a period of time, are outcome oriented, and are expressed so that progress in achieving the goals can be tracked and measured. Moreover, these longer-term strategic goals should be supported by interim performance goals (e.g., annual performance goals) that are also measurable, define the results to be achieved within specified time frames, and provide a way to track annual and overall progress (e.g., through measures and metrics). The implementation plan, as an early step in planning for the ISE, identifies six strategic ISE goals to be achieved. These goals include, for instance, that, to the maximum extent possible, the ISE is to function in a decentralized, distributed, and coordinated manner.
However, the plan does not define what this goal means, set up interim or annual goals and associated time-sensitive milestones to be built upon to achieve the overall goal, or define how agencies will measure and ensure progress in meeting this goal in the interim or overall. Instead, the plan notes that performance measures will be developed at a later date. Moreover, the plan does not present the projects and the sequence in which they need to be implemented to achieve this strategic goal in the near term or in the future, or the specific resources needed and stakeholder responsibilities. Therefore, work remains in developing the road map for achieving this strategic goal. Since the plan’s issuance, officials in the Office of the Program Manager and stakeholders have developed several performance measures and, as of March 2008, were in the process of further refining them. Yet our review of a draft of these performance measures showed that they continue to focus on counting activities accomplished rather than results achieved and do not yet outline the sequence of projects needed to implement the ISE and measurably report on progress in doing so. Further, the plan identifies seven priority areas to be addressed in implementing the ISE. These include, for example, sharing with partners outside the federal government, promoting a culture of information sharing, and establishing ISE operational capabilities. But like the strategic goals, the priority areas represent general tasks and themes to be addressed as part of the ISE and do not define expected results in a measurable form, along with supporting performance goals, measures, and deadlines for achieving the programmatic results. Without these elements, ISE stakeholders may not understand the interim or final ISE they are to achieve, be able to assess progress toward implementing the ISE, or hold one another accountable for their contributions in ensuring that the ISE succeeds.
Fourth, although required by the Intelligence Reform Act, the implementation plan did not provide a budget estimate that identified the incremental costs associated with designing, testing, integrating, deploying, and operating the ISE but indicated that steps to develop a budget estimate would be taken in the future. In part, this is because the ISE was in such an early stage of development that it would be difficult for agencies to know what to cost out for an estimate. Developing a budget estimate, however, is a commonly used tool for effective program management. While the Program Manager has been working with agencies and the Office of Management and Budget to determine the cost of implementing the ISE, officials at the Office of the Program Manager stated that the total cost of the ISE has not yet been accounted for and that attaining an overall estimate may not be achievable. This is because it is difficult for agencies to isolate and separate out what actions they are undertaking solely to implement the ISE versus ongoing operations. We recognize that attaining an accurate and reliable cost estimate for the ISE is a difficult undertaking, complicated further by the fact that stakeholders are still defining the scope of the ISE, results to be attained, and the projects to support it. However, without information on how much the ISE will cost, Congress and stakeholders will be unable to determine whether the expenses associated with the ISE are worth the results attained and in some cases unable to determine what has been accomplished given the expended resources. Toward addressing this cost issue, the PM-ISE, in collaboration with OMB, has since issued program guidance intended to assist in estimating and tracking ISE costs in ISE priority areas, such as suspicious activity reporting, developing ISE shared space, and alerts, warnings, and notifications. 
Finally, while the implementation plan states that Phase 1 will conclude with the development of a detailed plan for implementation, including goals, measures, and targets, a revised plan will not be issued. Instead, officials at the Office of the Program Manager indicated that they consider the implementation plan to be a living document with initiatives identified at the outset of development being refined as needed based on experience. Officials at the Office of the Program Manager acknowledged that the 89 action items contained in the plan do not address all of the activities that must be completed to implement the ISE. This is because at the time the plan was produced, agreement on how the ISE is to function and what it is to include had not been reached among the stakeholders. Work toward reaching these agreements remains ongoing. Therefore, program officials stated that an assessment of the ISE’s progress based on the action items identified in the plan alone would not give a true sense of progress toward a fully functioning and executed ISE. Accordingly, the PM-ISE intends to adjust the plan, beginning with refinements in the next annual report. For example, according to officials at the Office of the Program Manager, to avoid delaying progress, the office plans to revise and update certain implementation plan actions in the course of developing the June 2008 Annual Report. In addition, officials at the Office of the Program Manager stated that based on their experience in Phase 1, they are deleting action items that are no longer valid and updating others to reflect the ISE’s current approach for implementation. Making midcourse corrections to further determine and articulate the end design of the ISE, or at least more accurately specify what is to be achieved in the near term and at various milestones thereafter, is in accordance with standard practices in program and project management. 
However, given the ISE’s many stakeholders and the work that remains to be done in defining the scope of the ISE, the desired results to be achieved, and the supporting projects and milestones, it is important that the revisions, in accordance with standard practices for program management, provide an effective road map to implement the ISE and to measure progress achieved in implementing the ISE and in improving information sharing. Without such a road map, the Program Manager and stakeholders risk not being able to effectively manage and implement the ISE.

An ISE Enterprise Architecture Framework Has Been Developed, but Its Usefulness May Be Limited without Further Defining ISE Results

Subsequent to the implementation plan, in August 2007, the Program Manager issued the ISE Enterprise Architecture Framework Version 1.0 (ISE EAF), a planning document and tool intended to further inform ISE implementation efforts, but its usefulness in guiding the ISE to meet terrorism-related information-sharing needs may be hindered by the lack of defined programmatic results to be achieved. As reported by the Program Manager, the ISE EAF is to help improve information-sharing practices, reduce barriers to sharing, and institutionalize sharing by providing a new construct, or framework, for planning, installing, and operating nationwide information resources within the ISE. Such resources may include, for example, business processes and information technologies. Further, as noted in the EAF, it is to be used to guide the implementation of the ISE, accounting for current capabilities and setting the direction and steps toward the envisioned, or “To-Be,” capabilities. Because the ISE is composed of many organizations, the ISE EAF can be viewed as a collection of independent stakeholder enterprise architectures that were initially designed to support individual missions but are now being leveraged to facilitate terrorism-related information sharing among these organizations.
In doing so, the ISE EAF is to assist in identifying the relationships needed to facilitate terrorism information sharing among these organizations and to serve as a tool for understanding what current capabilities and resources, such as information technology systems, exist, where they reside, and what purposes they serve. Enterprise architectures generally use strategic planning elements to align potential system solutions with program needs. While the ISE EAF is intended to augment organizations’ enterprise architectures for the purpose of sharing terrorism-related information, work remains to determine the ISE’s desired program outcomes, or specific results to be achieved, potentially limiting the effectiveness of the ISE EAF in guiding the ISE to meet terrorism-related information-sharing needs. Unlike agency enterprise architectures, the ISE EAF does not seek to identify, for example, business processes and information flows at an operational level, the level at which organizations determine how specific investments in technologies will be used to support business needs and provide needed information. Instead, the ISE EAF relies on the prerogative of individual departments and agencies to define operational processes and information flows as part of their own enterprise architectures. Officials at the Office of the Program Manager noted that OMB and the ISC agencies were very specific about the level of detail the ISE EAF was to take, noting that the ISE EAF helps inform, but does not direct, how departments and agencies do their work at the operational level—individually or together. However, without further defining the outcomes to be achieved and identifying how individual agencies are to work together to meet ISE information-sharing needs at the level where work is done, the ISE EAF may be limited in its usefulness for improving the sharing of terrorism-related information.
The Program Manager Has Issued the First Annual Report and Is Developing Initial Performance Measures, but Neither Can Yet Be Used to Determine How Much Progress Has Been Made and What Remains

To describe progress in implementing the ISE to date, the Program Manager issued an annual report in September 2007, in response to the Intelligence Reform Act’s requirement for a yearly performance management report. The report highlighted individual accomplishments and included annual performance goals, and the Program Manager has since developed some performance measures, but neither effort shows how much measurable progress has been made toward implementing the ISE and how much remains to be done. In keeping with federal guidance, our work, and the work of others in strategic planning, performance measurement, and program management, the annual report contained four performance goals for 2008. Additionally, some initial performance measures have been developed, but they do not address all aspects of the annual performance goals or strategic goals and do not show how they represent interim milestones to ensure attainment of desired results or outcomes. According to officials at the Office of the Program Manager, these performance measures are currently being refined in consultation with the ISC to provide the framework needed to measure real progress made. We acknowledge that creating such measures is difficult, particularly since the program is still being designed, but until these measures are refined to account for and communicate progress and results, future attempts to measure and report on progress will be hampered.

The Annual Report Cited Accomplishments Made in Implementing the ISE, but Not the Extent of Progress Achieved and Remaining Work

The annual report conveyed individual ISE-related accomplishments as of September 2007 but did not provide Congress and policy makers with information on what portion of the ISE has been completed as a result of this work and what portion remains.
The report lists the preliminary actions taken to prepare for establishing the ISE, such as designation of the Program Manager, the President’s memorandum providing guidelines for the ISE, and submission of the implementation plan to the Congress. The report also cites individual accomplishments that contribute to the ISE, some of which were achieved under the implementation plan—such as establishment of an electronic directory service for users to find contact information for organizations that have counterterrorism missions—and others achieved prior to or separate from efforts to create the ISE. For instance, the report cites several accomplishments attained prior to the December 2004 Intelligence Reform Act and its call for an ISE, including the creation of the National Counterterrorism Center (NCTC) in August 2004 and the establishment of the Terrorist Screening Center (TSC) in 2003. In part because ISE implementation remains in the early stages, the annual report highlighted these discrete accomplishments without putting them in an overall context that showed how much progress has been made and how much remains toward implementing the ISE. While, as previously noted, the implementation plan identified a two-phased approach for implementing the ISE along with 89 action items—the only means presented in the implementation plan for tracking completion of ISE implementation—the report did not provide a one-for-one reporting on the status of these action items as steps for implementing the ISE or identify how much of the implementation had been completed. Thus, the Congress and policy makers do not yet have the information they need to assess the amount and rate of progress, remaining gaps, and the need for any intervening strategies.
Performance Measures Are Being Developed Although They Do Not Yet Address All Aspects of the Annual Performance Goals

In accordance with existing federal guidance as well as our work and the work of others in strategic planning, performance measurement, and program management, programs should have overarching strategic goals that state the program’s aim or purpose, define how it will be carried out over a period of time, are outcome oriented, and are expressed so that progress in achieving the goals can be tracked and measured. Moreover, these longer-term strategic goals should be supported by interim performance goals (e.g., annual performance goals) that are also measurable, define the results to be achieved within specified time frames, and provide a way to measure and track annual and overall progress (e.g., through measures and metrics). Accordingly, the implementation plan contained six overall strategic goals and the annual report contained four annual performance goals for 2008, as shown in tables 2 and 3. While not reflected in the first annual report, the Program Manager and agencies have begun to develop performance measures to improve future reporting on progress in implementing the ISE and information sharing overall, but these measures focus on counting activities accomplished rather than results achieved to show the extent of ISE implementation and attainment of the ISE’s strategic goals. In accordance with our work and federal guidance on strategic planning and performance measurement, the newly developed measures represent an effort to more concretely and quantitatively assess progress in implementing the ISE and improving information sharing.
The performance measures include, for example, the number of ISE organizations with a procedure in place for acquiring and processing reports on suspicious activities potentially related to terrorism, but not how the reports are used or what difference they are making in sharing to help prevent terrorist attacks. Similarly, the measures attempt to assess the creation of a culture of sharing by tabulating the percentage of relevant ISE organizations that have an information-sharing governance body or process in place, but not by measuring the outcome—such as how and to what extent cultural change is being achieved. Indeed, these measures are an important first step in providing quantitative data for assessing progress made in information sharing and help to inform Congress and other stakeholders on specific information-sharing improvements. But taking the measures to the next step—from counting activities to results or outcomes—while difficult, is important to assess the results achieved. The Program Manager and ISE stakeholders have not yet developed measures to address all aspects of the annual performance goals. For example, one 2008 performance goal identified in the annual report is to establish capabilities that allow ISE participants to create and use quality terrorism-related information by improving business processes, developing a common enterprise architecture framework, refining common standards, and instituting effective resource management for governmentwide programs. Based on the description of this performance goal, one ISE performance measure that supports it is the percentage of applicable ISE organizations that have adopted the common terrorism information sharing standards during the past or preceding fiscal year(s). However, performance measures in support of all topics identified in the goal, such as instituting effective resource management for governmentwide programs, have not been developed.
Further, the performance measures are not presented in a way that explains how they represent milestones toward attaining the strategic goals or intended outcomes. According to officials at the Office of the Program Manager, as of March 2008, they are refining their measures in consultation with the ISC to provide an improved framework to measure progress made. Yet, our review of a draft of these performance measures showed that they continue to focus on counting activities accomplished rather than results achieved. We acknowledge that creating such measures is difficult, particularly since the program is still being designed, but until these measures are refined to account for and communicate progress and results, future attempts to measure and report on progress will be hampered. Conclusions Although the Program Manager and stakeholders have made progress in implementing a number of initiatives, successfully implementing the ISE remains a daunting task. While efforts to date may represent the groundwork needed to facilitate terrorism-related information sharing in the future, over 3 years after passage of the Intelligence Reform Act, the ISE is still without a clear definition of the specific results to be achieved as part of the ISE or the projects, stakeholder contributions, and other means needed to achieve these results. The Program Manager, together with the ISE stakeholders, has followed standard practices in program and project management for defining, designing, and executing programs by identifying action items and strategic goals to be achieved in the implementation plan. However, work remains in, among other things, defining and communicating the scope and desired results to be achieved by the ISE, specific milestones to achieve the results, and the individual projects and execution sequence needed to achieve these results and implement the ISE. 
Until this work is complete, further efforts may result in independent contributions to improving information sharing rather than an ISE with improved and coordinated sharing of terrorism-related information among stakeholders, a critical need exposed by the terrorist attacks of September 11. Given that the ISE requires extensive buy-in from stakeholders and the Program Manager is relying on stakeholders to provide technology and other resources to make the ISE work, it is critical to develop a road map for implementing the ISE and improving information sharing that communicates the scope and specific results to be achieved by the ISE, the key milestones and individual projects needed to implement the ISE, needed resources, and stakeholder responsibilities. Without such a road map, the Program Manager risks not being able to effectively manage and implement the ISE. Furthermore, efforts to report on progress to date have provided examples of individual actions taken to improve information sharing but have not yet included an accounting of how far the Program Manager and stakeholder agencies are in achieving an effectively functioning ISE and what remains to be done. By not doing so, stakeholders do not have a measurable way to ensure that the sharing of terrorism-related information has improved and by how much, nor the information needed to understand the resources and time frames required to achieve the intended results of the ISE. Until the Program Manager and stakeholders more fully define the specific results the ISE is to attain and develop a set of measures to assess progress in achieving the goals—including, at a minimum, what has been done and what remains to be accomplished— Congress and stakeholders will not know how far the nation has come in implementing an ISE intended to improve governmentwide information sharing. 
Recommendations To help ensure that the ISE is on a measurable track to success, we are recommending that the Program Manager, with full participation of relevant stakeholders (e.g., agencies and departments on the ISC), take the following two actions: (1) more fully define the scope and specific results to be achieved by the ISE, along with the key milestones and individual projects or initiatives needed to achieve these results, and (2) develop a set of performance measures that show the extent to which the ISE has been implemented and sharing improved—including, at a minimum, what has been and remains to be accomplished—so as to more effectively account for and communicate progress and results. Agency Comments and Our Evaluation We requested comments on a draft of this report from the Secretaries of Defense, Homeland Security, and State, as well as the Attorney General, the Director of National Intelligence, and the Program Manager for the ISE or their designees. In a June 6, 2008, letter, the Program Manager for the ISE provided written comments, which are summarized below and included in their entirety in appendix II. The Program Manager generally agreed with our recommendations to more fully define the scope and results to be achieved by the ISE and develop a comprehensive set of performance measures that show the extent to which the ISE has been implemented and sharing improved. While the Program Manager agreed with our recommendations, he commented that the ISE is a governmentwide transformational effort—emphasizing that the ISE is an evolutionary process—and not a traditional “program.” Therefore, according to the Program Manager, trying to audit this interagency initiative strictly within program parameters presents problems. We agree that the ISE is a governmentwide transformational effort, that it is not a traditional “program,” and that it involves an evolutionary process. 
In fact, our report states that the ISE is not bounded by a single federal agency or component and that it is a broad-based coordination and collaboration effort among various stakeholders. While we agree that the ISE is not a traditional “program,” in that it is not operated and funded by a single department or agency, it is an activity that does receive government funding and can be reviewed using program and project management principles. As such, we based our evaluation of the ISE on a broad set of program and project management criteria, including the Government Performance and Results Act of 1993, related guidance issued by OMB, and our prior work on results-oriented government, program management, and federal coordination and collaboration. Further, while we recognize that approaches to implementing the ISE and improving information sharing may evolve over time as technologies and needs change, calling the ISE an evolutionary process does not exempt it from following the practices outlined in our report. Following these practices will help ensure that reports of progress by the Program Manager on behalf of the ISE at large are based on measures of results achieved toward implementing the ISE—that is, measured based on what the ISE is to be, include, and accomplish in, for example, 3 years—rather than ad hoc claims of progress. With regard to efforts for assessing the ISE’s progress, the Program Manager noted that in the 2007 annual report he introduced a performance management approach and his office has since established a performance baseline—in the fall of 2007—and measured agencies’ progress against this baseline through an assessment performed in the spring of 2008. Our report acknowledges these efforts. 
However, our review of the performance measures developed in support of the performance management approach shows that these measures: (1) focus on counting activities accomplished rather than results achieved to show the extent of ISE implementation and attaining the ISE’s strategic goals and (2) are not presented in a way that explains how the measures represent milestones toward attaining the strategic goals identified in the implementation plan or intended outcomes. In his comments, the Program Manager further noted that the June 2008 annual report, which was not released by the time we issued this report, would provide more current data on performance measurement. However, our review of a draft of the measures to be incorporated in the 2008 report showed that they continue to focus on counting activities accomplished rather than results achieved. Unless the 2008 report corrects these shortfalls and establishes a performance management mechanism whereby short-term annual goals serve as steps for assessing the ISE’s progress towards achieving longer- term strategic goals, it and future reports on progress will fail to provide the Congress and other policy makers the meaningful information needed to understand what progress has been made in attaining the defined strategic results for the ISE and improving information sharing. Finally, the Program Manager said that although the report mentions that one of the challenges of the ISE is interagency attention and priority to ISE initiatives, the report does not make any recommendations in this regard. We agree that interagency collaboration in the ISE is a challenge and individual departments and agencies, not the Program Manager alone, have responsibilities in implementing the ISE. However, to effectively hold these agencies accountable for ISE progress, existing issues identified in our report—such as defining the outcomes to be achieved and defining clear roles and responsibilities—must first be addressed. 
Given the ISE’s many stakeholders and recognizing the Program Manager’s key leadership role for managing the ISE, we maintain that these issues must be addressed by the Program Manager, with full participation of relevant stakeholders (e.g., agencies and departments on the ISC). Without doing so, the Program Manager may continue to face challenges in attaining agency buy-in and holding stakeholders accountable for ISE progress. Officials in the Office of the Program Manager also provided technical comments on the draft that have been incorporated, as appropriate. The Secretaries of Defense, Homeland Security, and State; the Attorney General; and the Director of National Intelligence responded that they did not have any comments on the report. As agreed with your offices, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies of this report to the Program Manager for the ISE, the Director of National Intelligence, and the Secretaries of the Departments of Defense, Homeland Security, Justice, and State; and interested congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact either Eileen Larence at 202-512-8777 or larencee@gao.gov, or David Powner at 202-512-9286 or pownerd@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Status of Phase I Action Items as of March 1, 2008 Table 1 below provides the status of each of the 48 Phase 1 action items identified in the ISE implementation plan as of July 9, 2007 (nine days after their planned completion date), and as of March 1, 2008. 
These action items encompass many areas for development in the ISE, ranging from activities such as identifying capabilities and technology to privacy protection and performance measures. As the table indicates, based on our analysis of status information reported by the Program Manager, at the end of phase one’s scheduled completion, 18 of 48 action items had been completed and 30 remained incomplete. Eight months later, 33 of 48 action items had been completed, with 15 remaining incomplete. In determining the status of the action items, we reviewed documentation provided by the Program Manager, but did not evaluate the effectiveness of the actions taken. Appendix II: Comments from Office of the Program Manager for the Information Sharing Environment Appendix III: GAO Contacts and Acknowledgments GAO Contacts Acknowledgments In addition to the contact named above, Susan H. Quinlan, Assistant Director; Richard Ascarate; Jason Barnosky; Amy Bernstein; Joseph Cruz; Thomas Lombardi; Lori Martinez; and Marcia Washington made key contributions to this report.
The attacks on 9/11 underscored the federal government's need to facilitate terrorism-related information sharing among government, private sector, and foreign stakeholders. In response, the Intelligence Reform and Terrorism Prevention Act of 2004 mandated the creation of the Information Sharing Environment (ISE), which is described as an approach for the sharing of terrorism-related information. A presidentially appointed Program Manager oversees ISE development with assistance from the Information Sharing Council (ISC), a forum for 16 information sharing officials from federal agencies and departments. GAO was asked to report on (1) what actions have been taken to guide the design and implementation of the ISE and (2) what efforts have been made to report on progress in implementing the ISE. To perform this work, GAO reviewed related laws, directives, guidance, and ISE planning and reporting documents and interviewed officials from the Program Manager's office and key agencies who serve on the ISC. To guide ISE design and implementation, the Program Manager has issued an implementation plan, completed a number of tasks therein, and included other information sharing initiatives in the ISE, but the plan does not include some important elements to implement the ISE. The plan provides an initial structure and approach for ISE design and implementation. For example, the plan includes steps toward protecting information privacy and describes a two-phased approach for implementing the ISE by June 2009 consisting of 89 action items. Completed activities include, among others, development of proposed common terrorism information sharing standards. In addition, other federal, state, and local initiatives to enhance information sharing across the government are being incorporated in the ISE. 
These initiatives include partnering with state and local area fusion centers--created primarily to improve information sharing within a state or local area--to develop a national network of these centers. Nevertheless, Office of the Program Manager officials said that the 89 action items do not address all the activities that must be completed to implement the ISE. Work remains, including defining and communicating the ISE's scope, such as determining all terrorism-related information that should be part of the ISE, and communicating that information to stakeholders involved in the development of the ISE. In addition, the desired results to be achieved by the ISE, that is, how information sharing is to be improved, the specific milestones, and the individual projects--or initiatives--to achieve these results have not yet been determined. Defining the scope of a program, desired results, milestones, and projects are essential in providing a road map to effectively implement a program. Without such a road map, the Program Manager and stakeholders risk not being able to effectively manage implementation of the ISE. To report on progress in implementing the ISE, the Program Manager issued an annual report in September 2007, which highlighted individual accomplishments and included several annual performance goals, and has since begun to develop performance measures, but neither effort provides for an assessment of overall progress in ISE implementation and of how much work remains. Some individual accomplishments contributing to the ISE occurred under the implementation plan; others, prior to and separate from ISE creation efforts. In keeping with federal guidance, GAO's work, and the work of others in strategic planning, performance measurement, and program management, the implementation plan contained six strategic goals and the annual report four performance goals for 2008. 
Also, the Program Manager has begun to develop some performance measures, but they focus on counting activities accomplished rather than results achieved. For example, the measures include the number of ISE organizations with a procedure in place for suspicious activity reports, but not how the reports are used and what difference they are making in sharing to help prevent terrorist attacks. GAO acknowledges that creating such measures is difficult, particularly since the program is still being designed, but until these measures are refined, future attempts to measure and report on progress will be hampered.
Background When employees are hired, they must complete an IRS Form W-4 (Employee’s Withholding Allowance Certificate) so that the employer can withhold the correct federal income tax from their pay. Shortly after the end of each calendar year, every employer who pays employees remuneration for services must furnish a wage statement to each employee and submit this information to SSA. Wage statements include the employee’s name, SSN, and the amount of wages earned, among other things; generally, employers use the employee names and SSNs from the Forms W-4 to prepare them. Wage statements are the critical documents used by SSA for assigning social security benefits. When an employer submits a wage statement to SSA, SSA posts the employee’s earnings to the employee’s earnings record, which is then used to determine an individual’s eligibility for, and amount of, retirement, disability, or survivor benefits administered by the agency. When the wage statement contains a name/SSN combination that does not match SSA’s records, SSA conducts a series of procedures to try to match them to its records. If these procedures do not result in a match, then SSA is unable to post the employee’s earnings to the employee’s earnings record and, instead, the wage statement and the associated wages are placed in the Earnings Suspense File (ESF). The ESF represents the cumulative amount of wage statements for 1937 through 2003 for which SSA does not have a matching name/SSN. Through tax year 2001, which is the most recent year for which complete data are available, the ESF included over 244 million records involving wages of over $421 billion. Since 1990, this file has been increasing by an average of 5 million records and at least $17 billion annually. These records and their associated earnings are only removed from the ESF when the wages can be matched and posted to an individual’s earnings record. 
Items can be removed through SSA efforts to match names and SSNs or when a person provides an accurate name/SSN combination and information that proves he or she earned the wages. If they are not removed, employees will not receive benefits based on these wages. Besides affecting SSA’s workload and burdening individuals with trying to work with SSA to prove they earned certain wages, errors in SSNs on wage statements also affect IRS and individual taxpayers. For example, an individual may give an employer someone else’s name and SSN. The employer then files a wage statement with the SSN. The person to whom the SSN belongs will not claim the wages on his or her tax return. IRS will then send a notice to the individual about the unclaimed wages. IRS and the individual must then work together to resolve the problem, adding to both IRS’s workload and the burden on the taxpayer to comply with the tax laws. In addition, IRS must try to identify which individual actually earned the wages. Legislation Authorizes IRS to Penalize Employers Who File Inaccurate SSNs Internal Revenue Code Section 6721 authorizes IRS to penalize employers for failure to file an information return by the required filing date, failure to include complete information, and failure to include correct information, including accurate SSNs. The legislation also places limits on the penalties, including a waiver when an employer’s failure to provide complete and accurate information was due to reasonable cause and not willful neglect. The penalties were authorized as a tool to help IRS ensure that employers and others file information returns with complete and accurate information. Prior to the enactment of the Tax Reform Act of 1986, IRS was authorized to assess penalties for failure to file information returns, including wage statements filed by employers. 
The 1986 Act authorized IRS to assess penalties for failure to include all required information on information returns and for failure to include correct information, including accurate SSNs. The 1986 Act also established the amount of a penalty, $5 per information return, limited those penalties to a maximum of $20,000 per filer in any calendar year except in cases of intentional disregard, and included the reasonable cause waiver. OBRA of 1989 increased the penalty amounts to $15 to $50 for each nonfiled, incomplete, or inaccurate information return, with the amount depending on whether and when corrections were made, and included three provisions that limit penalties assessed against employers. The first provision, which was included in previous legislation, is the reasonable cause waiver. IRS regulations, which are discussed later in this report, provide guidance for determining if someone acted with reasonable cause rather than with willful neglect. The second provision limits total penalties assessed against an employer to a maximum dollar amount in any calendar year. The maximums range from $25,000 to $100,000 for small businesses and from $75,000 to $250,000 for large businesses, with the penalty amount depending on whether and when corrections are made. The maximum penalty amounts, which are presented in table 1, are the total maximum penalty amounts that can be assessed against an entity regardless of how many information returns it must file and for any of the three reasons for which a penalty can be assessed. For example, if a large business could be assessed a $200,000 penalty for failing to file dividend information returns and a $100,000 penalty for failing to file wage statements with accurate SSNs, the maximum penalty that could be assessed is not $300,000 but rather $250,000. These penalty amounts are for unintentional failures. 
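The cap arithmetic in the example above can be sketched as follows. This is an illustrative sketch only, not IRS computation logic: the function name is hypothetical, and the cap amounts shown are the top-tier figures cited above for uncorrected, unintentional failures.

```python
# Illustrative sketch (not IRS logic) of the OBRA 1989 annual cap described
# above: penalties across all failure types are summed, then limited to a
# single calendar-year maximum based on the filer's size.

def capped_annual_penalty(category_penalties, large_business=True):
    """Total annual penalty after applying the single calendar-year cap."""
    # Top-tier cap amounts from the report: $250,000 (large business),
    # $100,000 (small business), assuming no corrections were made.
    annual_cap = 250_000 if large_business else 100_000
    return min(sum(category_penalties), annual_cap)

# The report's example: $200,000 (dividend returns) plus $100,000 (wage
# statements) is capped at $250,000 for a large business, not $300,000.
print(capped_annual_penalty([200_000, 100_000]))  # 250000
```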
The third provision limiting penalties is the “de minimis provision.” Under this provision, penalties will not be assessed for incomplete or inaccurate information returns if they are corrected on or before August 1 of the calendar year in which the required filing date occurs. The number of information returns this provision applies to shall not exceed the greater of 10 returns or one-half of 1 percent of the total number of returns to be filed. IRS Does Not Have a Dedicated Compliance Program to Penalize Employers Who File Wage Statements with Inaccurate SSNs According to IRS officials, IRS has the capability to identify employers who file wage statements with inaccurate SSNs but does not have a dedicated compliance program for penalizing them. However, at times, this is done within the context of an employer’s employment tax examination. IRS’s regulations for implementing the penalty provisions include the steps an employer can take to demonstrate that any filing of wage statements with inaccurate SSNs was due to reasonable cause and not willful neglect. In addition, IRS has been conducting an assessment of 100 “egregious” employers who filed large numbers of wage statements with inaccurate SSNs or had a high rate of such filings to determine whether and how to implement a penalty program that would create incentives for employers to improve the accuracy of SSNs included on wage statements. No Dedicated Compliance Program for Penalizing Employers According to IRS officials, although IRS has the capability to identify employers who file wage statements with inaccurate SSNs, it does not have a dedicated compliance program for penalizing them. In September 2002, TIGTA also reported that IRS did not have such a compliance program and recommended that IRS initiate a regularly scheduled program for proposing penalties when employers file wage statements with inaccurate SSNs. 
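The de minimis threshold described above, the greater of 10 returns or one-half of 1 percent of the total returns to be filed, reduces to a one-line calculation. The sketch below is illustrative only; the function name is hypothetical.

```python
# Illustrative sketch of the de minimis threshold: corrected returns escape
# penalty up to the greater of 10 returns or one-half of 1 percent of the
# total number of returns the filer must submit.

def de_minimis_limit(total_returns_filed):
    """Maximum number of corrected returns exempt from penalty."""
    return max(10, total_returns_filed // 200)  # // 200 is 0.5 percent

print(de_minimis_limit(1_000))    # 10 (0.5 percent is only 5, so the floor of 10 applies)
print(de_minimis_limit(100_000))  # 500
```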
In response to the TIGTA report, IRS management said that IRS did not develop a regularly scheduled program because of concerns about the level of resources needed to administer the program and the complexity of administering such a program in comparison to the benefits that it would have for tax administration. Management added that they are developing a program for identifying and penalizing employers when warranted. According to IRS officials, when the legislation was passed to authorize penalties for failure to file complete and accurate information returns, Congress was primarily concerned with Form 1099 information returns, such as for interest income and dividends and distributions, rather than wage statements because of the possible effect on tax revenues. IRS officials said that the non-reporting of Form 1099 income by taxpayers had a greater effect on tax revenues since no taxes had been withheld from income reported on Forms 1099, whereas, in most cases, some taxes had been withheld from income reported on wage statements. IRS started assessing penalties related to some Forms 1099 in the early 1990s and has gradually added other types of Forms 1099 to the penalty program. Now IRS believes that compliance associated with Forms 1099 has shown improvement allowing the agency to focus on developing a penalty program for employers that file wage statements with inaccurate SSNs. Employment Tax Examinations Can Be Used to Assess Penalties Employment tax examinations may result in penalties for filing wage statements with inaccurate SSNs. Employment tax examinations cover areas such as determining if social security and Medicare taxes were properly withheld, whether persons were properly classified as either employees or as independent contractors, and whether the employers provided the appropriate information returns to IRS and their employees. 
According to the Director of the SBSE Office of Employment Tax, IRS staff may review the accuracy of SSNs on wage statements to determine if penalties should be proposed when conducting an employment tax examination. Even though the filing of wage statements with inaccurate SSNs is not a criterion used to select employers for an employment tax examination, if IRS has an indication that there is a problem with inaccurate SSNs, the tax examiner is to draw a sample of wage statements to determine if the SSNs on them are the same as the SSNs on the Forms W-4. If they are the same, then no penalty is proposed. If any differ, the employer would be instructed to make annual solicitations, as described later in this report, or face an IRS penalty. LMSB employment tax examination officials said that they do not routinely compare SSNs on wage statements to the SSNs on Forms W-4; however, an examination could result in the assessment of a penalty. IRS Regulations Require Employers to Solicit Employees’ SSNs The IRS regulations that implement the penalty provisions require employers to solicit SSNs from their employees but do not require them to verify the numbers. Draft temporary regulations for assessing penalties for failing to provide complete and correct information on information returns related to the Tax Reform Act of 1986 were circulating within the IRS as early as December 1986. IRS issued temporary regulations in September 1987 that included a solicitation for public comments. IRS received public comments on the draft regulations and issued final regulations in April 1991. Although OBRA of 1989 subsequently revised the penalty provisions, these initial regulations did not reflect any revisions relative to OBRA even though they were issued almost a year and a half after OBRA was passed. 
IRS issued temporary regulations for OBRA in February 1991, and after receiving public comments and holding a public hearing, issued final regulations for the revised penalty provisions in December 1991. The regulations include guidance for implementing the reasonable cause waiver included in the legislation. IRS officials said that when they developed this guidance, they tried to balance encouraging voluntary compliance with the law with the burden placed on filers for complying. Under the waiver, filers of information returns will not be assessed a penalty for any of the three types of failures included in the legislation, including failing to file a wage statement with an accurate SSN, if they can show that the failure was due to reasonable cause and not willful neglect. To do so, they must demonstrate either that there were significant mitigating factors for the failure or that the failure arose from events beyond their control. In addition, they must demonstrate that they acted in a responsible manner both before and after filing the information returns, including wage statements. Significant Mitigating Factors and Events Beyond the Filer’s Control The regulations include examples of significant mitigating factors or events beyond the filer’s control. Examples of significant mitigating factors are that the filer was never required to file this type of information return before or the filer has an established history of complying with the information return reporting requirements. Examples of events beyond the filer’s control are the unavailability of business records due to unforeseen conditions, such as a fire, that damaged business records or actions of other parties such as IRS, the filer’s agent, or the payee of the item included in the information return. In the case of the latter, the filer must show that the failure resulted from the failure of the payee to provide the information to the filer or that the payee provided incorrect information to the filer. 
Thus, an employer may claim that the failure to include correct information on a wage statement was due to the actions of the payee if the employer received an SSN from the employee, relied on that number in good faith, and used it on a wage statement. Acting in a Responsible Manner Acting in a responsible manner means that the filer exercised the same degree of care that a reasonably prudent person would use under the circumstances both before and after the failure. It also includes taking steps to avoid the failure, such as attempting to prevent it if it was foreseeable, acting to remove the cause of the failure, or correcting the failure as promptly as possible. When a penalty is proposed for an incorrect SSN, filers must then comply with special rules to demonstrate they acted in a responsible manner. In the case of an employer who files wage statements, the employer needs to make an initial solicitation for an employee’s SSN at the time the employee begins work. The employee’s SSN is documented on IRS Form W-4. Following the initial solicitation, no additional solicitation for the SSN is required unless the IRS notifies the employer that the employee’s reported SSN is incorrect. Following the receipt of such notice, the employer may be required to make up to two annual solicitations for the correct SSN. The first annual solicitation of an employee’s SSN is required only if the IRS notifies the employer that the employee’s SSN is incorrect and if the employer’s records contain that incorrect SSN at the time it receives the notice. The solicitation for the SSN must be made by December 31 of the year in which the notice was received (or by January 31 of the following year if notified in December) and may be made in person or by mail or telephone. The solicitation is necessary only if there will be reportable payments to that employee in that year. 
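The first-annual-solicitation deadline described above follows a simple date rule. The sketch below is a hypothetical illustration of that rule with an invented function name, not IRS-published logic.

```python
# Illustrative sketch of the deadline rule described above: the first annual
# solicitation is due by December 31 of the year the IRS notice is received,
# or by January 31 of the following year if the notice arrives in December.

from datetime import date

def first_solicitation_deadline(notice_date):
    """Deadline for the first annual SSN solicitation after an IRS notice."""
    if notice_date.month == 12:
        return date(notice_date.year + 1, 1, 31)
    return date(notice_date.year, 12, 31)

print(first_solicitation_deadline(date(2003, 6, 15)))  # 2003-12-31
print(first_solicitation_deadline(date(2003, 12, 5)))  # 2004-01-31
```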
A second annual solicitation would be required if the employer receives an IRS notice of an incorrect SSN for the employee in any subsequent year. An employer may rely on the SSN that an employee provided in response to a solicitation, and the employer may use that SSN in filing a wage statement for that employee. If an employer receives an IRS notice of an incorrect SSN provided by an employee after having made two annual solicitations and reporting the SSN provided by the employee, the employer is not required to make further solicitations. Acting with Intentional Disregard The rules for determining if an employer acted responsibly do not apply if the employer acted with intentional disregard for the requirement to file an accurate SSN. Intentional disregard occurs when the employer knows, or should know, of the regulation to file accurate SSNs and chooses to ignore the requirement. Indications of intentional disregard include an employer’s failure to promptly correct the error upon discovery or a pattern of failures. IRS Is Reviewing 100 Employers to Determine Whether and How to Implement a Penalty Program IRS is determining whether and how to develop a penalty program for assessing penalties for filing wage statements with incorrect SSNs. The IRS Program Director for Penalties and Interest said that the program will likely have multiple components with the primary focus on the most “egregious” filers; however, specific details have not been developed. As part of the process to develop the penalty program, IRS is conducting, but has not finished, a review of 100 employers who are “egregious” filers of wage statements with incorrect SSNs, and the results of these reviews will be used to help develop the penalty program. 
The intent of this review is to gain insight and data to help IRS determine whether and how to implement a penalty program that would create incentives for employers to improve the accuracy of SSNs filed on wage statements while avoiding unnecessary burden and hardship. IRS selected the 100 employers from two lists of the most "egregious" filers provided by SSA for tax year 2000. SSA developed the lists from data it maintains on the number of wage statements filed by employers with SSN and name combinations that did not match SSA's records. IRS selected 50 employers from a list of the 100 employers that filed the most wage statements with inaccurate SSNs. These employers were among the nation's largest, and the number of wage statements they filed with incorrect SSNs was largely a function of their high overall filing volumes. The second 50 employers came from a list of 100 employers who filed the highest percentage of wage statements with inaccurate SSNs. These employers generally issued fewer than 1,000 wage statements but had error rates of 93 percent or higher. As part of the review, IRS officials made on-site visits, interviewed employers, and examined various documents. The objectives were to determine the processes employers used to obtain SSNs to report on wage statements, whether the employers attempted to verify SSNs, how they dealt with letters from SSA indicating that some employees' names and SSNs did not match SSA's records, the barriers to obtaining correct SSNs, and whether any penalties should be assessed. IRS's LMSB Division has completed the review of the larger employers and summarized the results in an April 2003 report. As of May 2004, IRS's SBSE Division had reviewed 28 of the 50 employers who had filed the highest percentage of wage statements with inaccurate SSNs. Of the remaining 22 employers, 21 are out of business and one could not be located.
In early 2004, IRS requested that SSA provide another list of 50 "egregious" employers from which IRS will select 22 for review. As of May 2004, IRS had not received the new list. When it does, officials estimate it will take 4 to 5 months to complete the additional reviews. At that time, SBSE officials will prepare a report documenting the results of all 100 reviews. When the entire review has been completed, options will be developed for the IRS Commissioner to consider that would increase the likelihood that employers file accurate SSNs on wage statements.

Although IRS's Regulations Meet Statutory Requirements, It Is Unlikely Employers Will Be Penalized; IRS Will Consider Changes

IRS's regulations for penalizing employers who file wage statements with inaccurate SSNs meet the statutory requirements; however, under current regulations, employers are unlikely to be penalized for filing wage statements with inaccurate SSNs, and IRS has no record of ever penalizing an employer for inaccurate SSNs on wage statements. Based on its review of the 50 large and mid-size businesses included in IRS's review of 100 "egregious" employers, LMSB officials and employers developed recommendations intended to reduce or eliminate the filing of wage statements with inaccurate SSNs. IRS officials then said that they would consider a range of changes to improve the accuracy of SSNs on wage statements. Regarding the part of the review related to small and self-employed businesses, IRS's SBSE Division concluded that, for the employers reviewed to date, many of the employees for whom inaccurate SSNs had been filed appeared to be aliens. If IRS takes steps to improve the accuracy of SSNs reported on wage statements, there could be implications for thousands of illegal immigrants and their employers and, in turn, for other federal programs that deal specifically with federal immigration policy.
Consequently, other federal agencies, including DHS, could be affected by revisions to IRS's approach for improving the accuracy of SSNs on wage statements. The Internal Revenue Code requires that (1) employers be financially penalized if they fail to file accurate SSNs on wage statements and (2) penalties be waived if reasonable cause exists for the failure. The regulations that IRS has established to implement the laws contain both of these dimensions; therefore, the regulations are consistent with the legislation. However, the criteria for the reasonable cause waiver included in the regulations are easy to meet, making penalization of employers very unlikely. This reduces or eliminates the potential effectiveness of penalties in encouraging more accurate filing of SSNs on wage statements. As previously described, to qualify for a reasonable cause waiver, an employer is responsible only for soliciting an SSN from each employee from one to three times, depending on when the employer has been contacted by the IRS and told that the SSN provided on the wage statement is inaccurate. The current regulations do not hold employers to more stringent requirements, such as verifying the accuracy of an SSN provided by an employee or ensuring that an employee provides an alternative SSN when the reported SSN was determined to be inaccurate. If employers voluntarily wish to verify SSNs, their options are limited. IRS maintains a system for verifying the taxpayer identification numbers in its records, of which individual SSNs are one type. However, IRS is prohibited by law from verifying SSNs for employers; pending legislation would allow IRS to do so. Employers can voluntarily use SSA's Employee Verification Service (EVS) prior to filing wage statements to determine if SSNs are valid.
Under EVS, employers can call an 800 number to verify SSNs for up to five employees, submit a paper request for up to 50 employees, or submit requests for larger numbers using tapes or discs. Paper requests can take up to 30 days to process. SSA is also pilot-testing another system, the Social Security Number Verification Service (SSNVS), for voluntary use by employers to verify SSNs and, according to SSA officials, hopes to roll out the program nationwide in January 2005. When using SSNVS, employers can verify up to 10 SSNs via computer and receive an instant response; for larger requests, they can upload a file to SSA and receive a response within 1 business day. Our attempt to determine if IRS has ever penalized employers for filing a wage statement with an inaccurate SSN found no evidence of any penalties being assessed. IRS does not have any information documenting that any employer has ever been assessed such a penalty. IRS collects data on the number and dollar amount of penalties assessed for all information returns, including wage statements, that are filed late; filed with missing or inaccurate taxpayer identification numbers, which include SSNs; or not filed on magnetic media when required. However, IRS does not separately collect the same statistics for wage statements alone. According to the Program Director for Penalties and Interest, he has requested that this information be collected in the future and hopes that changes can be made to IRS's computer systems so that IRS can begin collecting this information for wage statements by the end of fiscal year 2005. IRS's recent review of the 100 "egregious" employers provides evidence that, under the current criteria for meeting the reasonable cause waiver, it is improbable that employers would be penalized for filing wage statements with inaccurate SSNs. As of May 2004, IRS had concluded that none of the 78 employers examined should be assessed a penalty.
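The EVS and SSNVS batch limits described earlier lend themselves to a simple channel-selection sketch. This is a hypothetical illustration based only on the limits cited above, not SSA software; the function names are invented for the example.

```python
def evs_channel(n_ssns: int) -> str:
    """Choose an SSA Employee Verification Service (EVS) channel by batch
    size, using the limits described above (illustrative only)."""
    if n_ssns <= 5:
        return "telephone (800 number)"
    if n_ssns <= 50:
        return "paper request (up to 30 days to process)"
    return "tape or disc submission"

def ssnvs_channel(n_ssns: int) -> str:
    """Choose an SSNVS channel: up to 10 SSNs online with an instant
    response, larger requests via file upload (about 1 business day)."""
    if n_ssns <= 10:
        return "online (instant response)"
    return "file upload (response within 1 business day)"
```

An employer verifying, say, 12 new hires would exceed both the telephone and online limits and would use a paper request under EVS or a file upload under SSNVS.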
This includes 50 employers reviewed by the LMSB Division, for which IRS officials could not find any evidence that they had ever been penalized. This does not mean, however, that the employers filed wage statements free of inaccurate SSNs. It simply means that they solicited and used the SSNs provided by the employees to prepare the wage statements, regardless of whether the SSNs provided were accurate. Again, under the regulations implementing the reasonable cause waiver, as long as employers solicit SSNs from employees and use those SSNs to prepare wage statements, they will meet the reasonable cause waiver and will not be penalized. The LMSB Division prepared a report in April 2003 summarizing the results of its review of the 50 "egregious" employers who filed large numbers of wage statements with inaccurate SSNs. The report includes five barriers, identified by employers, to filing wage statements with accurate SSNs and 13 recommendations that LMSB officials and employers think may reduce or eliminate the filing of wage statements with inaccurate SSNs. The recommendations fall into four general areas: legislative and regulatory changes, SSA changes, employer changes, and IRS changes. All would represent substantive changes to the current requirements for employers. For example, one recommendation is that employers use SSA's EVS to verify the SSN of all new hires. Another is to require employers to review and obtain a copy of the employee's social security card prior to hiring. A third is that, if re-solicitations of SSNs are required, employers verify the new SSNs provided by employees using SSA's EVS. These initiatives would require legislative and regulatory changes and might result in changes to the reasonable cause standard. The division also pointed out that several employers have taken innovative steps to address the compliance problem.
One such initiative is using a service that conducts a background check on each new employee, including verifying the SSN, for a minimal fee. Assessing the viability of these recommendations was not within the scope of our evaluation. Regarding the small and self-employed businesses review, after completing the reviews of 28 employers, SBSE preliminarily concluded that (a) employers' acceptance of an employee's information without any verification is common practice, (b) employers have no responsibility to verify the accuracy of an employee's SSN, and (c) many of the employees whose SSNs were identified in the review as inaccurate appeared to be aliens. Although the options for improving the accuracy of SSNs on wage statements have not yet been developed, IRS officials said they would consider a range of changes including, as LMSB's report suggests, requiring employers to verify SSNs provided by employees on Forms W-4. In judging how to proceed, the officials said that a number of factors would need to be weighed. For example, the regulations for information returns currently apply uniformly to all providers of such returns. To the extent that addressing the problem of inaccurate SSNs on wage statements might adversely affect other providers of information returns, IRS may need to consider separate regulations applying only to entities in their capacity as employers. IRS officials indicated that one initiative they have discussed is changing the criteria employers must meet to qualify for the reasonable cause waiver if they file inaccurate SSNs on wage statements. They added that the Department of the Treasury would have to agree to this change. In addition, in crafting changes to encourage more accurate wage statements, officials said that they would need to assess any increased burden placed upon employers, such as increases in cost.
The Commissioner of Internal Revenue, in testimony before Congress, has raised the issue of whether a more rigorous program could drive employers into underreporting wages paid to employees, which could result in more employees not participating in the federal tax system. This would be contrary to what IRS sees as part of its basic mission: to assure that all appropriate taxpayers are part of the system. In addition, the Commissioner recently testified that any increase in IRS's compliance activities in this area may place an increased demand on resources and that "absent added funding for such activities, this would likely come at the expense of other compliance activities." Finally, some options for addressing the issue of inaccurate SSNs may require legislative changes, such as authorizing IRS to make its taxpayer identification number verification system available to employers for purposes of verifying employees' SSNs. Although IRS officials did not focus on them in discussing their review with us, certain other issues likely would be relevant when considering changes in employers' responsibilities. SBSE's preliminary conclusion that many of the employees whose SSNs were identified as invalid in its review appeared to be aliens raises the question of whether addressing the accuracy of SSNs has immigration-related implications for employers and other federal agencies. Even though incorrect SSNs may result from such things as inadvertent errors on employees' or employers' part or from individuals not reporting a name change to SSA, some portion of inaccurate SSNs likely is due to illegal aliens using false or stolen SSNs when completing Forms W-4 for employers. TIGTA estimated, for example, that 353,000 taxpayers could be identified as illegal aliens from IRS tax year 2000 data and that, of these, at least 265,000 had wage statements with invalid SSNs.
Since significant numbers of individuals with inaccurate SSNs on Forms W-4 may be illegal aliens, some employers have raised concerns about whether being required to verify employees' SSNs would trigger obligations to other federal agencies, such as DHS. In general, if questions arise about an employee's work authorization, DHS guidance provides that an employer might need to take certain actions, such as giving the employee another opportunity to provide proper DHS Form I-9 documentation. An earlier pilot test of a voluntary employment verification system illustrates other issues that may affect employers and other federal agencies if IRS requires SSN verification. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 directed the Attorney General to conduct a pilot program of an employment verification system intended to prevent employers from hiring unauthorized aliens. The voluntary participants in this system agreed to electronically check newly hired employees' SSNs against SSA's SSN records and, if the SSNs did not match and the employees were unable to resolve the mismatch, to terminate their employment. The pilot program also includes the employer contacting DHS to verify the employment eligibility of a potential employee. The basic pilot of the program is operating in six states (California, Florida, Illinois, Nebraska, New York, and Texas) and can be expanded to all 50 states by December 2004. In evaluating the basic pilot program, researchers found that the system confirmed that 87 percent of the employees were authorized to work but that some employers took adverse actions against employees on the basis of a tentative nonconfirmation (e.g., the employee's name and SSN did not match SSA's records), although they were prohibited from doing so under the program. Federal law protects certain individuals from unfair immigration-related employment practices of a U.S. employer.
The federal government entity charged with oversight of the laws protecting against unfair immigration-related employment practices is the Office of Special Counsel for Immigration Related Unfair Employment Practices, which is part of the Civil Rights Division of the Department of Justice. IRS officials are aware that changes to the reasonable cause standard could have consequences for other federal agencies. For example, when TIGTA recommended that IRS initiate a regularly scheduled penalty program to penalize employers for submitting inaccurate SSNs on wage statements, IRS was contacted by the Departments of Labor, Justice, and Homeland Security seeking information. Regarding coordinating any initiatives with other federal agencies, IRS officials said that they would take other agencies' views into account, most likely through those agencies' formal comments on any regulatory proposal IRS and Treasury would make.

IRS Plans to Design an Evaluation of Its Penalty Program if a Program Is Adopted

If IRS implements a program to assess penalties against employers who file wage statements with inaccurate SSNs, IRS plans to evaluate the program's effectiveness in curtailing the filing of such wage statements. IRS officials said they will design the evaluation after a penalty program is adopted. In its September 2002 report, TIGTA recommended that the Program Director for Penalties and Interest develop a methodology for monitoring and analyzing the results of the penalty program and that the data collected include the number and amount of penalties proposed, assessed, waived (and the reason for the waiver), and collected. TIGTA also said the data should include the number of wage statements corrected in response to penalty actions.
Developing and implementing a design for evaluating whatever program IRS moves ahead with will be important so that IRS can understand how well the new program succeeds in increasing SSN accuracy on wage statements while minimizing potential adverse effects, such as decreased participation of wage earners in the tax system.

Conclusions

Inaccurate SSNs on wage statements contribute to growth in the SSA Earnings Suspense File, increase IRS's workload to ensure that wages are properly identified for the individuals earning them, and burden individuals who must work with SSA and IRS to resolve disputes that may affect their tax obligations and social security benefits. Under the Tax Reform Act of 1986 and OBRA of 1989, Congress provided IRS authority to penalize employers who submit wage statements with inaccurate SSNs to help assure the accurate reporting of wages. Both acts emphasize the use of penalties as a tool to help ensure that employers file information returns with complete and accurate information but also include a reasonable cause provision under which penalties can be waived. However, the reasonable cause standard is not difficult for employers to meet. IRS has no record of penalties having been levied against employers, and none have been levied to date as part of IRS's study of employers with substantial numbers or percentages of wage statements with inaccurate SSNs. Because little or no likelihood exists that penalties will be levied, the potential of the statutory penalty tool to encourage greater accuracy of wage statements is compromised. Further, because employers have no responsibility to verify SSNs, opportunities are lost to detect and correct SSN inaccuracies before SSA and IRS must react to them and possibly consider penalizing employers.
Accordingly, thoroughly exploring options to change the reasonable cause standard, including possibly requiring that employers take steps to verify the accuracy of SSNs provided by employees, must be a critical part of IRS's consideration of how to make its penalty program more effective. Nevertheless, because a change to the reasonable cause standard could have consequences for IRS, employers, and other federal agencies, determining whether and how to modify the standard will require balancing numerous, sometimes conflicting, issues. Accordingly, in considering options for revising the standard, IRS would benefit from understanding these potential consequences. Although IRS officials anticipate receiving formal comments from other federal agencies on any regulatory proposal they may publish, they do not have plans to work with representatives of these agencies as they initially consider what options may best address inaccurate SSNs.

Recommendations

We have two recommendations. First, given the central role that the reasonable cause standard plays in defining the responsibilities of employers, and thereby potential progress in improving the accuracy of SSNs on wage statements, we recommend that the Commissioner of Internal Revenue consider options for revising this standard. Second, because changes to this standard likely would have consequences for other federal agencies, the Commissioner should ensure that as IRS officials consider whether and how to modify this standard, representatives of other potentially affected federal agencies are consulted prior to issuing any proposed regulations.

Agency Comments and Our Evaluation

The Commissioner of Internal Revenue provided written comments on a draft of this report in an August 10, 2004, letter, which is reprinted in appendix II.
The Commissioner agreed with our recommendations, saying that IRS would consider revisions to the reasonable cause standard and that IRS must proactively work with other potentially affected agencies on possible changes to the standard. Regarding possible changes to the reasonable cause standard, we encourage IRS to explore options for having employers verify the accuracy of SSNs. Then, if an employee provided an inaccurate SSN, the employer could take timely actions to obtain a valid SSN. In agreeing that IRS should work with other agencies, the Commissioner did not indicate when or how IRS planned to do so. We believe it is important for IRS to work with the other affected agencies before draft regulations are developed so IRS has the benefit of the agencies’ views when designing the draft regulations. The Commissioner of SSA provided written comments on a draft of this report in an August 6, 2004, letter, which is reprinted in appendix III. The Commissioner said that due to the possible impact on SSA, employers, and employees, SSA should be involved in the development of any requirement imposed on employers to verify SSNs provided by employees, thus agreeing with our recommendation that IRS should consult other federal agencies prior to making any changes to the reasonable cause standard. In addition, at SSA’s request, we clarified a description of a pilot program for verifying employment eligibility. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. 
At that time, we will send copies to the Chair and Ranking Minority Member of the Senate Committee on Finance; the Chair and Ranking Minority Member of the House Committee on Ways and Means and its Subcommittees on Oversight and on Social Security; the Chair and Ranking Minority Member of the Senate Committee on the Judiciary and the Chair and Ranking Minority Member of its Subcommittee on Immigration, Border Security and Citizenship; the Chair and Ranking Minority Member of the Senate Special Committee on Aging; the Chair and Ranking Minority Member of the House Committee on the Judiciary and the Ranking Minority Member of its Subcommittee on Immigration, Border Security and Claims; the Chair and Ranking Minority Member of the House Select Committee on Homeland Security; the Chair and Ranking Minority Member of the House Committee on Government Reform and its Subcommittee on National Security, Emerging Threats, and International Relations; the Secretary of the Treasury; the Commissioner of Internal Revenue; the Commissioner of Social Security; the Director of the Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Boris Kachura, Assistant Director. If you have any questions regarding this report, please contact him at (202) 512-3161 or kachurab@gao.gov or me at (202) 512-9110 or brostekm@gao.gov. Key contributors to this report were Shirley Jones, Jean McSween, Jay Pelkofer, and Shellee Soliday.

Objectives, Scope, and Methodology

Our first objective was to describe the statutory provisions authorizing the Internal Revenue Service (IRS) to penalize employers who file Forms W-2 (Wage and Tax Statements), hereafter referred to as wage statements, with inaccurate social security numbers (SSNs).
To address the first objective, we identified the laws that gave IRS this authority, verified this with IRS Office of Chief Counsel staff, and researched the legislative history of those laws. Our second objective was to describe IRS's current program for identifying and penalizing employers who file wage statements with inaccurate SSNs, including any changes under consideration. To address the second objective, we reviewed IRS's current penalty program with various IRS officials, including the Program Director for Penalties and Interest, and reviewed a Treasury Inspector General for Tax Administration (TIGTA) report addressing the same subject. We interviewed Small Business/Self-Employed (SBSE) Division and Large and Mid-Size Business (LMSB) Division Employment Tax officials to determine how employment tax examinations can be used to identify employers who should possibly be assessed a penalty for filing wage statements with inaccurate SSNs. We also sought from the Program Director for Penalties and Interest, the SBSE Office of Employment Tax Director, and the LMSB National Program Manager for the Employment Tax Program any data collected by IRS that would show the number of employers who had been assessed penalties for filing wage statements with inaccurate SSNs. We reviewed the IRS Office of Chief Counsel file related to the regulations implementing the legislation identified under objective one, as well as the regulations themselves, and interviewed Chief Counsel staff and the National Program Director for Penalties and Interest about what employers must do, under the regulations, to avoid a penalty for filing a wage statement with an inaccurate SSN. We also reviewed information IRS provided to employers about their responsibilities in relation to filing accurate SSNs.
We reviewed documents related to IRS's review of 100 "egregious" employers who filed large numbers of wage statements with inaccurate SSNs, including an LMSB Division report summarizing the results of its review of 50 of those employers, and discussed the study with the Program Director for Penalties and Interest. These discussions covered, among other things, the objectives and methodology used for the reviews, any results to date, and how IRS planned to use the results, including IRS's plans for developing a penalty program. Our third objective was to describe the extent to which IRS's program meets legislative requirements and the likelihood that employers will be penalized for filing wage statements with inaccurate SSNs under that program. To address the third objective, we compared the information we collected related to the first objective—the statutory provisions—to the information we collected related to the second objective—specifically, IRS's program for identifying and penalizing employers who file such wage statements. We also reviewed any data IRS collected on penalties assessed against employers who filed such wage statements and the regulations implementing the legislation. We reviewed IRS's examination of 100 "egregious" employers who filed large numbers of wage statements with inaccurate SSNs and the LMSB Division report summarizing the results of its review of 50 of the 100 employers. We also met with Social Security Administration (SSA) officials to obtain information about the number of wage statements filed by employers with names and SSNs that do not match SSA's records, the relationship between wage statements with inaccurate SSNs and social security benefits, and the tools they make available to employers to verify SSNs. In addition, we reviewed literature and reports related to the use of SSNs by the Department of Homeland Security and discussed this use with knowledgeable GAO staff.
Our fourth objective was to describe how IRS is evaluating, or planning to evaluate, the effectiveness of its program on curtailing the filing of wage statements with inaccurate SSNs. To address the fourth objective, we interviewed the IRS Program Director for Penalties and Interest and reviewed the TIGTA report referred to previously. We did not review and analyze any documents related to an evaluation design since IRS does not have one. We did not conduct a data reliability assessment because IRS does not collect data on penalties related to wage statements filed with inaccurate SSNs. We performed our work from November 2003 through June 2004 in accordance with generally accepted government auditing standards.

Comments from the Internal Revenue Service

Comments from the Social Security Administration
Inaccurate social security numbers (SSN) on wage statements contribute to growth in the Social Security Administration's (SSA) Earnings Suspense File, increase the Internal Revenue Service's (IRS) workload to ensure that wages are properly identified for those earning them, and burden individuals who must work with SSA and IRS to resolve disputes that may affect their social security benefits and tax obligations. IRS's ability to penalize employers for submitting inaccurate SSNs on wage statements is intended to promote SSN accuracy. GAO was asked to describe (1) the statutory provisions authorizing IRS to penalize employers who file wage statements with inaccurate SSNs; (2) IRS's program to penalize such employers; and (3) the extent to which IRS's program meets legislative requirements, the likelihood of any penalties, and any program changes being considered. IRS is authorized to penalize employers who fail to file information returns or fail to include complete and correct information on them. Prior to 1986, IRS was authorized to assess penalties for failure to file information returns. The Tax Reform Act of 1986 added penalties for failure to include complete and correct information, established penalty amounts, and included two provisions limiting those penalties: the "reasonable cause waiver" and a maximum of $20,000 in penalties per filer per calendar year. The Omnibus Budget Reconciliation Act (OBRA) of 1989 increased the penalty amounts and the maximum total penalty amounts, which now range from $25,000 to $250,000 per filer per calendar year, and added a third limit, a "de minimis provision" limiting the number of penalties that can be assessed. These statutes apply to employers who submit wage statements with inaccurate SSNs. Both acts authorize penalties as a tool to help ensure that information returns include complete and accurate information.
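The interaction between per-statement penalties and the per-filer annual cap can be illustrated with simple arithmetic. The $250,000 figure below is the upper end of the post-OBRA cap range described above; the per-statement amount is a hypothetical input, since actual amounts depend on filer size and the timing of corrections.

```python
def total_penalty(per_statement_penalty: float, n_failures: int,
                  annual_cap: float = 250_000.0) -> float:
    """Accumulate per-statement penalties and apply the per-filer,
    per-calendar-year cap. Hypothetical sketch; the per-statement amount
    and applicable cap vary under the statute."""
    return min(per_statement_penalty * n_failures, annual_cap)
```

For instance, at a hypothetical $50 per incorrect statement, 100 failures would yield $5,000, while 10,000 failures would be capped at $250,000.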
According to IRS officials, IRS has the capability to identify employers who file wage statements with inaccurate SSNs but does not have a dedicated compliance program for penalizing them. Currently, employers may be penalized based on an employment tax examination. IRS regulations define the steps an employer needs to take to demonstrate that any filing of wage statements with inaccurate SSNs was due to reasonable cause. If reasonable cause exists, any potential penalty will be waived. To qualify for the reasonable cause waiver, employers must be able to demonstrate that they solicited an SSN from each employee one to three times, depending on the circumstances, and that they used this information to complete the wage statements. Employers are not responsible for verifying the accuracy of an SSN. IRS is conducting a review of 100 "egregious" employers who filed large numbers or percentages of wage statements with inaccurate SSNs to determine whether and how to implement a penalty program. IRS's regulations implementing the penalty provisions meet the statutory requirements; however, the criteria for meeting the reasonable cause waiver are such that few, if any, employers are likely to be penalized for filing inaccurate SSNs. IRS has no record of ever penalizing an employer, including the employers contacted during IRS's review of "egregious" employers. IRS officials said they would consider changes, including requiring employers to verify SSNs provided by employees, as part of the "egregious" employer study. Requiring SSN verification, however, may affect employers and other federal agencies with roles related to federal immigration policy, since some portion of inaccurate SSNs on wage statements is attributable to illegal aliens using invalid SSNs. IRS officials said they would likely take the views of other agencies into account after drafting regulations.
Background OCPO’s Oversight and Strategic Support Division manages the Chief Procurement Officer’s procurement oversight efforts. Figure 1 shows the division’s four branches. All branches except the strategic sourcing branch are responsible for aspects of the procurement oversight efforts. Component-level contracting activity is led by component Heads of Contracting Activity (HCA), who have overall responsibility for the day-to-day management of the component’s contracting function. OCPO has oversight responsibilities for all nine DHS HCAs—one for each of the seven components with procurement offices and one HCA each for the Office of Selective Acquisitions and the Office of Procurement Operations, which provide contracting support to all other components. The Selective Acquisitions and Procurement Operations HCAs report directly to the Chief Procurement Officer. The seven other HCAs report directly to their component heads, but their contracting authority is delegated to them from the Chief Procurement Officer. Figure 2 shows the organizational relationships among the HCAs, the Chief Procurement Officer, and other senior DHS leadership. The Oversight and Strategic Support Division’s strategic sourcing branch has management responsibilities for DHS’s strategic sourcing initiative. DHS defines strategic sourcing as a collaborative and structured process of critically analyzing DHS spending and using an enterprise approach to make business decisions about acquiring and managing commodities and services more effectively and efficiently across multiple components or the entire department. DHS’s strategic sourcing contract vehicles include contracts or agreements that have been established for use by two or more components. To maximize cost savings, DHS encourages component utilization of established strategic sourcing vehicles. 
DHS Has Improved Some Aspects of Its Procurement Oversight but Guidance Is Not Sufficiently Updated OCPO’s oversight has helped ensure that components address constructive assessments of their compliance with procurement regulations and policies and has increased the Chief Procurement Officer’s visibility into components’ progress against procurement-related metrics. OCPO has been less consistent in—but continues to hone—its implementation of other aspects of the program, including self assessments and parts of its acquisition planning reviews. However, at the time of our review, DHS had not issued updated policy or guidance reflecting OCPO’s current approach to procurement oversight, which led to a lack of clarity for components regarding what the oversight efforts entail. Procurement Oversight Focus Has Changed OCPO has maintained the overarching structure of its oversight program while making some modifications to reflect a more specific focus on procurement issues versus broader acquisition issues. As described in the management directive and guidebook, OCPO’s original oversight program included four types of reviews: on-site reviews, operational status reviews, self assessments, and acquisition planning reviews. It was based largely on GAO’s Framework for Assessing the Acquisition Function at Federal Agencies and assessed broad issues related to both acquisitions and procurement. Our prior work on the acquisition oversight program in 2007 found that OCPO’s oversight plan generally incorporated effective acquisition management principles, but that DHS faced challenges in implementing its oversight program. OCPO began making changes to the program in 2008, starting a transition to what is now referred to as “procurement oversight.” OCPO’s current efforts, which at the time of our review were not documented in written policy or guidance, still include the four types of reviews, but no longer fully reflect the original guidance. 
For example, some reviews are now focused on procurement rather than broader acquisition-related topics, which now fall under the responsibility of other offices such as the recently created Program Accountability and Risk Management Division, which is responsible for acquisition program management oversight. Current oversight efforts no longer examine cost, schedule, and performance variances for major investments, which are the responsibility of the Program Accountability and Risk Management Division. Table 1 compares the oversight review structure and focus as described in the original management directive and guidebook with current efforts as described by OCPO officials, since updated guidance was not available at the time of our review. Components Leverage Buying Power through DHS’s Strategic Sourcing Program Components Actively Support DHS’s Strategic Sourcing Program DHS component officials stated that most of their efforts to leverage buying power are through the department’s strategic sourcing program, which facilitates the development and award of contracts for the purchase of specific items or services across DHS. According to DHS data, DHS’s spending through strategic sourcing contract vehicles has increased steadily from $1.8 billion in fiscal year 2008 to almost $3 billion in fiscal year 2011, representing about 20 percent of DHS’s $14 billion in procurement spending for that year. The Office of Management and Budget’s Office of Federal Procurement Policy has cited DHS’s efforts among best practices for implementing federal strategic sourcing initiatives. DHS has implemented 42 strategic sourcing efforts, including indefinite-delivery indefinite-quantity contracts and blanket purchase agreements for goods and services ranging from ammunition to engineering services. The department also has several new initiatives under development. 
DHS policies encourage components to consider, but do not require, the use of departmentwide strategic sourcing contract vehicles. Usage of all departmentwide contracts is “mandatory for consideration” unless otherwise approved by the Under Secretary for Management, and therefore must be considered by DHS components prior to awarding a contract. Before pursuing their own procurements, components are to review the DHS-wide intranet site that lists available strategic sourcing contract vehicles. Some component officials said that their staff routinely check that list to determine whether one of those vehicles can be used before initiating a new procurement effort. Further, to encourage increased establishment of strategic sourcing contract vehicles, the HSAM requires the components to involve the strategic sourcing program office to determine if the requirement lends itself to the establishment of a departmentwide contract. If a DHS component makes a decision to implement its own contract instead of a departmentwide contract, it must document in the acquisition plan and contract file the rationale for doing so and notify the Chief Procurement Officer for review and approval. DHS is taking steps to strengthen the use of strategic sourcing at the department and has drafted, but not yet issued, a management directive that would make use of strategic sourcing contract vehicles mandatory, with exceptions. The Chief Procurement Officer has identified increasing strategic sourcing as a departmentwide priority and OCPO encourages the utilization and development of strategic sourcing vehicles in a variety of ways. OCPO officials explained that they host quarterly meetings and training sessions with the DHS Strategic Sourcing Working Group, meet with individual component programs and procurement offices, and post all contract information and ordering guides on the strategic sourcing web page. 
OCPO includes metrics in its quarterly reports to track components’ strategic sourcing contract vehicle utilization rates and savings, though it has not established component-specific goals or targets to further encourage use and development of strategic sourcing contract vehicles. OCPO officials told us that they consider their strategic sourcing program to be robust, and therefore do not currently think it would be worth the additional effort to develop and track component-specific goals. They said that if component participation were to decline, they might consider developing component-specific goals. Component officials we met with cited a variety of benefits associated with using DHS’s strategic sourcing program. Most components we interviewed stated that they rely on department-level strategic sourcing policy and efforts rather than developing their own. Several officials explained that, once the contract vehicle is in place, it is much quicker to award contracts. Some components cited economies of scale and indicated that they thought prices had gone down in some areas. In addition to using the vehicles, DHS components are involved with the program in a variety of ways. Component representatives serve on working groups to help identify potential strategic sourcing opportunities and develop shared requirements. Components are also tasked with serving as the lead for specific initiatives. For example, the Secret Service initiated a department-wide contract on tactical communications in 2012 and Customs and Border Protection led a department effort to obtain canines in 2011. Components Have Leveraged Contracts with Other Agencies Some components offered examples of contracts they leveraged with other agencies. 
For example, Customs and Border Protection leveraged a contract with the Department of Defense for air and marine assets in 2008 and the Secret Service partnered with the Defense Information Systems Agency and White House Communications Agency in 2012 to obtain an event planning, scheduling, and reporting system. In another example, the Coast Guard received price discounts for its HC-130J aircraft starting in 2000 by leveraging an Air Force vehicle rather than contracting directly with the manufacturer. However, most components found it more efficient to use DHS’s strategic sourcing vehicles than to leverage contracts with other agencies. Components described several challenges associated with leveraging contracts with other agencies, such as additional up-front planning, identifying shared requirements, and paying associated fees to the lead agency. Component officials indicated that they generally do not leverage contracts directly with other DHS components. Several component officials and the director of DHS’s strategic sourcing branch said that it is not an efficient use of time or resources for individual components to reach out directly to other components to identify shared requirements, given that the DHS-wide effort already does this across components. The director also noted that if a component had an idea for a contract to leverage with another component, it should first share that example at a DHS-wide strategic sourcing forum in case any other components share that same requirement. However, if the components then determine that only two components share that requirement, it would make sense for those two components to work together directly. Conclusions With its initial oversight efforts in 2005 and today’s more streamlined procurement oversight approach, OCPO has increased department-level insight into components’ procurement operations and recommended ways that components can take steps to improve their overall procurement operations. 
OCPO’s consistent and constructive on-site and operational status reviews also help ensure that components work towards common departmental procurement goals. However, we found that DHS had not updated its procurement oversight policy and guidance to reflect the increased focus on procurement and changes to the original oversight program. As a result, we found that component officials did not always know what was expected of them regarding the implementation of the procurement oversight efforts. DHS’s recent revision of its policy and guidebook is a step in the right direction, but inconsistencies in the documents as well as with the program as described to us by DHS officials could lead to further confusion, which would diminish the value of the program and opportunities for increased accountability for procurements. Recommendation for Executive Action In order to help ensure that DHS component officials understand what OCPO expects of them in its procurement oversight, we recommend that the Secretary of Homeland Security direct the Chief Procurement Officer to review and ensure consistency between Directive 143-05 and the Procurement Oversight Program Guidebook and with the department’s current procurement oversight efforts. Agency Comments and Our Evaluation We provided a draft of this report to DHS for review and comment. The draft included a recommendation that DHS issue updated policy and guidance to reflect changes to the department’s procurement oversight efforts. In written comments, the department concurred with our findings and recommendation. The department’s comments are reprinted in Appendix II. DHS informed us that, to address the recommendation in our draft report, it updated its directive and guidebook to better mirror current procurement oversight practices and that it will make both documents available on DHS’s internal website. DHS’s revisions to the policy and guidance are a step in the right direction. 
However, we found inconsistencies between the updated directive and guidebook as well as with the program as described to us by DHS officials. As we noted when we made our draft recommendation, component officials do not always know what is expected of them regarding the implementation of the procurement oversight efforts, which diminishes the value of the program. Therefore, while we acknowledge DHS’s most recent efforts to update the directive and guidebook, we continue to believe that additional action is needed and have therefore revised our recommendation. We are sending copies of this report to the Secretary of Homeland Security. In addition, the report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix IV. Appendix I: Scope and Methodology The objectives of this review were to (1) assess the Department of Homeland Security’s (DHS) efforts to implement procurement oversight, and (2) identify DHS components’ use of strategic sourcing to leverage their buying power. To assess DHS efforts to implement procurement oversight, we examined DHS’s procurement oversight policies and guidance; reviewed prior GAO reports on acquisition and procurement oversight; interviewed knowledgeable officials from the Office of the Chief Procurement Officer (OCPO) and from DHS components; and examined OCPO and component documentation of oversight efforts, including OCPO’s review schedule, quarterly reports, goal letters, operational status reports, and documentation of on-site reviews from 2007 to the present. 
Specifically, we interviewed the Chief Procurement Officer, the Director for Oversight and Strategic Support, officials from the four branches of the Oversight and Strategic Support Division, and senior officials, including four Heads of Contracting Activity (HCA), from the nine DHS contracting offices with HCAs to discuss the evolution of DHS’s oversight efforts. To evaluate the extent to which DHS components address the findings and recommendations identified in on-site reviews, we interviewed OCPO and component contracting officials, reviewed on-site review findings and recommendations, and analyzed components’ written responses regarding actions they took to address recommendations from their most recent reviews. For all eight contracting offices that OCPO reviewed, we examined the contracting offices’ descriptions of actions they took to address recommendations from their most recent on-site reviews and assessed their documentation of those actions. To identify DHS components’ use of strategic sourcing to leverage their buying power, we reviewed relevant policies and guidance, examined department and component documentation, and interviewed OCPO and component officials on practices they employ, contracts they have leveraged, and views on resulting benefits. Specifically, we interviewed officials from DHS’s strategic sourcing branch, as well as all nine contracting offices with HCAs, to gain an understanding of DHS’s strategic sourcing processes, the availability of strategic sourcing contract vehicles, and the potential of establishing component-specific strategic sourcing goals. We conducted our work from March 2012 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Homeland Security Appendix III: Snapshot from a 2012 DHS Component Quarterly Report Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following individuals made key contributions to this report: Katherine Trimble, Assistant Director; Laura Holliday, Andrea Bivens, Laura Greifner, and Sylvia Schatz.
DHS bought over $14 billion in goods and services in fiscal year 2011--over one quarter of its budget--and processed over 100,000 transactions to support its homeland security missions. In 2005, DHS established an oversight program to provide department-level insight into components' procurement of goods and services and to identify successful acquisition management approaches. DHS has also established specific initiatives, such as a strategic sourcing program in 2003 to reduce procurement costs and gain other efficiencies by consolidating requirements. GAO (1) assessed DHS's efforts to implement procurement oversight, and (2) identified DHS components' use of strategic sourcing to leverage their buying power. To do this, GAO reviewed procurement oversight policies and guidance, interviewed officials from OCPO and DHS components, reviewed prior GAO reports, reviewed on-site review findings and recommendations, and examined DHS and component documentation of oversight and strategic sourcing efforts. The Department of Homeland Security's (DHS) Office of the Chief Procurement Officer (OCPO) continues to implement and has improved some aspects of its procurement oversight but has not sufficiently updated its guidance. OCPO's oversight has helped ensure that DHS components receive and address constructive assessments of their compliance with procurement regulations and policies. The oversight also has increased the Chief Procurement Officer's visibility into components' progress against procurement-related metrics. For example, OCPO establishes annual procurement goals for the components and tracks their progress in quarterly reports. OCPO has been less consistent in--but continues to hone its implementation of--other aspects of the program, such as self assessments and parts of its acquisition planning reviews. 
However, until GAO sent DHS a draft of this report recommending that DHS issue updated policy and guidance to reflect changes to the department's procurement oversight efforts, the department did not issue updated policy or guidance. This has led to a lack of clarity among components regarding what the oversight efforts entail. For example, some components did not complete a required self assessment in 2011. GAO's review of the revised policy and guidance found inconsistencies between the two and with current oversight efforts. DHS component officials stated that most of their efforts to leverage buying power are through the department's strategic sourcing program, which provides departmentwide contract vehicles for the purchase of specific items or services. According to DHS data, the department's spending through strategic sourcing contract vehicles has increased steadily from $1.8 billion in fiscal year 2008 to almost $3 billion in fiscal year 2011, representing about 20 percent of DHS's procurement spending for that year. The Office of Management and Budget has recognized some of DHS's strategic sourcing efforts as best practices. DHS policies encourage components to consider, but do not require, the use of strategic sourcing contract vehicles. The Chief Procurement Officer has identified increasing strategic sourcing as a departmentwide priority and OCPO encourages the utilization and development of strategic sourcing contract vehicles in a variety of ways, including hosting quarterly training sessions and posting contract information on the strategic sourcing web page. In addition, while components have leveraged contracts with other agencies, many found it more efficient to use DHS's strategic sourcing contract vehicles. DHS components generally do not use other components' contracts.
Background When GATT came into force in 1948, some member countries had active state trading programs and wanted to ensure their governments’ right to engage in market activities. However, governments with a dual role as market regulator and market participant can act in ways that protect domestic producers and disadvantage foreign producers. While the drafters of GATT 1947 accepted STEs as legitimate participants in trade, they recognized that STEs, especially those with a monopoly of imports or exports, could be operated to create serious obstacles to trade. GATT 1947 addressed STEs in article XVII (see app. I for the complete text). However, article XVII did not define the term “state trading enterprise,” and as discussed later in this report, GATT members have had problems understanding which entities were subject to the provisions of article XVII. As a result of the Uruguay Round, GATT 1994 defined STEs in the Understanding on the Interpretation of Article XVII of the General Agreement on Tariffs and Trade 1994 (the “Understanding”—see app. III for the complete text or page 11 of this report for the definition). All entities covered by this definition are subject to article XVII. Article XVII establishes a number of guidelines and requirements with respect to the activities of STEs and the obligations of member countries. For example, it stipulates that STEs shall act in a manner consistent with the general principles of nondiscriminatory treatment; that STEs shall make any purchases or sales in accordance with commercial considerations and shall allow enterprises from other member countries the opportunity to compete; that member countries shall provide certain information to the GATT/WTO secretariat about their STEs’ activities; and that member countries are not required to provide confidential information that (1) would impede law enforcement, (2) would be contrary to the public interest, or (3) would prejudice the legitimate commercial interests of their STEs. 
Information is provided to the GATT/WTO secretariat about STEs and their activities on the basis of a questionnaire adopted in 1960. (The full text of the questionnaire is contained in app. II.) GATT/WTO members are to provide responses, called “notifications,” to the questionnaire. Ideally, the notifications should provide enough transparency (openness) about STE operations to determine whether or not they are adhering to GATT disciplines. The questionnaire asks members to list their STEs, the products for which STEs are maintained, and the reasons for maintaining STEs. It also asks them to provide certain information about how their STEs function and statistics that indicate the extent of trade accounted for by STEs. Other portions of GATT 1947 and GATT 1994 also contain references to STEs. For example, countries that have negotiated with other GATT/WTO members to provide a certain level of protection for domestic producers cannot allow their STEs to operate in a way that affords a level of protection greater than was negotiated. Also, references made in certain GATT articles to import or export restrictions include those made effective through STEs. Scope and Methodology We prepared this report for congressional requesters to provide information about the nature of state trading in other countries and the treatment of STEs in GATT/WTO. To determine the extent of STEs in GATT member countries, the type of information available about STEs, and the level of compliance with article XVII, we reviewed article XVII notifications provided to the GATT/WTO secretariat from 1980 to 1994. (Details on our analysis of STE reporting are contained in apps. IV and V.) We also reviewed reports of the Panel on Notifications of State Trading Enterprises, member country position papers on article XVII presented during the Uruguay Round, and GATT/WTO secretariat notes on article XVII prepared for the Uruguay Round. 
Finally, we discussed the effectiveness of article XVII prior to the Uruguay Round with officials from the United States, GATT/WTO, and other countries. We discussed the results of the Uruguay Round as contained in GATT 1994 and the Agreement on Agriculture with officials from the United States, GATT/WTO, and other countries, including the chairmen of the WTO Working Party on State Trading Enterprises and the WTO Committee on Agriculture. We also reviewed relevant documents, including the Understanding and the Uruguay Round Agreement on Agriculture. We discussed the potential for an increase of STEs in GATT/WTO with relevant officials from the United States, GATT/WTO, the United Nations, and other countries. We also reviewed studies of economies in transition by the Organization for Economic Cooperation and Development (OECD) and other expert organizations. We discussed U.S. efforts to monitor the activities of STEs in other countries with respect to GATT/WTO requirements with officials from USTR, USDA/FAS, the Department of Commerce, and the International Trade Commission. We obtained oral comments on a draft of this report from the U.S. Trade Representative and the Department of Agriculture. Their comments are discussed on page 19. We conducted our review in Washington, D.C., and Geneva, Switzerland, from April 1995 to July 1995 in accordance with generally accepted government auditing standards. Compliance With Article XVII Reporting Requirements Has Been Poor A central objective of article XVII is the collection of information about STEs in member countries in order to provide transparency about their activities and ensure they operate in accordance with GATT disciplines. However, according to GATT/WTO and member country officials, article XVII has generally been ineffective in meeting this objective. 
The notifications we reviewed provided much of the information requested in the questionnaire, including sectors in which STEs operate, their purposes and activities, and some statistics about their operations. However, compliance with the notification requirement was limited during 1980 to 1994, as 79 percent of GATT members did not submit STE notifications during 1981, the best year of reporting. The evidence we obtained suggested the lack of compliance could be attributed to (1) confusion over the definition of STEs, (2) the lack of systematic review of notifications received, (3) the apparent low priority some GATT members assigned to article XVII’s reporting requirement, and (4) the overall burden associated with GATT reporting requirements. Under these circumstances, it is impossible to determine whether article XVII has yielded information on the full nature and extent of STE activity in GATT/WTO member countries. Moreover, the lack of notifications from most member countries has hindered GATT/WTO members in identifying all STEs in GATT/WTO member countries and determining whether they operate in accordance with GATT disciplines. Some STE Information Available Twenty-nine member countries submitted STE notifications to the GATT/WTO secretariat at least once during the period 1980 to 1994, with 21 of the countries reporting some form of state trading. These notifications provided some insight into the activities of STEs in member countries. For example, the majority of STEs described in these notifications operated in the agriculture sector, covering such products as grains and cereals, dairy products, beef and veal, and sugar (see fig. 1). Member countries also reported that they maintained state trading in alcoholic beverages and petroleum products. In addition to the products listed in figure 1, a few countries also provided notifications about state trading in salt, coal, inflammables, aircraft, and nuclear fuel. 
The notifications also provided information related to the purpose of STEs and how they operate. With respect to purpose, some member countries have reported using STEs to help agricultural producers “achieve their full potential in overseas markets,” to ensure “protection of the domestic agricultural production against low-priced imports,” and to ensure a “stable and adequate supply” of certain agricultural commodities as part of “national defense preparedness.” Regarding operations, member countries have reported that STEs acted as sole agents for production, imports, and/or exports in the sectors covered. Additionally, the STEs assessed levies on production and/or imports, issued export licenses, and received government guarantees on borrowed funds. Other state trading practices reported included government-guaranteed minimum prices and subsidized exports. The variety of state trading practices reported to GATT makes comparisons between countries difficult since the level of state involvement, and therefore impact on trade, may differ in each case. In general, most notifications have contained statistical information on STE operations, but the information has occasionally been less than requested in the questionnaire. For example, although the questionnaire asked that statistics be furnished on the value and quantity of imports, exports, and national production for the products notified where possible, several countries did not provide information covering national production. In addition, some countries provided information on the quantity, but not the value, of trade and production. Most GATT Members Did Not Submit STE Notifications In accordance with article XVII, each GATT/WTO member country should provide new and full responses to the questionnaire on state trading activities every 3 years, called “full notifications,” even if the country does not have any STEs. 
Additionally, GATT/WTO members should provide notifications of any changes to their state trading regimes in intervening years, called “updating notifications.” Nonetheless, compliance with article XVII was poor during the period we reviewed. Regular, full notifications of STEs by GATT members were the exception and not the rule. Even during 1981, the full notification year with the best response rate, approximately 79 percent of GATT member countries failed to submit a notification. (Article XVII notifications by year and by country from 1980 to 1994 are contained in apps. IV and V, respectively.) As shown in appendix IV, compliance with the full notification requirement every 3 years was poor. The number of countries responding during full notification years varied from a high of 18 notifications in 1981 (about 21 percent of GATT members) to a low of 7 notifications in 1990 (about 7 percent of GATT members). Only Finland, Norway, and Sweden provided full notifications for all five of the full notification years occurring during the period we reviewed. Between 1980 and 1994, a total of 29 countries responded at least once to article XVII, providing either full or updating notifications. In several cases, the updating notifications provided the same amount of information contained in some member countries’ full notifications. As shown in appendix V, Austria, Norway, South Africa, and Yugoslavia were the most regular reporters, providing notifications in at least 11 of the 14 years under review. However, of the 29 countries submitting any notification, about 62 percent of the countries reported 3 or fewer times during the 1980 to 1994 period. Eight countries, including the United States, reported once during the 14 years. Due to poor compliance by most countries over the period reviewed, the GATT/WTO secretariat may lack current information about STEs in GATT/WTO member countries. 
For example, in our review of notifications we found that 6 of the 29 countries that provided notifications during this period had not updated their notification since 1981, and another 5 countries had not updated their notifications since 1984. Whether the level of state trading in these countries has increased or decreased over the past 15 years remains unclear. In addition, the lack of information hinders GATT/WTO members in determining whether other member countries’ STEs are adhering to GATT disciplines.

Various Problems May Explain the Lack of STE Reporting

We reviewed documents that indicated that some GATT members were uncertain about the definition of STEs and the coverage of article XVII and that this uncertainty may have caused some countries not to report STEs. The GATT Panel on Notifications of State Trading Enterprises emphasized in a 1960 report that STEs encompass a variety of activities or entities. However, our review of STE notifications confirmed what GATT/WTO and member country officials told us—that some member countries continued to struggle with the definition of STEs. In one case, for example, a country decided not to report at all since “the meaning and coverage of the term ‘state enterprise’ in Article XVII:1(a) of the General Agreement are not clear.” Inconsistent responses to the questionnaire further illustrate this possible lack of understanding of the definition of STEs. For example, two Central European countries submitted notifications in 1984 claiming that they had no state trading in the meaning of article XVII. However, two other Central European countries with similarly structured economies both reported extensive STE activity during this same period. Considering that all four countries operated command economies in which most aspects of trade involved the government, the inconsistent answers demonstrated a possible lack of agreement regarding the article XVII questionnaire.
The lack of article XVII reporting by some member countries may also be attributed to the absence of a regular process within GATT for reviewing the notifications submitted. For example, one member country, in explaining its decision not to submit a notification, noted “the absence of any regular procedure for examining notifications so as to afford greater transparency.” The member country went on to state that “one may also note that because of the absence of such a procedure, many countries do not see themselves in some way as ‘motivated’ to notify.” A U.S. official told us countries did not report STEs because there was no review of the notifications and thus no scrutiny over the notification process. Similar perceptions that article XVII reporting was not a priority may exist among other member countries. For example, an official of one country’s permanent mission told us that some countries have been lax in meeting their reporting responsibilities because they felt the disciplines on article XVII were not as rigid as other GATT disciplines. Finally, a USTR official suggested that the low response rate among developing countries might also have been linked to the burden of reporting under GATT. The official said many of the developing and smaller GATT member countries may not have the administrative capacity in their governments to comply fully with the multiple reporting requirements under the various GATT articles. Discussions with an official from one country’s permanent mission confirmed this observation. However, this explanation does not address why some of the larger GATT members either did not report during the period we reviewed or had low response rates.

Uruguay Round Improved Some Aspects of Article XVII, but Weaknesses Remain

Although state trading was not a major negotiating issue during the Uruguay Round, GATT member countries agreed to clarify article XVII to address some of the problems previously described.
The clarifications are contained in the Understanding, which is part of GATT 1994. The Understanding provided a definition of STEs and contained several measures to address procedural weaknesses of article XVII. The Understanding did not change the questionnaire used to collect information about STEs, but WTO members have agreed to review the questionnaire and the adequacy of information provided about STEs. Because these measures have not been fully implemented, it is too early to assess whether Uruguay Round changes will improve compliance with article XVII and, thereby, increase the amount of information available about STE activities and improve the quality of such information.

United States and Others Proposed Clarifications to Article XVII

Officials from the United States, GATT/WTO, and other countries we contacted recalled that state trading and the revision of article XVII were not major issues during the Uruguay Round. Nevertheless, the United States and other countries identified problems with article XVII and proposed modifications during the late 1980s to correct these problems. The United States proposed clarifying the application of all GATT disciplines to STEs, particularly marketing boards, and increasing the transparency about state trading practices. The United States suggested transparency could be improved by creating a working party to clarify the definition of STEs, review and revise the questionnaire, and conduct periodic comprehensive reviews of STE notifications. Other countries noted the need to clarify the definition of STEs, improve the notification process, and better understand the role of STEs in trade. According to U.S. officials, the text of the Understanding was made final in 1990 with the expectation that the Uruguay Round would end shortly thereafter. Although negotiations did not end until December 1993, they said article XVII was not revisited after 1990. One U.S.
official told us if the United States had known in 1990 that negotiations were to continue for 3 years, it might have sought additional improvements to article XVII.

Uruguay Round Defined STEs and Addressed Procedural Weaknesses of Article XVII

The Understanding defined STEs as “governmental and nongovernmental enterprises, including marketing boards, which have been granted exclusive or special rights or privileges, including statutory or constitutional powers, in the exercise of which they influence through their purchases or sales the level or direction of imports or exports.” The Understanding addressed procedural weaknesses of article XVII by improving the process for obtaining and reviewing information. For example, the Understanding required member countries to review their policies on submitting notifications about STEs and consider the need to ensure transparency in order to permit a clear appreciation of STEs’ operations and their effect on international trade. It also gave member countries the opportunity to question information provided by another member country. If a member country believes another member country has not adequately met its notification obligation, it can raise the matter for discussion among WTO members and can submit a counternotification to the WTO Council for Trade in Goods if its concerns are not resolved. The Understanding further addressed procedural weaknesses by establishing the WTO Working Party on State Trading Enterprises (the Working Party). The Working Party’s responsibilities include (1) reviewing notifications and counternotifications; (2) in light of notifications received, reviewing the adequacy of the questionnaire and the coverage of STEs notified; and (3) developing an illustrative list of the kinds of relationships between governments and STEs and the kinds of activities engaged in by STEs. One U.S.
official said the creation of a regular process in the Working Party to review STE notifications was an important step towards improving compliance with article XVII. The Working Party met for the first time on April 6, 1995, to discuss the timetable for its work program during the next year. Some Working Party members told us they expect to meet informally through the summer to prepare for their next official meeting in the fall of 1995, when they will begin formal work on meeting their responsibilities. Finally, in order to improve member countries’ knowledge about STEs, the Understanding authorized the GATT/WTO secretariat to produce a background paper on the operations of STEs as they relate to international trade. GATT/WTO members told us they expect the paper to describe the countries that engage in state trading, the products traded, and the attributes of their STEs. The paper, which is due at the next official Working Party meeting, is to consider information provided so far to GATT/WTO about STEs as well as the next set of full STE notifications that were due on June 30, 1995. The Understanding did not change the form or content of the questionnaire used to collect information about STEs. Officials from the United States, GATT/WTO, and other countries told us the questions in the questionnaire can be answered with very specific or very general information and, as a result, the information provided so far about STEs does not provide sufficient transparency about STE activities to ensure they are adhering to GATT disciplines. However, GATT/WTO members differed on the exact information necessary to achieve such transparency. For example, officials from the United States and some other countries told us they are interested in obtaining more detailed information about transaction prices. 
Other GATT/WTO members maintained that such information is confidential and related to an STE’s commercial interest and that countries are not required by article XVII to disclose this type of information. The Understanding obligated the Working Party to study the adequacy of the questionnaire. Some Working Party members, including U.S. officials, told us they hope their discussions will produce a revised questionnaire that could be implemented at the ministerial conference scheduled for late 1996.

It Is Premature to Assess Whether Compliance With Article XVII Will Improve

Several U.S. and GATT/WTO officials said they expect that compliance with article XVII’s notification requirement will increase. Although the next set of article XVII notifications was due June 30, 1995, the majority of WTO member countries, including the United States, did not meet the deadline. More notifications were expected to be submitted during the summer of 1995. Until all or most notifications have been received and the Working Party can begin to review them, it would be premature to assess whether the addition of a definition and procedural measures would increase compliance with article XVII’s reporting requirements and improve the information available about STEs. A GATT/WTO official suggested that the general willingness to comply with article XVII’s notification requirement would be affected by the notification decisions of major trading countries, such as Canada, European Union (EU) member countries, or the United States. For example, a representative from one country, which views its own level of state intervention in trade as comparable to the EU’s, told us his country would probably postpone its own notification of STEs until it saw how the EU interpreted the notification requirement.
In addition, representatives from several countries’ permanent missions to GATT/WTO told us they think the United States should provide an STE notification for USDA’s Commodity Credit Corporation (CCC). Representatives from other countries’ missions told us they were not sure whether CCC would come under the Understanding’s definition of an STE. The United States reported CCC as an STE in 1979 but subsequently reported no state trading in 1984. No decisions have yet been made about enforcing compliance with article XVII and related procedural issues. For example, it is not clear how the Working Party will handle situations in which countries (1) do not comply with the notification requirement, (2) do not respond to all questions in the questionnaire, or (3) submit a counternotification about another country’s STEs. Officials from GATT/WTO and other countries said that in general, enforcing compliance with article XVII is up to the Working Party and therefore is dependent on the will of member countries.

Agreement on Agriculture Applies to STEs

According to officials from the United States and other countries, the treatment of STEs was also discussed during Uruguay Round agriculture negotiations because of the prevalence of STEs engaged in agricultural trade. STEs that trade agricultural products are subject to all disciplines contained in the Uruguay Round’s Agreement on Agriculture, including several specific references made to STEs. WTO members are asked to provide certain information to the WTO Committee on Agriculture regarding implementation of their Uruguay Round commitments, including those made effective through STEs. However, because the first implementation year will not be completed until 1996, it is too early to tell whether countries with agricultural STEs are meeting their commitments. The Agreement on Agriculture contained a variety of disciplines designed to liberalize trade in agricultural products.
GATT/WTO members are required to make specific reductions in three types of agricultural support—market access restrictions, export subsidies, and internal support—over a 6-year period beginning in 1995. In the area of market access restrictions, countries are required to convert all nontariff barriers, such as quotas, to tariff equivalents and reduce the resulting tariff equivalents (as well as old tariffs) during the implementation period. In the area of export subsidies, countries are required to reduce their budgetary expenditures on export subsidies and their quantity of subsidized exports. Finally, countries are required to reduce an aggregate measurement of selected internal support policies. In addition, the agreement established a WTO Committee on Agriculture to monitor implementation of Uruguay Round commitments. The disciplines on market access restrictions contain two specific references to STEs. First, the definition of nontariff barriers subject to conversion to tariff equivalents includes nontariff measures maintained through STEs. Second, when providing information to the Committee on Agriculture regarding implementation, GATT/WTO members are asked to explain the administration of market access commitments. Where such commitments are administered by STEs, details about the STE and its relevant activities should be provided. The Chairman of the Committee on Agriculture told us it is premature to assess whether commitments on market access restrictions are being met. The first year of implementation of the Agreement on Agriculture will be completed during 1996, depending on member countries’ implementation dates. A USDA official emphasized the importance of STE notifications submitted to the Working Party on STEs, as this information could help determine whether GATT/WTO members with agricultural STEs are meeting their market access commitments. 
A GATT/WTO official told us that references to STEs in the export subsidy disciplines are less specific than those in the market access disciplines. The agreement defines the types of export subsidies subject to reduction. If any such subsidies were paid to or received by an STE, they would be subject to reduction. Moreover, export subsidies not targeted for reduction cannot be applied in a manner that allows member countries to circumvent their commitments to reduce export subsidies. This would include export subsidies provided to or by STEs. According to a GATT/WTO official, it should be relatively easy to determine whether countries are meeting their commitments to reduce export subsidies, because the relevant subsidies tend to be quantifiable and easily identified. The official suggested it may be more difficult to know whether countries are circumventing these commitments because other types of export subsidies, including some that could be provided through STEs, are not as easily identified. Officials from the United States, GATT/WTO, and other countries recalled that during the agriculture negotiations, obtaining disciplines on STEs was not considered to be as important as obtaining disciplines on market access restrictions, export subsidies, and internal support. GATT members focused on the latter group of policy tools because of their distortive effect on trade. U.S. officials told us the United States viewed the EU’s agricultural policies as particularly problematic. During the negotiations, the United States relied on the support of countries in the Cairns Group to achieve meaningful concessions from the EU. Because some countries in the Cairns Group use STEs in the agriculture sector, U.S. officials said it would have been counterproductive to ask these countries to support U.S. efforts and challenge their agricultural policies at the same time. 
Role of STEs in GATT/WTO May Increase

The role of STEs in GATT/WTO may increase as countries whose governments play major or dominant roles in their economies apply to join GATT/WTO. A number of such countries have already applied to join GATT/WTO, including China, Russia, and Ukraine. Officials in current member countries and at the GATT/WTO secretariat observed that integrating these countries into the GATT/WTO trading system would be a tremendous challenge because their economic traditions and attitudes towards state trading differ significantly from those of most current members. Several of these officials said that the role of state trading in GATT/WTO is a key issue for future discussion. Some GATT/WTO members told us they are interested in strengthening the disciplines contained in article XVII, but they also said that substantive changes to the article’s text will not likely occur until the next round of multilateral trade negotiations that is expected to begin in 1999 or the year 2000. Studies and other available information indicate that STEs play a more significant role in these applicant countries than in countries that have provided STE notifications to GATT/WTO. According to an official at the United Nations Economic Commission for Europe (ECE), STEs still play a large role in the economies of countries in the former Soviet Union (FSU), particularly in the case of exports. For example, at the beginning of 1994 Ukraine still maintained STEs in a number of sectors, including machine building, transportation, agriculture, coal, oil, and gas. However, this official said the FSU countries are also committed to follow the example of the Central and East European countries and slowly eliminate state trading regimes. Thus, it is also possible the role of STEs in these countries will decline over time. Some GATT/WTO members told us the state’s role in China’s economy may not decrease.
The state has been instrumental in opening China’s economy and implementing market-oriented reforms. One aspect of reform has been the decentralization of trade authority from the national Ministry of Foreign Trade and Economic Cooperation to provincial governments. However, recent studies by OECD and the Brookings Institution note that this decentralization has not ended government control over trade. U.S. officials told us that meetings concerning China’s accession to GATT/WTO have been dominated by discussions of state trading, as member countries attempt to understand the government’s economic role and negotiate disciplines on its ability to control trade. Officials from the United States, GATT/WTO, and other countries told us China has agreed to abide by the requirements of article XVII regarding the activities of its STEs. However, several officials from the United States and other countries indicated that article XVII alone is not sufficient to help GATT/WTO members develop an understanding of China’s STEs or to discipline them should the need arise.

Multiple Agencies Have STE Monitoring Role

USTR coordinates STE monitoring, with the participation of several U.S. agencies. USTR has primary responsibility for monitoring developments related to WTO and requests assistance from other agencies through an interagency group called the Trade Policy Staff Committee (TPSC). Monitoring STE issues is the responsibility of the TPSC Subcommittee on WTO Market Access, which includes officials from USDA; the Departments of Commerce, Defense, Energy, the Interior, Labor, State, and the Treasury; and the Office of Management and Budget. The USTR official who chairs the subcommittee told us that because state trading is a broad subject, monitoring it requires a wide range of expertise. Therefore, these officials attempt to monitor the activities of STEs in other countries according to their areas of expertise and then provide this information to USTR.
They are to help USTR review the next round of STE notifications. In addition to these monitoring activities, U.S. officials participate in the WTO Working Party on STEs. USTR officials told us they would rely on the Working Party to monitor other countries’ compliance with the reporting requirements of article XVII and the Understanding. U.S. officials also participate in the WTO Committee on Agriculture. According to officials from the United States and other countries, the United States has been active in proposing agricultural STEs as a topic for discussion within the committee. In addition, USDA/FAS officials told us senior USDA officials have informed senior GATT/WTO officials of the importance the United States places on such discussions. One U.S. official told us USDA/FAS would monitor the extent to which STE notifications have enough information to determine whether countries are meeting the commitments contained in the Agreement on Agriculture. If the notifications do not allow such a determination, the United States can request that additional information be provided to the committee. USDA/FAS and Foreign Commercial Service staff are also responsible for monitoring STE activities in the countries where they are located as part of their regular reporting responsibilities. For example, USDA/FAS reports on major commodity sectors like wheat and dairy have covered STE activities. A USDA/FAS official told us the reporting instructions are flexible, and more information can be requested from staff in the field as necessary.

Agency Comments

We requested comments on a draft of this report from the U.S. Trade Representative and the Secretary of Agriculture, or their designees. On August 1, we obtained oral comments from USDA/FAS officials, including the General Sales Manager of FAS; and on August 2, we obtained oral comments from USTR officials, including the Director, Office of Tariff Affairs.
USDA and USTR officials generally agreed with the information presented in the draft report and noted that it presented an accurate picture of some of the problems with STEs in the WTO/GATT context. In addition, they provided some technical comments, which we incorporated into the report where appropriate. We are sending copies of this report to the Secretary of Agriculture and to the U.S. Trade Representative. We will also make copies available to other parties upon request. If you have any questions about the information contained in this report, please contact me at (202) 512-5889. Major contributors to this report are listed in appendix VI.

Appendix I: Article XVII of the General Agreement on Tariffs and Trade (GATT)
Appendix II: GATT Article XVII Questionnaire Adopted in 1960
Appendix III: The 1994 Understanding on the Interpretation of GATT Article XVII
Appendix IV: Article XVII Notifications by Year, 1980-94
Appendix V: Article XVII Notifications by Country, 1980-94
Appendix VI: Major Contributors to This Report

Major Contributors to This Report
General Government Division, Washington, D.C.: Phillip J. Thomas, Assistant Director; Michael Kassack, Adviser; Stanton J. Rothouse, Senior Evaluator; Rona Mendelsohn, Evaluator (Communications Analyst)
European Office
Office of the General Counsel, Washington, D.C.
Office of the Chief Economist, Washington, D.C.: Loren Yager, Assistant Director
Pursuant to a congressional request, GAO reviewed the activities of other countries' agricultural state trading enterprises (STE), focusing on: (1) General Agreement on Tariffs and Trade (GATT) members' reporting of STE activities from 1980 to 1994; (2) Uruguay Round commitments relating to STE; (3) the potential increase of STE under GATT and the World Trade Organization (WTO); and (4) U.S. efforts to monitor the activities of other countries' STE with respect to GATT and WTO requirements. GAO found that: (1) some information on STE in GATT member countries has been obtained through the notification process, but only about 21 percent of the member countries complied with the reporting requirement between 1980 and 1994; (2) GATT and WTO officials attributed the noncompliance to definitional problems, the lack of a systematic review of STE notifications, and the overall burden and the low priority that some member countries assigned to GATT reporting; (3) WTO officials plan to evaluate the questionnaire used to collect information about STE in order to improve member countries' compliance with the reporting requirements; (4) the Uruguay Round Agreement on Agriculture requires member countries to reduce market access restrictions, export subsidies, and internal support and report on how they complied with these commitments, beginning in 1996; (5) while countries like Russia and China are undertaking privatization efforts to move toward more market-oriented economies, the role of STE in GATT and WTO will likely increase if these countries become members of GATT and WTO; and (6) the U.S. Trade Representative is responsible for monitoring STE activities and their compliance with Uruguay Round commitments, while the Foreign Agricultural Service and the Foreign Commercial Service are responsible for monitoring STE activities in the countries where they are located and reporting on STE activities as needed.
Background

The armed services have a long-standing shortfall in their capability to adequately test electronic combat systems on aircraft and ships. From August 1989 through July 1991, we issued a series of reports identifying each service’s problems with its test equipment for electronic combat systems. To address these problems, in June 1993, the Air Force and Navy approved a Joint Mission Need Statement for a flight-line electronic combat systems tester to improve aircraft electronic combat test capability. The Department of Defense designated the Air Force as the lead service, and the Air Force and Navy entered into a memorandum of agreement in December 1994 to establish a joint tester program. Following a concept development phase, an engineering and manufacturing development contract was awarded in March 1996. The tester has been developed to provide the Air Force and Navy with a flight-line test capability for aircraft electronic combat systems, to include both on-board systems and those mounted outside the aircraft in pods. The contractor for the tester, AAI Corporation, has developed a basic core test set that can be used with various aircraft. The basic core test set is supplemented by subsidiary test program sets and related software for each aircraft type and its specific systems. The tester provides an end-to-end test capability for electronic combat systems, including jammers, radar warning receivers, and other subsystems and their associated wiring. The tester inputs radio frequency signals into the aircraft’s antennae and then measures whether the signals were correctly received and the appropriate responses generated by the electronic combat systems. The tester can identify faulty wiring and also isolate the faulty system component to make the maintenance task easier. Developmental testing of the basic core test set and the test program set for the F-15C was completed in October 2000 and for the F/A-18C test program set in December 2000.
Additional test program sets are to be developed for most of the current Air Force and Navy fighter aircraft equipped with electronic combat systems, and there will be growth potential for adapting the system for future aircraft. Quantities to be procured include 56 Air Force and 40 Navy basic core test sets with test program sets for the F-15C and F/A-18C, respectively. The total planned procurement for the basic core test set is 121 for the Air Force and 188 for the Navy. Test program sets for other aircraft are to be subsequently developed and procured.

Although Behind Schedule and Over Cost Estimate, New Tester Is Performing Effectively

Schedule slippage and cost growth have occurred in the tester program. However, the Air Force’s and Navy’s use of the new tester indicates that performance goals are being met and that a useful capability is likely to be achieved. The development schedule for the new tester has slipped about 2 years from the original plan because the difficulty in designing the system was underestimated. This delayed the production decision for the tester until April 2001. Prior to the production decision, the services completed developmental testing but did not undertake operational testing of the tester as planned. Operational testing was deferred because the lead test agency—the Air Force Operational Test and Evaluation Command—was concerned that the tester contractor was still making design changes to the system and that operational testing should utilize articles that represent the final design to be produced. Consequently, additional developmental testing using available prototypes was substituted for operational testing to provide test data to support the production decision. If operational testing of the tester’s final design identifies a need for further design changes, the testers procured would require retrofit.
The cost under the initial development contract for the basic core test set and the F-15C and F/A-18C test program sets was originally estimated to be about $12 million. As of January 2001, the cost of the contract had increased to $28.9 million. Ultimately, the program’s total cost will be a function of future decisions regarding the extent to which other aircraft and electronic combat systems, such as the radar warning receivers and radar jammers on the Air Force’s F-15E and the Navy’s F/A-18E/F, will use the new tester. These aircraft and their electronic combat systems will require the development and procurement of customized test program sets, as well as additional quantities of the basic core test set. According to the Air Force, the tester has performed effectively in testing. Developmental testing of the basic test set and the F-15C test program set was performed at Eglin Air Force Base from March through October 2000. According to the Air Force’s developmental test organization, the tester met or exceeded expectations for all test objectives. For the key performance parameter of demonstrating at least 90-percent success in fault detection, the tester detected and isolated all faults. The testing disclosed that 29 of 31 F-15Cs actually had one or more faults in their electronic combat systems. The faults detected ranged from the identification of parts needing to be replaced inside the electronic combat systems (so-called Group B) to the wiring, antennae, and control units that connect the systems to the aircraft (so-called Group A). According to program officials, no previous tester has been able to test the Group A equipment as well as the Group B systems. Moreover, the new tester provides an ability to augment an electronic combat system’s internal system check (referred to as Built in Test, or BIT).
In the past, if a system’s BIT indicated a fault, maintenance technicians were forced to remove the system components from an aircraft to retest them in the maintenance shop—a time-consuming and cumbersome process. The new tester provides a check against the BIT without the system’s removal from the aircraft. The Air Force used the tester to test operational 33rd Fighter Wing F-15C aircraft at Eglin about to be deployed to Operation Southern Watch in Iraq. After successful testing at the 33rd, it was then used to test F-15C aircraft at the 1st Fighter Wing at Langley Air Force Base. These aircraft are regularly deployed to Operation Northern Watch in Iraq. At Langley, 12 of 13 F-15Cs thought to be fully mission capable actually had one or more faults in their electronic combat systems. The potential effects of some of these faults could have been that these aircraft would have entered combat with partially functioning protective systems; some of these faults would have left the systems nonfunctional. Navy test officials advised us that the tester also performed well with their F/A-18C aircraft, identifying faults that the Navy’s current test equipment had been unable to identify. The Navy performed developmental testing of the basic test set and the F/A-18C test program set at Naval Air Warfare Center, Weapons Division, Point Mugu, California; Miramar Marine Corps Air Station, California; Lemoore Naval Air Station, California; and Oceana Naval Air Station, Virginia, from September 1999 through January 2001. The Navy tested 16 aircraft in California, 14 of which had faults identified by the tester. Subsequently, 10 F/A-18C aircraft were tested at Oceana, and all were found to have unknown faults in their electronic combat systems. Each of the 10 aircraft had at least 3 faults disclosed by the new tester, and 1 aircraft had 12 faults. 
Because the tester works so well at disclosing faults, the services plan to expand its use to other electronic combat systems on other fighter aircraft. The Air Force intends to use the tester also on its F-16s and the Navy, on its F-14s. Potential Implications From Widespread Use of New Tester Because the tester has a much greater ability to identify electronic combat system problems, it can find faults that the currently used test equipment cannot. The disclosure of these problems could have significant implications for readiness levels, logistics, and maintenance. Additionally, the failure to address problems with the electronic combat systems could encourage pilots to rely less on their electronic combat systems and more on other specialized aircraft designed to suppress enemy air defenses, such as the EA-6B. Readiness Issues The test results for the F-15C and F/A-18C have implications for readiness levels not only for those types of aircraft, but also for other aircraft using either the same or similar electronic combat systems (such as the F-15E and F/A-18E). Readiness levels are lower than the services believed: F-15C and F/A-18C aircraft that the services had reasonably reported as fully mission capable actually have electronic combat systems with previously unknown faults. Our direct observation of the new tester in use at Eglin confirmed this. We observed four aircraft being tested for an upcoming Southern Watch deployment. All four aircraft, which were believed to be fully mission capable, were found to have unknown faults that had to be repaired. The Air Force has a criterion that its F-15 fighter wings seek to maintain an 81-percent fully mission capable rate. 
However, combining the statistics for using the new tester in 2000 at the Eglin wing (29 of 31 aircraft had unknown faults) and the Langley wing (12 of 13 had unknown faults), the Air Force found that 41 of 44 F-15Cs tested were not fully mission capable. Likewise, since all 10 of the Navy’s F/A-18C aircraft tested at Oceana Naval Air Station with the new tester had three or more unknown faults, the Navy also could face unacceptably low readiness levels. Logistics Issues Once the services introduce the new tester for widespread usage, they are likely to find, as they did during testing, that the reliability of their electronic combat systems is much lower than previously thought. Consequently, more logistics support in the form of additional spare parts to fix previously undiagnosed faults will be required in the future. According to Air Force officials, on the basis of the new tester’s use on the F-15C aircraft at Eglin and Langley, the Air Force will experience a requirement for more frequent repairs and an added logistics problem. At Warner Robins Air Logistics Center, we were advised that spare parts shortages already exist for F-15 electronic combat systems. Maintenance officials at both Eglin and Langley stated that these shortages cause them to use cannibalization—i.e., removing a working part from one aircraft to install it on another aircraft—to meet the wing’s flying schedule. For example, while we observed the new tester being used on operational aircraft at Eglin, several cannibalizations of electronic combat system parts were required before the testing could be completed. Maintenance officials told us that because spare parts were in limited supply, it was common for aircraft being tested to use cannibalized parts from another aircraft in order to be repaired. 
Although the scope of our review did not include an assessment of the impact of using the new tester on logistics for the Navy’s F/A-18C fleet, we believe that using the new tester could also reveal a significant future problem for F/A-18C operational deployments. Generally, even if the Navy does not have a spare parts shortage as serious as the Air Force’s, maintaining the readiness of deployed aircraft on carriers is more difficult because of the quantity limitations on spare parts storage aboard ship. A Navy maintenance person advised us that on his carrier’s recent deployment to Southern Watch, the spare parts for the electronic combat systems used on the F/A-18C were completely exhausted and maintenance personnel had to resort to cannibalization to maintain flight operations. This situation existed without the Navy’s having access to the new tester, which would likely identify even more parts needing to be replaced. Maintenance Burden Our review indicates that, in addition to the potential for heightened readiness and logistics concerns, the introduction of the new tester could increase the maintenance burden on the services because the new tester could identify many more repairs that have to be made. This could intensify existing pressures on maintenance personnel to resort to cannibalization. As we stated in our recent testimony for the Congress, making repairs via cannibalization requires at least twice the maintenance time as making repairs using new spare parts. Moreover, if use of the new tester results in further increases to the maintenance burden, it could also affect the Air Force’s problem in retaining skilled technicians. Reinforcing this, both Eglin and Langley maintenance officials advised us that there are already shortages of trained maintenance personnel at the 33rd and 1st wings. 
In fact, the Air Force Posture Statement 2000 cites low retention of maintenance technicians as one of four factors resulting in the 99-percent drop in the mission-capable rates of Air Force aircraft since 1994. Furthermore, given the test results associated with the use of the new tester on the F/A-18C, the Navy could expect a significant increase in its maintenance burden. However, we were not made aware of any particular retention problem associated with the maintenance burden being experienced by Navy personnel during this review. Reduced Electronic Combat Readiness Could Increase the Need for Suppression of Enemy Air Defenses The new tester’s use could cause pilots of Air Force and Navy combat aircraft to be reluctant to rely solely on their electronic combat systems for self-protection from enemy air defenses. Recognizing reduced readiness and reliability of their self-protection systems, pilots could look for greater support from other specialized aircraft designed to suppress enemy air defenses, such as the EA-6B. We recently reported that current suppression capabilities are not adequate. To the extent that the new tester discloses reliability problems with existing electronic combat self- protection systems, the need to improve suppression capabilities would only be that much greater. Using New Tester on Other Aircraft Types Could Reveal Similar Problems Given the experience from using the new tester on the F-15C and F/A-18C, it is likely that using the new tester will find a number of undisclosed faults in electronic combat systems. Many of the electronic combat systems on current aircraft are older systems that are already experiencing obsolescence problems, such as difficulty in acquiring spares due to vendors that go out of business or are no longer producing old technology equipment (referred to as “vanishing vendors”). 
The Air Force’s special test program, called Combat Shield, is used periodically to test a variety of types of operational aircraft for readiness. Typically, even without using the new tester, testing via Combat Shield has found that some aircraft in every wing tested have faults in their electronic combat systems, regardless of the aircraft type. For example, Combat Shield found undisclosed faults when testing was conducted at wings equipped with the F-16. In fact, Air Force and Navy officials have already identified emerging problems regarding readiness, logistics, and maintenance for other electronic combat systems. This applies to systems either internally carried or externally mounted on an aircraft. For example, the ALQ-131 jammer system, externally carried by several Air Force aircraft, is projected to have a mission capable rate of 30 to 40 percent by 2006 because of obsolescence and the lack of spares. Furthermore, according to Air Force officials at Warner Robins Air Logistics Center, funding priorities have constrained both spare parts acquisition and sustaining the engineering needed to address the obsolescent parts issue. Conclusion The armed services have had problems for years with their ability to adequately test their electronic combat systems. The success of the new tester in providing improved test capability is a positive development. Because the tester has identified many more faults in the F-15C and F/A-18C electronic combat systems than the current test equipment was identifying, existing readiness, logistics, and maintenance problems with such systems could worsen. However, pilots would at least have greater knowledge about the readiness and reliability of their self-protection systems and their need for support from specialized aircraft designed to suppress enemy air defenses. On balance, we believe it makes sense for the Air Force and Navy to consider using the new test equipment on their nonfighter aircraft. 
Recommendation for Executive Action Because the new tester’s use provides the ability to identify previously unknown faults in electronic combat systems, we recommend that the Secretary of Defense direct the Air Force and the Navy to consider expanding the new tester’s use beyond fighter aircraft to other types of aircraft. Agency Comments In written comments on a draft of this report, the Department of Defense agreed with our finding that the new tester provides a much better capability to assess electronic combat systems than the services’ existing testers. It also agreed that once the services introduce the new tester for use on a widespread basis, they are likely to find that the reliability of the electronic combat systems is lower than previously thought. Consequently, more logistics support may be required in the future, and the maintenance burden may increase. The Department concurred with our recommendation. Scope and Methodology We reviewed the results of the Joint Service Electronic Combat Systems Tester development testing and determined program status through discussions with program office officials and a review of appropriate documentation. We discussed the status of the Air Force’s aircraft electronic combat systems with Air Combat Command officials responsible for these systems on all Air Force operational aircraft. We held discussions regarding logistics support and maintenance with officials at Warner Robbins Air Logistics Center responsible for Air Force electronic combat systems. We held similar discussions with officials at Jacksonville Naval Air Station regarding Navy aircraft electronic combat systems. We also observed and discussed the testing of operational F-15C aircraft with officials at the 33rd Wing at Eglin Air Force Base and discussed the results of similar tests with officials of the 1st Wing at Langley Air Force Base. These two Wings have about 40 percent of the Air Force’s F-15C aircraft. 
We also relied on our previous reviews of electronic warfare for background information on the existing logistics and maintenance problems with electronic combat systems. We conducted our review from August 2000 to August 2001 in accordance with generally accepted government auditing standards. We are sending copies of this letter to the Secretaries of the Air Force and Navy; to interested congressional committees; and to the Director, Office of Management and Budget. If you have any questions, please contact me on (202) 512-4841. Major contributors to this report were Michael Aiken, Terry Parker, and Charles Ward. Appendix I: Comments From the Department of Defense
Background OBO was instituted on May 15, 2001, replacing State’s Office of Foreign Buildings Operations. OBO manages the construction of new facilities that can satisfy the State Department’s stringent security standards and provide U.S. diplomatic personnel secure, safe, and functional office and residential environments. Along with the input and support of other State Department bureaus, foreign affairs agencies, and Congress, OBO sets worldwide priorities for the design, construction, acquisition, maintenance, use, and sale of real properties and the use of sales proceeds. OBO is composed of five main offices: Planning and Development, Real Estate and Property Management, Project Execution, Operations and Maintenance, and Resource Management. The construction program is located primarily in the Project Execution Office, specifically in the Construction and Commissioning Division within that office. In response to terrorist threats, the State Department in 1986 began an embassy construction program, known as the Inman program, to protect U.S. personnel and facilities. In 1991, we reported that State was unable to complete as many projects as originally planned due to systemic weaknesses in program management, as well as subsequent funding limitations. This construction program suffered from delays and cost increases due to, among other things, poor program planning, difficulties acquiring sites, changes in security requirements, and inadequate contractor performance. Following the demise of the Inman program in the early 1990s, the State Department initiated very few new construction projects until the 1998 embassy bombings in Africa, which prompted additional funding for security upgrades and the construction of secure embassies and consulates. Through State’s security upgrade program, the department has done much since the 1998 bombings to upgrade physical security at existing overseas posts without building new embassy or consulate compounds. 
These security upgrades have included constructing perimeter walls, anti-ram barriers, and access control facilities at many posts. However, even with these improvements, most office facilities do not meet security standards that State developed to protect overseas diplomatic office facilities from terrorist attacks and other dangers. As of December 2002, the primary office building at 232 posts lacked desired security because it did not meet one or more of State’s five key security standards of (1) 100-foot setback between office buildings and uncontrolled areas, (2) perimeter walls and/or fencing, (3) anti-ram barriers, (4) blast-resistant construction techniques and materials, and (5) controlled access at the perimeter of the compound. Only 12 posts had a primary building that met all five standards. As a result, thousands of U.S. and foreign national employees may be vulnerable to terrorist attacks. After the 1998 attacks, State identified facilities at about 185 posts that would need to be replaced to meet security standards. OBO plans to construct the replacement facilities on embassy and consulate compounds that will contain the main office building, all support buildings and, where necessary, a building for USAID. While State continues to fund some security upgrades at embassies and consulates, it has shifted its resources toward those capital projects that would replace existing facilities with new, secure diplomatic compounds or substantially retrofit existing, newly acquired, or leased buildings. As shown in figure 1, funding for State’s capital projects has significantly increased since fiscal year 1998. State received about $2.7 billion for its new construction program from fiscal year 1999 through fiscal year 2003 and requested $890 million for fiscal year 2004. OBO in June 2003 estimated that beginning in fiscal year 2005 it would cost about $17.5 billion to replace the remaining vulnerable posts. 
As of September 30, 2003, State had started construction of 22 projects to replace embassies and consulates that are at risk of terrorist or other attacks. Toward the end of fiscal year 2003, State awarded contracts for an additional 7 projects. The timeline for funding and completing the remaining projects depends on the amount of funding State receives annually for the construction program. At the proposed fiscal year 2004 rate of funding, it will take more than 20 years to fully fund and build replacement facilities. OBO Mechanisms to More Effectively Manage the Embassy Construction Program Recognizing past problems managing State’s overseas construction program, OBO in 2001 began to institute organizational and management reforms in its structure and operations. OBO intended that these reforms— which are designed to cut costs, put in place standard designs and review processes, and reduce the construction period for new embassies and consulates—would bring rational and efficient management to OBO by using a results-based approach to program management. 
OBO has instituted the following seven key mechanisms over the past 3 years to better manage its expanded embassy construction program:
- the Long-Range Overseas Buildings Plan, which prioritizes and summarizes capital projects over 6 years;
- monthly project reviews at headquarters, where senior management officials review ongoing projects to identify and resolve current or potential issues at all stages of the project;
- an Industry Advisory Panel, which advises OBO on industry best practices in the construction sector;
- efforts to broaden the contractor pool through events such as Industry Day, where interested contractors are invited to learn about OBO’s construction program;
- ongoing work to standardize and streamline the planning, design, and construction processes, including the initiation of design-build contract delivery and a standard embassy design for most projects;
- additional training for OBO headquarters and field staff; and
- advance identification and acquisition of sites.
Development of the Long-Range Overseas Buildings Plan To help manage State’s expanding large-scale construction program, OBO developed the Long-Range Overseas Buildings Plan, first published in July 2001 and most recently updated in March 2003. The latest version of the plan prioritizes proposed capital projects over 6 years, from fiscal years 2003 through 2008, based on input from State’s Bureau of Diplomatic Security, regional bureaus, and agencies with overseas presence. It describes and provides a justification for the foreign affairs community’s global and regional capital project requirements. According to OBO, it also provides the basis for proceeding in a logical and focused fashion to improve the security, safety, and functionality of facilities overseas. Each year the plan is updated to capture changes resulting from budget actions and requirements of posts overseas. 
According to the latest version of the plan, State plans to start replacing facilities at 75 vulnerable posts from fiscal year 2003 to fiscal year 2008 at an estimated cost of $7.4 billion. As described in the March 2003 plan and by OBO officials, State followed a multistep process in developing its phased site acquisition, design, and construction schedule for its security capital projects: The Bureau of Diplomatic Security completed its annual security evaluation of all the U.S. overseas posts, taking into account many factors affecting a post’s overall security level. The evaluation listed vulnerable posts and ranked them in terms of security issues. Because the terrorist threat is global and because the buildings have fundamental security problems, Diplomatic Security and OBO officials believe that there are a great many posts that are very vulnerable and in need of replacement, and that the differences in vulnerability do not make posts at the lower end of the list substantially safer than those at the top of the list. By congressional mandate, these posts are listed and ranked in bands of 20, through a process discussed in the following paragraphs. Congress directed that State spend its security capital funds, which are funded within the Embassy Security, Construction and Maintenance account, on the top 80 posts only. Working with the security-prioritized list, each regional bureau annually ranked all posts within its region that were in the top 80 replacement list based on such factors as threat, survivability, staffing trends, regional interests, and functionality. OBO officials told us this effort resulted in a prioritized list for State’s security capital projects for each of the six regional bureaus, which responds to the global nature of the transnational terrorism threat. Each year, as new posts are added, these posts usually go to the end of a bureau’s priority list. 
Finally, OBO combined the prioritized lists from the different regions into one master list, which, as mentioned above, OBO updates annually. The first six posts on the list were the top ranked post from each region. Posts 7 through 12 on the list were the second-ranked posts from each region, and so on. With the help of its Planning and Real Estate Offices, OBO then determined if a site already existed to build a new facility and, if not, when new sites could actually be acquired. When necessary, OBO rescheduled the list based on the likely available capital security funding in each year covered in the Long-Range Overseas Buildings Plan, opportunities or problems in acquiring a site, and constraints on the ability of construction companies to work in a particular country at the planned time. This prioritized and scheduled listing of projects then becomes the security capital portion of the Long-Range Overseas Buildings Plan. State also requests funds for regular capital projects to replace posts not in the top 80 that have compelling operational or other requirements that must be addressed. The Long-Range Overseas Buildings Plan includes descriptions of these regular capital projects. OBO’s development of the plan was a major advancement in ensuring the embassy construction program would be better managed. According to the OBO director, while the current plan is not a budget document, it is an important tool that provides information for the budget decision-making process. It presents OBO’s best understanding of the U.S. government’s most urgent diplomatic and consular facility requirements through 2008 and provides all stakeholders, especially other U.S. government agencies that rely on State for their overseas facilities, a road map of where the department is headed. 
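The interleaving process described above—the top-ranked post from each regional bureau first, then each bureau's second-ranked post, and so on—amounts to a round-robin merge of the regional priority lists. The following Python sketch illustrates the idea; the function and post names are purely illustrative assumptions, not OBO's actual data or tooling:

```python
from itertools import zip_longest

def build_master_list(regional_lists):
    """Round-robin merge of per-region priority lists.

    Takes each region's rank-1 post first, then each region's rank-2 post,
    and so on; regions with shorter lists simply drop out of later rounds.
    """
    master = []
    for rank_group in zip_longest(*regional_lists):
        # zip_longest pads exhausted regions with None; skip those slots.
        master.extend(post for post in rank_group if post is not None)
    return master

# Hypothetical regional priority lists (names are made up for illustration).
regions = [
    ["AF-post-1", "AF-post-2"],    # Africa bureau priorities
    ["EUR-post-1", "EUR-post-2"],  # Europe bureau priorities
    ["EAP-post-1"],                # East Asia and Pacific bureau priorities
]
print(build_master_list(regions))
# ['AF-post-1', 'EUR-post-1', 'EAP-post-1', 'AF-post-2', 'EUR-post-2']
```

In practice, as the report notes, OBO then rescheduled entries on the merged list based on available funding, site availability, and construction constraints, so the final plan is not a mechanical interleave.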
Monthly Project Reviews at Headquarters As part of OBO’s ongoing efforts to improve accountability and performance, OBO in June 2001 began holding monthly project performance reviews at headquarters for senior OBO officials and project executives. At these meetings, senior managers convene to discuss developments in their areas of responsibility and their plan of action to address current or potential issues. According to OBO documents and our observations of five monthly meetings, the monthly project performance reviews covered the following topics:
- real estate and property management, including acquisitions;
- project planning and development, including project evaluation;
- project execution, including the status of both construction projects by region and security upgrade projects; interiors and furnishings; design and engineering issues, such as design management, standard embassy designs, value engineering, and energy and seismic concerns; and security management of ongoing projects;
- information management; and
- other management concerns, including management support, human resources and financial management, and operations and maintenance.
At these monthly meetings, senior OBO staff present information on internal and external operations. For instance, in reviewing internal operations, the Project Execution Office presents information about personnel vacancies, number of training events attended per month, performance indicators, and travel budget. The Project Execution Office’s Construction and Commissioning Division reports on construction-related issues, including the number of outstanding claims, contract modifications, and the status of each construction project. For each construction project, the division notes the completion of major milestones, such as congressional notification, site acquisition, contract award, and notice to proceed. It also assigns a color-coded rating—green, yellow, or red—to each project. 
This rating reflects the project executives’ assessment of current or future issues that could affect either the project’s cost or scheduled completion date, with green indicating the project is generally on track and red indicating a major issue. Establishment of the Industry Advisory Panel In February 2002, OBO held the first quarterly meeting of the Industry Advisory Panel, whose function is to keep OBO apprised of the private sector’s best practices in the construction and maintenance of facilities. The panel consists of volunteer industry representatives who meet quarterly to discuss issues related to OBO’s construction program and advise OBO management on the industry’s views on the most efficient processes, optimal solutions, and best new technologies. OBO prepares new topics of discussion for each meeting, and the experts respond based on their experience dealing with similar issues. At the meeting held on May 20, 2003, we observed that the panel and senior OBO officials discussed the following:
- how to more effectively apply Value Engineering, a method that looks for the best value to the government at each phase of the design process;
- to what extent private U.S. companies build to U.S. standards overseas and how much they rely on local materials and equipment;
- the best approach for estimating project costs and budgets; and
- the criteria used to determine whether direct-hire staff or specialized contractors should fill an organization’s gap in required skills.
OBO takes minutes of each Industry Advisory Panel meeting and posts them on its Web site, where they are available to the public. According to OBO officials, the panel has been very active in providing invaluable strategic industry insights into a variety of issues, touching on the latest innovations in the commercial world and combining best practices, streamlined processes, and proven cost-effective methods. 
According to a recent General Services Administration survey of about 470 federal advisory groups, OBO's Industry Advisory Panel demonstrated superior results on the “people,” “process,” and “outcome” indices of the survey. Efforts to Broaden Contractor Pool OBO has expanded its efforts to increase competition for bids on its new embassy and consulate compound projects through outreach to potential contractors. For example, OBO has held two annual Industry Days where interested parties can attend presentations and information sessions about doing business with OBO. According to OBO, Industry Day 2002 attracted more than 350 representatives, with slightly more than half from small firms. Industry Day 2003 had about 450 participants. As a result of these efforts, OBO has increased the number of contractors prequalified to bid on OBO contracts from 5 to 14. OBO believes that increasing the number of prequalified contractors will likely increase the number of bids on a project—thus allowing OBO to select the best value for its money—and will be important to the expanding construction program. Standardizing and Streamlining the Design Process OBO has initiated two major efforts to standardize and streamline the design process for new embassy and consulate compounds. First, it developed a standard embassy design for three different sizes of compounds, with a standard design for a small, medium, or large main office building (see fig. 2). For each project, the contractor adapts the standard design to meet site- and post-specific requirements. OBO believes that standard designs will give it the ability to contract for shortened design and construction periods, control costs through standardization, and assist with State’s initiative to rightsize its overseas posts. Second, OBO uses design-build as a contract delivery method, instead of design-bid-build, for most of its new projects. 
According to the latest Long-Range Overseas Buildings Plan, OBO plans to award design-build contracts for 56 compound projects between fiscal years 2003 and 2008. State’s design-build process saves time by (1) avoiding the time needed to award separate design and construction contracts and (2) allowing construction to proceed before design is completed. Under this process, a compound could be one-third of the way through construction before the final design is completed. In Sofia, Bulgaria, for instance, the project was 30 percent complete before the contractor delivered the final design package. To minimize any cost and schedule risks associated with design-build contracts, building requirements must be fully and precisely identified early in the process. Training According to OBO officials, OBO has instituted additional training requirements for all OBO staff involved in the contracting process and for all field staff. To enhance their knowledge of contracting, headquarters and field staff take courses in areas such as acquisition procedures, principles of contract pricing, and government contract law. Staff can take classes offered by the Defense Acquisition University and other private institutions to meet their training requirements. Staff in the Construction and Commissioning Division enroll in additional courses that enhance their skills in such areas as computerized project planning, leadership and management, cost control, language training, and security and safety. These courses are designed to increase their effectiveness as project supervisors. During our visits to two new embassy construction sites in Sofia, Bulgaria, and Yerevan, Armenia, we observed that the OBO project directors and the contract project managers closely managed and supervised the projects. Project directors maintained oversight with the help of experienced and knowledgeable American and Foreign Service National staff. 
Project directors made daily visits to the construction site to observe worker performance and held weekly progress meetings with OBO and contractor staff. During the weekly meetings, OBO staff asked about the activity schedule, identified potential problems, and came to a consensus on solutions. We observed the OBO project management team in Sofia, which consists of seven engineers and assistants, interacting closely with the contractor staff to identify possible delays and oversee construction. For instance, the project director questioned the pace at which the contractor was laying concrete slab on one of the floors and was able to convince the contractor to pour the slab a day or two ahead of schedule. Site Acquisition To address potential issues in site acquisition, OBO has used its Long-Range Overseas Buildings Plan to guide its contingency planning and give it the flexibility to continue the overall program if an individual site is not available in the planned year. Rather than hold up the appropriated funds for a given project, State will, with congressional support, shift funding to another project where a site is available. For example, OBO deferred the planned compound in Asmara, Eritrea, from fiscal year 2004 to fiscal year 2005 due to difficulties obtaining a site. The new embassy compound in Lome, Togo, which had been planned for fiscal year 2004, took the place of Asmara. For projects planned for construction from fiscal years 2005 through 2007, State has a supply of seven U.S. government-owned sites and five sites under contract in its regular and security capital programs. These 12 sites will offer some flexibility to State as it moves forward with its Long-Range Overseas Buildings Plan. OBO officials told us that they plan to continue acquiring sites ahead of time to provide the program with this type of scheduling flexibility over the foreseeable future.
These management initiatives show promise for improving the cost and schedule performance of embassy and consulate construction projects. However, as discussed in the following section, it is still too early in the new program’s implementation to assess their effectiveness in achieving these goals. Status of and Challenges Facing the Construction Program As of September 30, 2003, State had started construction of 22 projects to replace embassies and consulates at risk of terrorist or other attacks. Eight of the 22 projects were started before OBO began to institute its recent management reforms, and the remaining 14 were started since then. None of the projects started after the reforms were implemented has yet been completed; only one is more than 50 percent complete. Over half of the 22 projects have faced challenges that have led or, if not overcome, could lead to extensions to or cost increases in the construction contract. OBO reports attribute project delays to such factors as changes in project design and security requirements, difficulties hiring appropriate labor, differing site conditions, and civil unrest. The U.S. government also has had difficulty coordinating funding for projects that include buildings for USAID, which could lead to increased costs and security risks. From fiscal years 1999 through 2003, State received approximately $2.7 billion for its new embassy construction program. As of September 30, 2003, State was still in the initial phase of the overall program, having awarded the contracts for 11 of its 22 projects in fiscal year 2002. In addition, the contracts for another 7 projects were awarded in late fiscal year 2003 (see figs. 3 and 4). Of the seven completed projects, six were new embassy compounds and one was a newly acquired building that was retrofitted to meet the required security standards. 
Status of Projects Awarded before OBO Instituted Management Reforms As shown in table 1, seven of the eight projects that started before OBO’s management reforms were implemented have been completed. All eight projects experienced cost increases in the construction contract, which typically accounts for 60 to 70 percent of the total project budget; however, none of the seven completed projects exceeded its approved budget, and the budget for one was lower than originally planned. In addition, six projects were extended 30 days or more beyond the project completion date. The primary reasons for the delays included contract modifications and security-related disruptions. OBO has attempted to manage project resources and keep its projects within their approved budgets by using funds from the projects’ contingency line items or, in some cases, a management reserve line item. The use of contingency and management reserve line items is an industry practice. In Istanbul, for instance, the cost of the construction contract increased by about $8.5 million. OBO covered this cost increase by using funds from the project’s contingency line item, which OBO includes in project budgets for this purpose. In some cases where OBO has awarded contracts at a much lower value than the original independent government estimate, it has established a management reserve to hold these extra funds to insure against potential cost increases later in the construction. The OBO director must approve the use of funds for that project from the management reserve. We did not review how OBO established its project budgets, how it determined the contingency and management reserve line item amounts, or how it used the funds from those line items. Further, OBO has also reevaluated its budget plans for ongoing and planned projects and has identified significant savings to be applied either to a project whose contract bid had come in above the approved budget or to new projects. 
For example, in the March 2003 project performance review, OBO identified anticipated savings of about $63.6 million for six projects. OBO used these funds to sign a contract for a new construction project in Freetown, Sierra Leone, during fiscal year 2003. In the fiscal year 2002 appropriations conference report, Congress commended State for identifying such budget savings and urged the department to use them to significantly exceed the level of activity described in the budget request. OBO officials told us that the amount of such savings would decrease over time as the bureau improves its cost estimates. Status of and Challenges Encountered by Projects Awarded since OBO Instituted Management Reforms From fiscal year 2001, when OBO began to institute its management reforms, through the end of fiscal year 2003, State had started construction of 14 projects to replace vulnerable embassies and consulates. As shown in table 2, as of July 2003, OBO expected 13 of these 14 projects to come in at or under their approved budgets and 1 project—Conakry, Guinea—to come in 6 percent over the approved budget. Six of these projects have had increases in their construction contract costs ranging from 2 percent to 11 percent above their original contract value. In addition, the project in Sao Paulo, Brazil, added 48 contract modification days to its original project completion date. This project, a major renovation initiated at the end of August 2002, missed its scheduled completion date of August 28, 2003, and was completed on October 15, 2003. Table 2 provides more information on challenges that have affected or may affect the cost and schedule of the projects that were initiated after OBO made reforms to its management practices. Once a contract has been awarded, any subsequent changes to the design of the building are likely to have cost and schedule implications. In State’s design-build process, design and construction sometimes occur simultaneously.
Any changes to the design can require changes in the construction schedule. A key component of the planning process for new embassy construction projects is the development of staffing projections. Staffing projections present the number of staff likely to work in the facility and the type of work they will perform. These are the two primary drivers of the size and cost of new facilities. Changes to staffing projections after Congress has appropriated money for a construction project may result in redesign and could lead to lengthy delays and additional costs, according to an OBO official. There is little room for flexibility after the budget is submitted given budgetary and construction time frames. Officials from Diplomatic Security, the State Department bureau that initiates changes for security reasons, make every effort to have security requirements finalized before a contract is awarded, but changes in technologies or new analyses sometimes make design modifications necessary. Although the bureau does not insist that previously awarded contracts be modified to reflect these kinds of changes, OBO decides whether modifying the contract is the most prudent course for security reasons. At both embassy construction projects that we visited, State added security or other requirements that increased costs and led to an extension in the contract completion date. At the U.S. embassy in Sofia, State added security requirements late in the design phase that increased the cost of the $50 million project by about $2 million and led to a 2-month extension to the original contract completion date. As in Sofia, Yerevan has had to accommodate recent security modifications, among them the addition of a generator and changes to the mail screening room. Finding Appropriate U.S. and Local Labor Contractors on at least two projects have had difficulty finding appropriate workers at the right time.
For example, one project—a major retrofit of existing buildings in Sao Paulo, Brazil—was completed in about 14 months rather than 12 months due in part to a lack of skilled labor. In March 2003, OBO reported delays in executing this project because the contractor had not yet hired critical craftsmen, particularly U.S. and Brazilian certified welders. At the project we visited in Yerevan, which OBO considers to be on track, the contractor had not hired enough local laborers because of a shortage of qualified construction workers in Armenia. OBO officials said that the contractor hired skilled workers from neighboring countries and made up the lost time on the project. In addition, each project requires U.S. supervisors and laborers with security clearances to work in certain areas. However, contractor representatives told us that as State’s overall construction program accelerates and the demand for U.S. workers with security clearances escalates, this form of labor could command a premium. Some contractor officials stated that there could be a shortage of these workers in the near term, which could result in delays that could potentially affect the duration and cost of the overall program. Others said the workers will be available but will demand a higher price for their labor, which would increase contract costs. Differing Site Conditions In four ongoing projects where OBO had raised concerns about the projects’ progress, contractors had reported site conditions that differed from what they had originally anticipated. According to OBO documents, this difference could affect the projects’ cost or schedule because it could require the contractor to construct a different type of foundation for the buildings. 
At the construction site we visited in Yerevan, a project OBO considered on track as of July 2003, the contractor determined that it had not thoroughly analyzed the soil conditions at the site and would need to blast away about 9 feet of rock from the site to make room for the foundation. This blasting process caused about a 6-week delay, time that the contractor made up as the project progressed. Political and Civil Unrest or Other Unforeseen Events Many ongoing and planned projects are located in developing countries with the potential for political and civil unrest and thus pose unpredictable challenges to State in its embassy construction work. For example, civil unrest delayed the start of the project in Abidjan, Cote d’Ivoire, in 2002, leading to delays in the project schedule and potential cost increases. Further, political upheaval in Zimbabwe forced OBO to postpone construction of the new embassy in Harare from fiscal year 2002 until at least fiscal year 2005, according to OBO’s most recent Long-Range Overseas Buildings Plan. On the other hand, State decided to replace the embassy in Kabul, Afghanistan, and brought the construction project to the front of the 2002 schedule following the U.S. and allied military action there that responded to the September 11 terrorist attacks. Site Acquisition Although OBO has developed a flexible approach to deal with problems in acquiring sites for new embassy compounds, the issue of site acquisition could become more important as OBO increases the number of projects it undertakes each year. In the short term, the shifting of projects across fiscal years, as discussed earlier, keeps the overall program on track; however, in the long term, the number of difficult site acquisitions per year may increase. If the less complicated site acquisitions continue to be pulled to the front of the line, and more complicated ones pushed back, State may have increasing difficulty obtaining sites for its annual program.
Coordinating Funding for Construction of Compounds with USAID Buildings As mentioned earlier in this report, OBO attempts to build embassy and consulate compounds that contain the main office building, all support buildings, and, where necessary, a building for USAID. In several cases, however, OBO has started to build compounds without the proposed USAID building because funding for the USAID building was not available. In compounds where USAID is likely to require desk space for more than 50 employees, USAID and OBO informally agreed that USAID would secure funding in its appropriations for a separate building on the compound. If USAID does not secure funding for its building at the same time as the new embassy compound, the compound is constructed as scheduled, but the USAID building may be added later in the construction process, built after the rest of the compound is completed, or not built at all. If a USAID building is constructed after the rest of the compound, the overall costs to the government would likely be higher because the contractor must remobilize the construction staff. The delay could also pose a security risk and inconvenience to post operations, as construction personnel and equipment would be coming into and out of the site on a regular basis. OBO officials told us that five projects were awaiting funding for the construction of the proposed USAID building on the compounds. At the U.S. embassy in Yerevan, funding for the compound’s USAID building was not available when the compound construction contract was awarded. Therefore, USAID staff will not move to the new site concurrent with the rest of the embassy’s staff. Rather, USAID may be forced to remain at the current, insecure facility at an additional cost until completion of its annex unless alternative arrangements can be made.
The Ambassador told us that USAID was one of the most important missions at the embassy and that not having it colocated on the compound would create a major inconvenience to the embassy’s operations and decrease mission effectiveness. Figure 5 shows the central location of the proposed USAID building within the new U.S. embassy compound in Yerevan. As of September 2003, one completed project and five ongoing construction projects—including Yerevan—had to delay or postpone building the USAID annex due to a lack of USAID funding at the start of construction for the rest of the compound. Other locations included the recently completed project at Nairobi, Kenya; as well as the ongoing projects in Tbilisi, Georgia; Conakry, Guinea; Abuja, Nigeria; and Phnom Penh, Cambodia. In addition, according to an OBO official, two projects that will receive security capital funding this year—Bamako, Mali, and Kingston, Jamaica—may not have funding for the planned USAID buildings at the time of construction, although funding may become available sometime during construction. The U.S. government has had mixed success in dealing with this problem of coordinating funding. For example, for the new compound in Nairobi—the location of one of the 1998 embassy bombings—State awarded a construction contract for the USAID building in September 2003, 7 months after the rest of the compound had been completed. In another case, Dar Es Salaam, funding became available in time for OBO to modify the construction contract and complete the USAID building at the same time as the rest of the compound. We plan to do additional work in the near future on the issue of coordinating USAID funding with funding for new embassy and consulate compounds. Conclusion Providing secure and safe office facilities at U.S. embassies and consulates is a critical task that will require sustained funding and management attention over many years. 
To sustain support for this program, the State Department must demonstrate that it is exerting effective management, resulting in projects that are on time and within approved budgets. We believe that State has put in place a number of mechanisms that together represent a positive management approach with the potential to achieve favorable program results. However, it is too early to assess whether these new mechanisms will ensure that State can consistently achieve cost and schedule targets on individual construction projects over the course of the program. Agency Comments and Our Evaluation The Department of State provided written comments on a draft of this report (see app. III). In the comments, State said that the report is a fair and accurate representation overall of the department’s overseas construction process and provided additional information on (1) how State prioritizes and plans for its construction projects, (2) the problems in funding USAID building projects, and (3) other capital construction projects being implemented by OBO. We revised the text of the report to include information on how Diplomatic Security and OBO view the relative vulnerability of facilities at overseas posts. State also provided technical comments, which we incorporated in the report where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies of this report to other interested members of Congress. We will also provide copies of this report to the Secretary of State and the Director of the Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions about this report, please call me at (202) 512-4128.
Another contact and staff acknowledgments are listed in appendix IV. Scope and Methodology To determine whether the Bureau of Overseas Buildings Operations (OBO) has mechanisms in place to more effectively manage State’s construction program to replace vulnerable embassies and consulates, we (1) reviewed the report of the Overseas Presence Advisory Panel and earlier GAO reports that outlined problems in embassy security and State’s embassy construction program and (2) interviewed OBO and contractor officials about specific steps OBO has taken to improve program management, including the usefulness of and rationale behind both the standard embassy design for new embassy and consulate compounds and the design-build contract delivery method. We also attended quarterly meetings of the Industry Advisory Panel where industry representatives provided advice and information on industry best practices to senior OBO management officials, as well as monthly project performance reviews where senior OBO officials addressed issues related to embassy construction projects. Further, we visited two field locations—in Sofia, Bulgaria, and Yerevan, Armenia—where we observed the level of management and supervision at the new embassy construction sites and the contractor’s performance on the projects. To determine the status of the overall construction program, as well as its current and potential challenges, we reviewed capital projects—whether a completely new embassy or consulate compound, a new building, or a major retrofit of an existing building—that would bring the post up to current security standards. Table 3 provides the list of projects included in this review: 7 completed projects and 15 ongoing projects whose contracts were awarded from fiscal years 1999 through 2002. We excluded the Dili, East Timor, project from the scope of our review because it was an interim office building. 
Table 4 shows the seven projects whose contracts were awarded in late fiscal year 2003 that are outside the scope of our review. This table does not include the recently started projects in Athens, Moscow, or Beijing because OBO is utilizing the design-bid-build process for these three projects and has yet to award their construction contracts. We also reviewed the State Department’s Long-Range Overseas Buildings Plan, monthly project performance documents, contract modifications, and other OBO documents. We interviewed key State Department officials from OBO and Diplomatic Security and contractor officials currently working on new embassy construction projects. We visited the ongoing projects in Sofia and Yerevan to determine the types of problems that could affect cost and schedule and what OBO and the contractor are doing to overcome these problems. Contracts for the design and construction of these projects were awarded in September and August 2001, respectively. The contractor broke ground around September 2002. When we visited the sites in July 2003, the contractor was pouring concrete slabs for the floors. We did not verify data provided by OBO. We conducted our work between October 2002 and September 2003 in accordance with generally accepted government auditing standards. Information on Embassy Construction Projects’ Contractors and Building Size This appendix provides information on the contractors responsible for each of the 22 ongoing embassy or consulate construction projects. It also indicates which projects are using standard embassy design and the respective sizes of these projects. Table 5 is a list of contractors currently working on a new embassy or consulate construction project or compound renovation. Company locations are provided to show the geographic dispersion of the companies to which State awards its contracts. Table 6 is a list of the projects employing a standard embassy design and their size. 
Standard embassy designs were not used until fiscal year 2002. OBO plans to use the standard design for most future projects, unless the embassy involves a large degree of complexity or has special significance to the United States, such as Beijing. Comments from the Department of State The following are GAO’s comments on the Department of State letter dated October 27, 2003. GAO Comments 1. We relied primarily on information from the March 2003 Long-Range Overseas Buildings Plan and discussions with OBO officials in drafting this section of the report. We revised the text to include information on how Diplomatic Security and OBO officials view the relative vulnerability of facilities at overseas posts. 2. We plan to do additional work in the near future on the issue of the U.S. government’s efforts to coordinate USAID funding with funding for new embassy and consulate compounds. 3. Our work focused on the replacement of vulnerable embassies and consulates through construction projects that would bring the post up to current security standards. As a result, our report does not discuss these projects. GAO Contact and Staff Acknowledgments In addition to the individual named above, Janey Cohen, Jessica Lundberg, Judy McCloskey, Nanette Ryen, and Michael Simon made key contributions to this report.
Since the 1998 bombings of two U.S. embassies in Africa, the State Department has done much to improve physical security at overseas posts. However, most overseas diplomatic office facilities still do not meet the security standards State developed to protect these sites from terrorist attacks and other dangers. To correct this problem, State in 1999 embarked on an estimated $21 billion embassy construction program. The program's key objective is to provide secure, safe, and functional compounds for employees overseas--in most cases by building replacement facilities. In 2001, State's Bureau of Overseas Buildings Operations (OBO)--which manages the program--began instituting reforms in its structure and operations to meet the challenges of the embassy construction program. This report discusses (1) OBO's mechanisms for more effectively managing the embassy construction program and (2) the status of and challenges facing the program. We received comments from State, which said that the report is a fair and accurate representation overall of the Department's overseas construction process. OBO in 2001 began instituting organizational and management reforms designed to cut costs, put in place standard designs and review processes, and reduce the construction period for new embassies and consulates. 
OBO now has mechanisms to more effectively manage the embassy construction program, including (1) an annual Long-Range Overseas Buildings Plan to guide the planning and execution of the program over a 6-year period; (2) monthly project reviews at headquarters; (3) an Industry Advisory Panel for input on current best practices in the construction industry; (4) expanded outreach to contractors in an effort to increase the number of bidders; (5) ongoing work to standardize and streamline the planning, design, and construction processes, including initiation of design-build contract delivery and a standard embassy design for most projects; (6) additional training for OBO headquarters and field staff; and (7) advance identification and acquisition of sites. State's program to replace about 185 vulnerable embassies and consulates is in its early stages, but the pace of initiating and completing new construction projects has increased significantly over the past two fiscal years. As of September 30, 2003, State had started construction of 22 projects to replace facilities at risk of terrorist or other attacks. Overall, 16 projects have encountered challenges that have led or, if not overcome, could ultimately lead to extensions in the completion date or cost increases in the construction contract. According to OBO, project delays have occurred because of such factors as changes in project design and security requirements; difficulties hiring appropriate American and local labor with the necessary clearances and skills; differing site conditions; and unforeseen events such as civil unrest. In addition, the U.S. government has had problems coordinating funding for projects that include buildings for the U.S. Agency for International Development. 
None of the projects started since OBO instituted its reforms has been completed; thus GAO believes it is too early to assess the effectiveness of the reforms in ensuring that new embassy and consulate compounds are built within the approved project budget and on time.
Background Guardianship In general, state courts appoint a guardian for individuals when a judge or other court official determines that an individual lacks the capacity to make important decisions regarding his or her own life or property. Depending on the older adult’s needs and relevant state laws, a court may appoint a “guardian of the person” who is responsible for making all decisions for the older adult, or a “guardian of the estate”—or conservator—who only makes decisions regarding the older adult’s property. When state courts appoint guardians, older adults often forfeit some or all of their decision-making powers. Depending on the terms of the court’s guardianship appointment, older adults may no longer have the right to sign contracts, vote, marry or divorce, buy or sell real estate, decide where to live, or make decisions about their own health care. Courts can generally appoint different types of guardians including the following: Family guardians. According to the Center for Elders and the Courts, courts favor the appointment of a family member or friend, often called a family guardian. However, it may not always be possible to find family or friends to take on this responsibility. Professional guardians. A professional guardian may be hired for a fee to be paid by the older adult, and may serve more than one older adult at a time. Some states require that a professional guardian be certified. This requirement is described in additional detail later in this report. Public guardians. If an older adult is unable to find a capable family or friend and is unable to afford the fees and associated expenses of hiring a professional guardian, a public guardian—whose cost is funded by the state or local government—may be appointed. Elder Abuse Elder abuse is a complex phenomenon. Table 1 describes the types of elder abuse, according to the National Center on Elder Abuse. Each of these can affect older adults with guardians, as well as those without. 
The categories include physical, sexual, and emotional abuse, as well as financial exploitation, neglect, and abandonment, but it is not uncommon for an older adult who has been abused to experience more than one type of abuse simultaneously. The Extent of Elder Abuse by Guardians Is Unknown, and Available Information Varies by State and Locality, but Some Efforts Are Under Way to Gather More Data Courts Lack Comprehensive Data on Older Adults in Guardianships and Elder Abuse by Guardians, but Some Courts Have Limited Information The extent of elder abuse by guardians nationally is unknown due to limited data on the numbers of guardians serving older adults, older adults in guardianships, and cases of elder abuse by a guardian. While courts are responsible for guardianship appointment and monitoring activities, among other things, court officials from the six selected states that we spoke to were not able to provide exact numbers of guardians for older adults or of older adults with guardians in their states. Also, on the basis of our interviews with court officials, none of the six selected states appear to consistently track the number of cases related to elder abuse by guardians. Court officials from the six states we spoke with described the varied, albeit limited, information they have related to elder abuse by guardians and noted the various data limitations that prevented them from providing reliable figures on the extent of elder abuse by a guardian. California. A court official in California stated that while the Judicial Council of California collects information about requests for restraining orders to prevent elder abuse, it does not separately identify those cases alleging elder abuse by a guardian. The council also collects the number of new guardianships filed each year statewide. 
The official stated the number of new adult guardianships is partially estimated because about half of the courts in the state report a combined number of guardianships for minors and adults. Florida. A court official in Florida acknowledged that the court does not collect guardianship and elder abuse information such as the number of guardians for older adults, the types of guardians currently serving in guardianship roles for older adults, and the number of elder abuse hearings conducted. This official cited lack of funding as a barrier to collecting this type of information. Detailed information on financial exploitation specifically may be available at the county level. For example, officials from one county in Florida told us that the county collects data on the number of guardianships and the assets guardians control, and has also identified the amount of fraud over a 4-year period. Minnesota. A court official in Minnesota told us that the state differentiates between guardianship of the person and conservatorship of the estate. The state collects figures on the (1) number of guardianship cases, (2) number of conservatorship-only cases, and (3) number of combined guardianship and conservatorship cases, and can break these figures out by minors and adults. The state also has a statewide program housed in the court system—the Conservator Account Auditing Program—that audits the financial reports that guardians of the estate (or conservators) are required to submit electronically through a system called MyMNConservator. This system can calculate the total assets under court jurisdiction in Minnesota, which are presented in an annual report. According to the annual report, the program audits accounts with assets over a certain threshold at regular intervals and upon referral by the court.
However, one of these officials told us that this system does not track the age of the individuals with guardians of the estate, so the number of older adults in this arrangement is not identifiable. Ohio. An official from the Supreme Court of Ohio told us probate courts in the state report to the Supreme Court quarterly aggregate caseload data, including the number of pending guardian applications for adults, the number of new applications for the appointment of guardians, and the number of guardianships closed, but the data are not classified by the age of the person under guardianship. Additionally, although local courts may do so, the Supreme Court of Ohio does not capture the number of complaints related to guardianships. Court officials directed us to state Adult Protective Services (APS) elder abuse complaint data. Texas. Court officials in Texas told us that every county is required to submit monthly information to the Office of Court Administration pertaining to active guardianships. However, officials told us that some counties do not report any active guardianships (considered to be underreporting), and some counties overreport active guardianships that should actually have been closed, such as when the person under guardianship is deceased. Washington. A court official in Washington stated that while she could provide the number of adult guardianships statewide, she could not provide this information specifically for older adults. Further, the state's Certified Professional Guardian Board publishes the number of grievances against professional guardians each year in its annual Grievance Report, but does not identify which were for older adults. This official stated that while the court has case information on abuse by professional guardians, it does not track information on abuse by family guardians.
Representatives from nongovernmental organizations we spoke with also told us that the way cases are classified in the court system makes collecting data on elder abuse by guardians difficult. For example, representatives from the Center for Elders and the Courts told us that few cases appear to be clearly labeled with phrases such as “elder abuse” in the court system, making it difficult to identify the universe of these cases. These representatives explained that cases of elder abuse may appear as other charges, such as assault, battery, or theft. Identifying all cases involving elder abuse, and more specifically that by a guardian, would require a difficult manual review of a large volume of court cases. Further, stakeholders we spoke to noted that instances of elder abuse by guardians can be difficult to prosecute, reducing the number of known cases in the legal system and presenting an additional challenge to identifying the extent of elder abuse by guardians. Collecting reliable information about court practices related to guardianship can also be challenging. At the request of SSA, the Administrative Conference of the United States (ACUS) administered and analyzed the results of a survey of judges, court staff, and guardians to review guardianship practices in state courts in 2014. The survey collected information regarding appointment, monitoring, and discipline of guardians; caseloads and electronic case-management capabilities; and court interaction with federal agencies and other organizations. However, in administering this survey, ACUS was unable to identify a sample of courts that were representative of the guardianship practices in all states as no comprehensive list identifying courts or judges that have oversight of adult guardianship cases exists, which makes it impossible to generalize the findings to a known universe. 
In the absence of reliable data, information on individual cases can provide some insight into the types of abuse guardians have been found to inflict on older adults under guardianship. In a 2010 report, we identified hundreds of allegations of abuse, neglect, and exploitation by guardians in 45 states and the District of Columbia between 1990 and 2010. At that time, we reviewed 20 of these cases and found that guardians had stolen or otherwise improperly obtained $5.4 million from 158 incapacitated victims, many of whom were older adults. Table 2 provides a summary of eight new cases in which guardians were found to have financially exploited or neglected older adults under guardianship in the last 5 years. Seven of these cases were identified using public-record searches, while the eighth was referred to us during one of our interviews. We examined court records, police reports, or other relevant documents to corroborate key information about each case. The illustrative examples of selected closed cases of elder abuse by a guardian we identified are nongeneralizable and cannot be used to make inferences about the overall population of guardians. Stakeholders we spoke to described their observations about elder abuse by a guardian. According to stakeholders, financial exploitation is among the more common types of elder abuse. Similarly, all eight of the closed cases of elder abuse by a guardian we found, presented above in table 2, were examples of financial exploitation. A prosecutor in one of the states we spoke to shared her observation that the majority of financial exploitation by professional guardians is done through overcharging for services that were either not necessary or were never performed. One representative commented that greed was a driving factor for guardians to financially exploit persons under guardianship. 
Some stakeholders we spoke to also expressed concerns that guardians may become overwhelmed by their guardianship responsibilities, or may not have the proper training and education to understand and perform their guardianship duties. Federal, State, and Local Entities Have Some Efforts Under Way to Collect More Information on Elder Abuse by Guardians Federal, state, and local entities have some efforts under way to try to collect better data on elder abuse and guardianship to support decision making and help prevent and address elder abuse by guardians. While state courts are responsible for overseeing the guardianship process— appointment and screening, education, monitoring, and enforcement— HHS has also taken steps to collect better data on guardianship and elder abuse. In 2011, we found that existing studies likely underestimated the full extent of elder abuse and could not be used to track trends. At that time, we recommended that HHS coordinate with the Attorney General to conduct a pilot study to collect, compile, and disseminate data on the feasibility and cost of collecting uniform, reliable APS administrative data on elder abuse cases from each state, and compile and disseminate those data nationwide. HHS agreed with our recommendation. In 2013, HHS’s Administration on Aging began developing the National Adult Maltreatment Reporting System (NAMRS)—a national reporting system based on standardized data submitted by state APS agency information systems. The goal of the system is to provide consistent, accurate national data on the exploitation and abuse of older adults and adults with disabilities as reported to state APS agencies. According to HHS officials and the contractor developing NAMRS, this system will have the capability to collect information that could help identify cases of elder abuse where a guardian was involved. 
For example, NAMRS can collect information about substitute decision makers, including guardians, associated with the complaint such as whether there was a substitute decision maker at the start and end of the investigation, whether the perpetrator was the older adult’s substitute decision maker, and what recommendations or actions the state APS agency initiated against the perpetrator. An official from the Administration on Aging stated that the pilot phase of the system is complete and the agency hopes to roll it out for data submissions from all states by early 2017. Representatives from the National Adult Protective Services Association stated that NAMRS would provide important information that could inform the guardianship process once fully implemented. For example, a court official from Florida suggested that having more information on elder abuse by a guardian may help guardianship programs decide whether to place more focus on screening, education, and monitoring of guardians, and enforcement of policies and laws governing guardians, as described later in this report. In addition to this federal effort, some state and local efforts are also under way to collect better data on elder abuse and guardianship. However, some of the stakeholders we spoke to acknowledged that these efforts face funding challenges and require ongoing support. Compiling data points. Officials in one county in Florida described an ongoing project they have to extract key data points from guardianship cases, such as the reason for alleged incapacity, asset values, and time spent with a guardian, to share with other state guardianship programs. 
These officials expect that the data points will be used to assess the guardianship system in this county, and suggested that courts could use critical data points on guardianship such as the average time in guardianship, average burn rate of assets, or typical fees charged in order to make appropriate data-driven decisions on how to better address cases of potential elder abuse by a guardian. A court official in Florida told us that in the fall of 2016, the Chief Justice of Florida will appoint a workgroup under the state's Judicial Management Council to examine judicial procedures and best practices related to guardianship to help ensure that courts are protecting these individuals. Similarly, in Texas, the Office of Court Administration started the Guardianship Compliance Pilot Project, which provides additional resources to courts handling guardianships by supplementing local staff to review compliance with statutory requirements and by developing an electronic database to monitor guardianship filings of initial inventories and annual accountings. Information collected includes the number of courts involved in the project, the number of guardianships reviewed, the number of guardianships out of compliance with required reporting, the number of guardians reported to the court over concerns about the well-being or financial exploitation of persons under guardianship, and the status of technology developed to monitor guardianship filings. Collecting complaint data. In Washington, the state's Certified Professional Guardianship Board collects complaint and grievance information about professional guardians. In its annual report, the state publishes the number of cases opened, closed, investigated, and in need of investigation. The state also discloses the number of sanctions imposed on professional guardians, which can include decertification, suspension, reprimand, prohibition from taking new cases, and admonishment.
Ohio's Disciplinary Counsel also reported the number of grievances filed regarding guardianships in 2015 and through September 2016. A court official from the Judicial Council of California told us his state tracks the number of requests for restraining orders under California's Elder Abuse and Dependent Adult Civil Protection Act, which can include those against guardians. Identifying red flags. Representatives from the National Center for State Courts (NCSC) are using data collected from Minnesota's Conservator Account Auditing Program to identify "red flags," or risk indicators, such as unusually high guardian fees or excessive vehicle or dining expenses, that would help courts detect cases that need additional review or monitoring. Representatives from the NCSC told us they are hopeful that these efforts will help courts move forward in preventing and responding to abuses. Federal Agencies Provide Funding to Support Coordination and Sharing Information, While State and Local Entities Oversee the Guardianship Process to Help Protect Older Adults with Guardians from Abuse Federal Agencies' Measures to Help Protect Older Adults with Guardians Include Providing Funding to Support Coordination and Sharing Information While the federal government does not regulate or directly support guardianship, federal agencies, such as HHS, may provide indirect support to state guardianship programs by funding efforts to share best practices and facilitate improved coordination. The federal government also shares guardianship-related information that state and local entities can use. Providing Funding to Support Coordination HHS has assumed a national role in funding grants to support coordination and information sharing that could help educate guardians and other parties. HHS has funded grants through the National Legal Resource Center to share best practices related to guardianship with states, attorneys, and other interested parties.
The grant activities cover a wide range of guardianship issues related to court oversight and monitoring and illustrate the ongoing commitment to developing nationwide "Best Practice" resources on this issue. For example, grant activities have included providing technical assistance and policy guidance to states on guardianship issues, improving oversight and monitoring, developing standards of practice for guardians, training attorneys practicing in the area of guardianship law, and developing solutions for interstate jurisdictional issues involving guardianship cases. HHS launched the Elder Justice Innovation Grants program in fiscal year 2016. The purpose of the program is to support foundational work to create credible benchmarks for elder abuse, neglect, and exploitation prevention and control, and for program development and evaluation. HHS expects projects funded by these grants will contribute to the improvement of the field of elder abuse prevention and intervention by developing and advancing approaches to address new and emerging issues related to elder justice, or by establishing and contributing to the evidence base of knowledge. In 2016, HHS identified abuse in guardianship as one of the targeted priority areas for this program, and, according to agency officials, awarded three grants in this target area—each grant is funded at approximately $1,000,000 over 2 years, September 2016 through September 2018. At the completion of these grants, HHS expects grantees will have developed materials and information for further replication and testing. HHS also funds the National Center on Elder Abuse, which collects information regarding research, training, best practices, news, and resources on elder abuse, and provides this information to policymakers, professionals in the elder justice field, and the public. In addition, the State Justice Institute has provided grants to various entities to improve coordination and develop and share best practices.
With help from funding provided by the State Justice Institute and others, states have developed Working Interdisciplinary Networks of Guardianship Stakeholders (WINGS) programs to facilitate enhanced coordination. WINGS programs bring together judges and court staff, the aging and disability networks, the public and private bar, mental health agencies, advocacy groups, medical and mental health professionals, service providers, family members and individuals affected by guardianship, and others to drive changes affecting the ways courts and guardians practice and to improve the lives of older adults (and others) with guardians. National Guardianship Association representatives told us that WINGS groups look at the broader picture of what is happening to address guardianship-related issues across the country and are not just focused on abuse and neglect. WINGS programs can make recommendations to state supreme courts and state legislatures based on their observations. American Bar Association representatives told us one of the keys to the success of a WINGS program is ongoing communication. The programs are not designed to be onetime conversations or a task force, but instead represent an ongoing communication mechanism to ensure optimal coordination. During our interviews, feedback on WINGS programs was consistently positive, and the WINGS group we spoke with emphatically encouraged other states to develop their own WINGS-like programs and expressed interest in continued funding support for its program. In addition, one of the goals of grants awarded through the Elder Justice Innovation Grants program is to establish, expand, and enhance state WINGS programs to improve the ability of state and local guardianship systems to develop protections less restrictive than guardianship and advance guardianship reforms. As of September 2016, at least 14 states and the District of Columbia have adopted WINGS programs or something that resembles them.
Sharing Information CFPB has developed materials that can be used by guardians, banks, and others to help better protect older adults with guardians from abuse. CFPB has published numerous educational materials to help protect older adults from financial abuse and exploitation. These include guides for fiduciaries that lay out the rules and responsibilities for appropriately handling the finances of another person. CFPB has also developed guidance for financial institutions. For example, in 2013, CFPB and seven other federal agencies issued guidance on privacy laws and reporting information on financial exploitation. This guidance is intended to make it clear that reporting suspected financial abuse of older adults to appropriate local, state, or federal agencies does not, in general, violate the privacy provisions of the Gramm-Leach-Bliley Act or its implementing regulations. CFPB officials stated that they hoped the 2013 Interagency Guidance will help financial institutions better understand their ability to report suspected financial exploitation to relevant federal, state, and local agencies. Additionally, in 2016, CFPB released an advisory and related recommendations for financial institutions on preventing and responding to elder financial exploitation. State and Local Measures Can Include Screening, Education, Monitoring, and Enforcement State and local courts have primary responsibility over the guardianship process and, hence, have a role in protecting older adults with guardians from abuse. In 2014, the National Association for Court Management published an adult guardianship guide with detailed information about how to plan, develop, and sustain a court guardianship program. This report laid out detailed suggestions for practices to effectively establish guardianships, monitor guardians, and train relevant stakeholders. 
Guardianship laws can also vary by state, but organizations such as the Uniform Law Commission—an organization that drafts legislation for states intended to bring clarity and stability to state statutory law—have developed model legislation to promote uniform procedures for appointing guardians and conservators, strengthen due process protections for individuals in guardianship proceedings, and address jurisdictional conflicts. On the basis of our review of published materials and interviews with various state courts and nongovernmental stakeholders, we observed that measures states can take to help protect older adults with guardians vary but generally include screening, education, monitoring, and enforcement as shown in figure 1. According to multiple stakeholders we spoke with, an important step of the guardianship process is for a court to ensure that only those in need are appointed a guardian. Once the need for a guardian has been identified, state courts generally are responsible for screening proposed guardians to help ensure suitable individuals are appointed. On the basis of our review of published materials and interviews with various state courts and nongovernmental stakeholders, we observed the following promising practices and challenges related to screening. Least-restrictive option. Because of the rights an older adult loses when placed into a guardianship, courts must determine whether a guardian is appropriate. One representative from a state WINGS program that we spoke with expressed concern that guardianship may not be appropriate for some persons under guardianship, especially when the appointment is made for the convenience of others. To address this concern, this representative told us that courts in her state have modified court guardianship forms to encourage the use of less-restrictive alternatives to guardianship, such as a caregiver. Periodically reexamine guardianship.
Some courts periodically reexamine a guardianship to ensure that it is working for the person under guardianship and remains appropriate, since it can be difficult for an older adult with a guardian to demonstrate that his or her capacity has been restored. Criminal history and credit checks. These types of checks provide an easy and relatively inexpensive way to ensure that potential guardians do not have a criminal history or financial concerns. However, one of the stakeholders we spoke with described some limitations regarding background checks. For example, criminal background check systems may not present a complete picture for various reasons, including that many elder abuse cases are not prosecuted. Even when prospective guardians have been prosecuted, a number of factors determine whether the criminal history appears in the background check. For example, a background check may not always identify a criminal history in another state. Education Stakeholders we spoke with agreed that education plays an important role in helping ensure that guardians understand their roles and responsibilities and appropriately perform their duties. On the basis of our review of published materials and interviews with various state courts and nongovernmental stakeholders, we observed the following promising practices and challenges related to education. Educational requirements. Education allows guardians to better understand their roles and responsibilities. For example, a court rule requires professional guardians in Washington to complete a training program developed by the state's Certified Professional Guardian Board, while a statute generally requires family guardians to complete video or web-based training. According to state officials, the professional guardian training consists of a 90-hour course offered by the University of Washington, while family guardians usually complete a 2-hour training module.
Florida statutes also generally require family guardians to undergo course work on guardian responsibilities, while applying more rigorous requirements for professional guardians. These types of training requirements may help to address unintentional and nonmalicious mistreatment such as commingling assets of the guardian and the person under guardianship. Officials at the National Guardianship Association told us that education about how to be an effective guardian is very important: guardians may make bad decisions not out of intentional abuse but because they lack training or education about their role. However, educational requirements for guardians are not in place in many states. Standards of practice and certification. The National Guardianship Association has developed standards of practice that define a guardian's duty to comply with laws and regulations; the guardian's relationship with the courts, protected persons, and others; and other duties to the person under guardianship. Also, the Center for Guardianship Certification has developed a certification program that tests a prospective certified guardian's ability to apply these standards of practice. Under this certification program, certified guardians must meet continuing educational requirements to maintain their status as professional guardians. According to the Center for Guardianship Certification, 12 states require professional guardians to be certified, including 8 states that require certification via the use of Center for Guardianship Certification examinations, as of September 2016. Educational materials. Courts in all six of the selected states we spoke to post written guidance for guardians online. These guides explain the responsibilities and duties associated with becoming a guardian while providing other potentially useful information.
For example, a guide from California discusses the importance of separating the guardian's funds from those of the person under guardianship by warning guardians that mixing their money with that of the person under guardianship could get the guardian in serious trouble. Minnesota has also made online videos that explain the guardianship process as well as guardian roles and responsibilities. In conjunction with the NCSC, North Dakota developed a web-based information seminar that guardians can use to better understand their responsibilities. The training is scenario-based and helps the trainee understand his or her options, and was designed to be easily modified for replication in other states. One official noted that it can be difficult to reach family guardians to provide them with educational materials. Also, even when family guardians can be reached, one stakeholder suggested that a 30-minute training video is unlikely to radically enhance guardian performance when a guardian is faced with some of the more complicated scenarios. Support for guardians. One of the stakeholders we spoke with suggested that guardians and persons under guardianship would benefit from other initiatives, such as states providing guardians with a mechanism to ask questions and allowing guardians to receive positive feedback when something went well instead of just warnings when something went wrong. Another stakeholder told us it would be beneficial for guardians to interact with one another to find ways to achieve better outcomes. Monitoring According to some of the stakeholders we spoke with, most states require guardians to be monitored, but the level of oversight and specific requirements vary by state. On the basis of our review of published materials and interviews with various state courts and nongovernmental stakeholders, we observed the following promising practices and challenges related to monitoring. In-person visits and well-being checks.
To monitor the person under guardianship's personal well-being, one stakeholder told us courts in every state should periodically send a court investigator to conduct an unannounced site visit to check on that individual. Examinations of guardian expenditures. A state court official we spoke with cautioned that, without effective monitoring, guardians basically have free access to the person under guardianship's money, and other officials we interviewed outlined some specific related measures. For example, an official from one organization suggested that steps should be taken to help ensure that fees are appropriate for the services rendered (e.g., attorneys should not charge attorney rates for grocery shopping), while a representative of a different organization suggested that fees should be capped to help protect persons under guardianship. Other related suggestions from various stakeholders included independent reviews of mandatory annual financial reports, an initial inventory of the person under guardianship's assets, and the use of effective accounting controls to help protect that individual's assets. Technology can be used to support the oversight process. For example, as previously described, Minnesota monitors the state's conservators using an online program that allows auditors to flag suspicious spending patterns and other warning signs for potential abuse. Despite the known importance of monitoring efforts, stakeholders described how challenges in monitoring guardians often arise from resource limitations. According to one of the stakeholders we interviewed, courts often do not have the resources to employ court visitors, investigators, auditors, or robust case-management systems for tracking key filings and case events.
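The kind of automated expense screening described above, in which an auditing system flags filings whose spending in a category exceeds a ceiling, can be illustrated with a minimal sketch. This is purely hypothetical code, not the actual logic of MyMNConservator or the NCSC's analysis; the expense categories and dollar ceilings are invented for illustration.

```python
# Illustrative sketch of a red-flag expense screen for conservator account
# filings. All category names and per-period dollar ceilings are hypothetical.

THRESHOLDS = {
    "guardian_fees": 5000.00,
    "dining": 1500.00,
    "vehicle": 2000.00,
}

def flag_expenses(expenses):
    """Return (category, amount, ceiling) for each expense over its ceiling."""
    flags = []
    for category, amount in expenses:
        ceiling = THRESHOLDS.get(category)
        if ceiling is not None and amount > ceiling:
            flags.append((category, amount, ceiling))
    return flags

# A hypothetical annual filing: two entries exceed their ceilings.
filing = [
    ("guardian_fees", 12000.00),  # unusually high fees -> flagged
    ("dining", 800.00),           # within the ceiling  -> passes
    ("vehicle", 3500.00),         # excessive vehicle spending -> flagged
]
print(flag_expenses(filing))
# [('guardian_fees', 12000.0, 5000.0), ('vehicle', 3500.0, 2000.0)]
```

A real auditing program would calibrate such ceilings against court fee schedules and historical filings, and would treat a flag as a prompt for manual review rather than as a finding of abuse.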
Another stakeholder told us that guardians are supposed to submit annual reports about persons under guardianship, and in many states and counties these reports are filed, but no one checks whether the reports have been filed on time or verifies that what is reported is accurate. In addition, other monitoring efforts can be limited. For example, a court official in Washington told us some reviews are paper audits in which no one conducts a site visit to the person under guardianship to verify his or her well-being. Representatives from the National Guardianship Association told us that while guardianships have some oversight, there is significant variation in the level of oversight performed by different states. The investment in monitoring the activity of guardians is up to local counties and constrained by resources. One of the recurring themes these representatives find when they examine guardianship issues is that states would like to apply more robust oversight, but the states say that there are not enough resources available to investigate and oversee these cases. To help overcome resource limitations, the American Bar Association and AARP have developed programs courts can use to recruit and train volunteers to help monitor guardian activities. While there are some costs associated with these programs, according to stakeholders, they can reduce the burden on courts for monitoring guardian activities. Enforcement Enforcement activities punish the guardian for his or her abusive actions against a person under guardianship, deter future abuse by sending the message that the abuse of older adults by guardians will not be tolerated, and at times may allow for restitution to the victim. On the basis of our review of published materials and interviews with various state courts and nongovernmental stakeholders, we observed the following promising practices and challenges related to enforcement. Complaint systems.
In addition to providing educational benefits to guardians, certification systems can provide states with a mechanism for receiving complaints and addressing noncriminal guardian performance issues (e.g., not submitting required accountings), while offering other potential certification-related benefits such as screening opportunities and continuing education requirements. In states that certify guardians, complaints may also be directed to the guardianship certification board. State-operated hotlines can also help identify cases of abuse. For example, the Palm Beach County Clerk’s Inspector General set up a hotline that allows the public to report concerns about guardians via telephone, e-mail, the Internet, or in person. From fiscal year 2011 through February 2016, the Palm Beach County Clerk’s Inspector General reported 516 contacts, 250 of which were actionable. However, multiple stakeholders also identified some challenges related to complaints. For example, some of the representatives we spoke with stated that it may be difficult or impossible for people with diminished capacity to file a complaint about a guardian, so complaints typically originate from family members. Also, one of the stakeholders we interviewed told us it is not always clear where complaints about guardians should be sent, but that anyone with an elder-abuse-related concern could contact law enforcement agencies or the state APS agency. In addition, this stakeholder told us that courts may have complaint processes, but it can be difficult to navigate these processes without effective counsel. Dedicated investigative resources. Palm Beach County, Florida, dedicated resources to independently audit guardian spending reports and to investigate and monitor guardianship-related activities, which has had a positive effect, according to officials there. 
A prosecutor we spoke with in San Diego discussed similar efforts in his jurisdiction, but noted that law enforcement entities in most cities do not have departments dedicated to investigating elder abuse. Appropriate disciplinary measures. Guardianship enforcement activities can range from removing guardians for poor performance to prosecution for overt criminal actions. States that apply such measures appropriately can punish bad actors, obtain restitution for victims, and deter future abuse. However, there can be investigative and prosecutorial challenges associated with cases of elder abuse by a guardian. Stakeholders we spoke to highlighted obstacles that can hinder efforts to punish abusive guardians. For example, a prosecutor in Washington noted that when abuse by guardians takes the form of overcharging an older adult for the guardian’s services, because the courts have approved the payments in question it is virtually impossible for the prosecutor’s office to file charges. This prosecutor explained that a guardian charged with financial exploitation in such a case would be able to argue that the fees he or she obtained were appropriate because they were sanctioned by the courts; this would almost certainly prevent such a guardian from being found guilty at trial. Also, a prosecutor in California opined that law enforcement officials generally feel that when someone is in a position of trust, law enforcement officials cannot and should not get involved. Specifically, they feel it is a civil matter that should be handled in the civil jurisdiction. Other representatives we spoke with raised concerns about the cost of investigating cases of potential abuse. For example, representatives from the National Guardianship Association noted the forensic analysis to identify evidence in these cases can cost $20,000 or more for just one case. Other challenges relate to the penalties associated with these crimes. 
For example, an official in Washington has noted the sentences tend to be insignificant and jail time can often be avoided. This official also noted that prosecutors will rarely proceed with cases that do not exceed certain dollar thresholds. Agency Comments We are not making recommendations in this report. We provided a draft of this report to HHS, CFPB, the Department of Justice, SSA, the Department of Veterans Affairs, and the Office of Personnel Management for review and comment. CFPB and SSA provided technical comments, which we incorporated as appropriate. HHS, the Department of Justice, the Department of Veterans Affairs, and the Office of Personnel Management had no comments on this report. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to relevant congressional committees; the Commissioner of the Social Security Administration; the Secretary of Veterans Affairs; the Secretary of Health and Human Services; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6722 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Coordination between Federal Representative Payee Programs and State Guardianship Programs The Social Security Administration (SSA), the Department of Veterans Affairs, and the Office of Personnel Management have programs that appoint representative payees to manage federal benefits received by individuals who are unable to do so for themselves. 
Federal agencies are responsible for oversight of representative payees assigned under these programs, while state and local courts are responsible for oversight of guardianship appointments. A representative payee may also be a guardian, and some beneficiaries with a representative payee may also have a guardian. According to a white paper prepared for the Elder Justice Coordinating Council, the representative payee and the guardian might or might not be the same person or organization. Table 3 shows the number of beneficiaries who are older adults and have representative payees, as well as the number of representative payees and court-appointed guardians or conservators that the respective federal agency is aware of. We have previously found that, among other things, poor communication between the courts and federal agencies has enabled guardians to chronically abuse persons under guardianship and others. In 2011, we found that information sharing among federal fiduciary programs and state courts could improve the protection of older adults with guardians. More specifically, we found that information about SSA’s incapable beneficiaries and their representative payees could help state courts (1) avoid appointing individuals who, while serving as SSA representative payees, have misused beneficiaries’ SSA payments in the past, and (2) provide courts with potential candidates for guardians when there are no others available. At that time, we recommended that SSA should determine how it can, under current law, disclose certain information about beneficiaries and fiduciaries to state courts upon request, potentially proposing legislative changes to allow such disclosure. Upon review of our recommendation, SSA determined it could not disclose information about SSA beneficiaries and representative payees to state courts for the purposes of determining guardianship without written consent because legal limitations prevent the sharing of this information. 
While we continue to believe that it is in the best interest of incapable SSA beneficiaries for the agency to disclose certain information about beneficiaries and fiduciaries to state courts, SSA officials with whom we spoke in 2016 maintain that the agency cannot disclose information regarding SSA beneficiaries and representative payees to courts for the purposes of determining guardianship issues without written consent, unless a Privacy Act exception applies. SSA officials also told us they were not aware of any routine exchanges of information between state courts and their agency; however, SSA does share limited information about representative payees with other federal agencies when legally authorized to do so. Officials from state courts we spoke to also reiterated the need for increased coordination and communication with federal representative payee programs. For example, a court official in Washington explained that it is important for courts to know when there is an issue with a representative payee who is trying to become a guardian, and it is also important for SSA to know when there is a problem guardian. Also, court officials in Ohio described another challenge related to their monitoring efforts that occurs when they are unaware of significant increases in the assets of the person under guardianship, caused by the receipt of sizable back payments paid by SSA. As described in this report, the Administrative Conference of the United States administered and analyzed the results of a survey of judges, court staff, and guardians to review, among other things, court interaction with federal agencies. In August 2016, SSA officials told us the agency was using the study to make improvements that will leverage the work of state courts in SSA’s process for determining whether a representative payee is necessary. 
For example, SSA is exploring whether the agency could automatically appoint guardians—or individuals who are currently serving in a similar capacity—as representative payees. Additionally, SSA officials told us they are using the results to identify better ways to communicate with state and local courts and the guardians appointed by these entities. These efforts include providing clarification to agency technicians on permitted disclosures to state and local courts and legal guardians. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Gabrielle Fagan (Assistant Director), John Ahern, Nada Raoof, and April Van Cleef made key contributions to this report. Also contributing to the report were Lorraine Ettaro, Colin Fallon, Maria McMullen, and James Murphy. Related GAO Products Elder Justice: More Federal Coordination and Public Awareness Needed. GAO-13-498. Washington, D.C.: July 10, 2013. Elder Justice: National Strategy Needed to Effectively Combat Elder Financial Exploitation. GAO-13-110. Washington, D.C.: November 15, 2012. Incapacitated Adults: Oversight of Federal Fiduciaries and Court-Appointed Guardians Needs Improvement. GAO-11-678. Washington, D.C.: July 22, 2011. Elder Justice: Stronger Federal Leadership Could Enhance National Response to Elder Abuse. GAO-11-208. Washington, D.C.: March 2, 2011. Guardianships: Cases of Financial Exploitation, Neglect, and Abuse of Seniors. GAO-10-1046. Washington, D.C.: September 30, 2010.
The number of older adults, those over age 65, is expected to nearly double in the United States by 2050. When an older adult becomes incapable of making informed decisions, a guardianship may be necessary. Generally, guardianships are legal relationships created when a state court grants one person or entity the authority and responsibility to make decisions in the best interest of an incapacitated individual—which can include an older adult—concerning his or her person or property. While many guardians act in the best interest of persons under guardianship, some have been reported to engage in the abuse of older adults. GAO was asked to review whether abusive practices by guardians are widespread. This report describes (1) what is known about the extent of elder abuse by guardians; and (2) what measures federal agencies and selected state and local guardianship programs have taken to help protect older adults with guardians. GAO reviewed relevant research, reports, studies, and other publications issued by organizations with expertise on elder abuse and guardianship issues. GAO also conducted interviews with various guardianship stakeholders including federal agencies such as HHS, six selected state courts, and nongovernmental organizations with expertise in guardianship-related issues. In addition, GAO identified eight closed cases of abuse by guardians in which there was a criminal conviction or finding of civil or administrative liability to use as nongeneralizable illustrative examples. GAO makes no recommendations in this report. The extent of elder abuse by guardians nationally is unknown due to limited data on key factors related to elder abuse by a guardian, such as the numbers of guardians serving older adults, older adults in guardianships, and cases of elder abuse by a guardian. 
Court officials from six selected states GAO spoke to noted various data limitations that prevent them from being able to provide reliable figures about elder abuse by guardians, including incomplete information about the ages of individuals with guardians. Officials from selected courts and representatives from organizations GAO spoke to described their observations about elder abuse by a guardian, including that one of the most common types appeared to be financial exploitation. Some efforts are under way to try to collect better data on elder abuse and guardianship at the federal, state, and local levels to support decision making and help prevent and address elder abuse by guardians. For example, the Department of Health and Human Services (HHS) plans to launch the National Adult Maltreatment Reporting System—a national reporting system based on data from state Adult Protective Services (APS) agency information systems by early 2017. According to HHS and its contractor, this system has the capability to collect information that could specifically help identify cases of elder abuse where a guardian was involved. GAO also identified state and local initiatives to capture key data points and complaint data as well as identify “red flags” such as unusually high guardian fees or excessive vehicle or dining expenses. The federal government does not regulate or directly support guardianship, but federal agencies may provide indirect support to state guardianship programs by providing funding for efforts to share best practices and facilitate improved coordination, as well as by sharing information that state and local entities can use related to guardianship. State and local courts have primary responsibility over the guardianship process and, as such, have a role in protecting older adults with guardians from abuse, neglect, and exploitation. 
Measures taken by selected states to help protect older adults with guardians vary but generally include screening, education, monitoring, and enforcement.
Background VA operates one of the largest health care delivery systems in the nation, providing care to a diverse population of veterans. VA operates about 150 hospitals, 130 nursing homes, 950 outpatient clinics, and 230 readjustment counseling centers—Vet Centers—through 21 regional health care networks called Veterans Integrated Service Networks. VA is responsible for providing health care services to various populations—including an aging veteran population and a growing number of younger veterans returning from the military operations in Afghanistan and Iraq. VA is required by law to provide health care services to certain veterans and may provide care to other veterans. In general, veterans must enroll in VA health care to receive VA’s medical benefits package—a set of services that includes a full range of hospital and outpatient services, prescription drugs, and noninstitutional long-term care services provided in veterans’ own homes and in other locations in the community. VA also provides some services that are not part of its medical benefits package, such as nursing home care. The population of veterans to whom VA is required to provide nursing home care is more limited than the population to whom VA is required to provide other health care services. VA is required by law to provide nursing home care to certain veterans needing such care who also have service-connected disabilities, and VA also makes nursing home care available to other veterans on a discretionary basis as resources permit. VA’s enrollment system includes eight categories for enrollment established by law to manage access to services in relation to available resources. The order of priority for the categories is generally based on service-connected disability, income, or other special status such as having been a prisoner of war. 
If sufficient resources are not available to provide care that is timely and acceptable in quality, VA must restrict enrollment consistent with its priority categories. VA also provides enhanced priority status for veterans with combat experience—including those who participated in Operation Enduring Freedom in Afghanistan, Operation Iraqi Freedom, and Operation New Dawn in Iraq—for up to 5 years from their date of discharge or release from active-duty service. Veterans who enroll in VA health care may choose not to access VA’s health care services in any given year. This is in part because many veterans have other options, such as Medicare, Medicaid, or private health insurance, to access and pay for health care services. Enrollees choose whether to access services through VA or other providers based on factors such as their proximity to VA providers. Additionally, downturns in economic conditions may reduce veterans’ access to sources of private insurance, such as employer-sponsored insurance, and influence enrollees’ choice to access VA health care. Estimating the resources required to provide health care services to veterans and developing VA’s budget request is a collaborative process involving several offices within VA, mainly at VA headquarters. Within VA, VHA’s Office of Finance is responsible for policy and operational issues relating to budget formulation for all VHA services. It works with the Office of the Assistant Deputy Under Secretary for Health for Policy and Planning, which has responsibilities for managing knowledge and data related to VHA’s policies and strategic planning. Program offices, which are responsible for setting policies for providing specific health care services, provide information to support the budget formulation process. 
VA’s Office of Budget is responsible for overseeing the budget formulation process for the department as a whole on behalf of the Secretary and submitting VA’s budget request for OMB’s review and consideration in developing the President’s Budget. OMB plays a key role in the budget formulation process by providing the framework for agencies to follow. OMB annually issues Circular No. A-11, which contains detailed instructions and schedules for the submission of agencies’ budget estimates. It also includes other material to ensure that agency budget requests adhere to standardized conventions and formats. OMB also provides general guidance to federal agencies via bulletins and memoranda that include, among other things, the President’s priorities to consider as agencies prepare their budget submissions. Additional communications between OMB and an agency can occur anytime during the year. VA, like other agencies, begins formulating a budget request approximately 10 months before the President submits the budget to Congress in early February. This is approximately 18 months before the start of the fiscal year to which the request relates and about 30 months prior to the start of the fiscal year to which the advance appropriations request relates. The formulation of VA’s budget request is a process that follows the general schedule in table 1. VA Develops Most of Its Health Care Budget Estimate Using a Projection Model and Uses Other Methods for the Remaining Portion VA uses a projection model to develop estimates of the resources needed to deliver most of the health care services VA provides. These services accounted for most of VA’s health care budget estimate for fiscal year 2011. VA uses other methods to develop nearly all of the remaining portions of its health care budget estimate for long-term care and other services as well as proposed initiatives. 
VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Meet Expected Demand VA uses what is known as the Enrollee Health Care Projection Model (EHCPM)—a model developed in partnership with VA’s actuarial consultant—to estimate the amount of resources VA will need to meet the expected demand for most of the health care services VA provides. These services accounted for 83 percent of VA’s health care budget estimate for fiscal year 2011. VA used the EHCPM to estimate the resources needed for fiscal year 2011 for 61 health care services, which VA grouped into seven service types (see app. I for a list of the 61 health care services that were grouped into the seven service types). Outpatient services accounted for almost half of the resources VA estimated using the EHCPM for fiscal year 2011. (See fig. 1.) The EHCPM is used to estimate needed resources based on the total cost of providing each health care service. VA officials said this total cost reflects direct patient costs as well as costs associated with management, administration, and maintenance of facilities. The EHCPM’s estimates are based on three basic components: the projected number of veterans who will be enrolled in VA health care, the projected utilization of VA’s health care services—that is, the quantity of health care services enrollees are expected to use—and the projected unit cost of providing these services. (See fig. 2.) Each component is subject to a number of complex adjustments to account for the characteristics of VA health care and the veterans who access VA’s health care services. The EHCPM makes these projections 3 or 4 years into the future for budget purposes based on data from the most recent fiscal year. For example, in 2009, VA used data from fiscal year 2008 to develop its health care budget estimate for the fiscal year 2011 request, including the advance appropriations request for fiscal year 2012. 
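The basic arithmetic behind the three components can be sketched as follows. The function and the enrollment, utilization, and unit-cost figures are hypothetical illustrations of the product, not the EHCPM's actual calculations, which apply many further adjustments to each component.

```python
def project_service_resources(projected_enrollees: int,
                              utilization_per_enrollee: float,
                              unit_cost: float) -> float:
    """Projected resources for one service:
    enrollment x utilization per enrollee x unit cost."""
    return projected_enrollees * utilization_per_enrollee * unit_cost

# Hypothetical example: 8.5 million enrollees averaging 2.0 outpatient
# visits per year at $150 per visit.
print(f"${project_service_resources(8_500_000, 2.0, 150.0):,.0f}")
# -> $2,550,000,000
```

In the actual model, each of the three inputs is itself a projection (by age, gender, priority level, and geographic location) rather than a single number, but the estimate for a service is still driven by this product.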
To project the expected number of veterans who will be enrolled in VA health care, the EHCPM relies on a combination of VA and other federal data to identify current enrollees and to estimate how many eligible, nonenrolled veterans will choose to enroll. The EHCPM uses VA data to identify the number of current enrollees in VA health care and to calculate historical enrollment rates by various characteristics, including age, gender, priority level, and geographic location. In addition, the EHCPM uses data developed by VA that combine federal census data on veterans and Department of Defense data on service members separated from active duty since the last decennial census to identify the number of eligible veterans not currently enrolled in VA health care. The data developed by VA also incorporate Department of Defense estimates of how many service members will separate from active duty each year into the future. The EHCPM uses these data to estimate how many eligible veterans will choose to enroll in VA health care by applying VA’s historical enrollment rates to this population. To project the utilization or the quantity of VA’s health care services veterans will use and the unit costs of VA’s health care services, VA groups these services into two major categories in the EHCPM: (1) those services that VA provides in a manner comparable to other providers, whose services are paid for by Medicare and private health insurers; and (2) those services that are unique to or are provided in a different manner by VA. For example, VA provides services, such as emergency room visits and physician office visits, in a manner comparable to other providers. In contrast, VA provides rehabilitation services to homeless veterans and certain types of prosthetic services that VA officials said are not generally offered by other providers. 
For health care services that VA provides in a manner comparable to other providers, the EHCPM uses utilization and unit-cost data developed by VA’s actuarial consultant that reflect data from Medicare and private health insurers in addition to data from VA’s own experience. VA used these data for 33 of the 61 health care services whose estimates were developed by the EHCPM for fiscal year 2011. (See app. I for the data sources used to generate utilization and unit-cost projections for each of the 61 health care services for fiscal year 2011.) Data from Medicare and private health insurers allow the EHCPM to better account for the impact of a number of factors—such as age, gender, geographic location, and benefit structure—that may affect utilization and unit-cost projections because these data represent more than 60 million individuals, compared with the 8.5 million veterans expected to be enrolled in VA health care in fiscal year 2011. Additionally, VA officials said that using data from Medicare and private health insurers allows the EHCPM to account for enrollees’ utilization of health care services outside of VA health care. For health care services that are unique to or are provided in a different manner by VA, the EHCPM uses utilization and unit-cost data from VA’s own experience. VA used these data for 28 of the 61 health care services whose estimates were developed by the EHCPM for fiscal year 2011. While VA data can be used to reasonably estimate the likely demand for these specific services, VA officials said these data do not allow the EHCPM to account for as many factors that may affect utilization and unit-cost projections. For example, VA uses national unit costs in most projections of services that are unique to VA health care because it lacks sufficient data to use in the EHCPM to account for geographic variations in the unit cost of these services. 
To project utilization and unit costs using the EHCPM, VA makes a number of complex adjustments to the utilization and unit-cost data to account for the characteristics of VA health care and enrolled veterans. For example, these adjustments take into account enrollees’ age, gender, priority level, and geographic location. VA also makes additional adjustments to account for changes expected to occur over time. For example, adjustments are made to utilization projections to account for changes in health care practice patterns, such as greater use of magnetic resonance imaging to diagnose a condition. Additionally, unit-cost projections are adjusted to account for the effect of inflation on the costs of labor and supplies. For services that VA provides in a manner comparable to other providers, VA also adjusts data from Medicare and private health insurers in the EHCPM to account for the extent to which enrolled veterans will choose to access health care services through VA—referred to as reliance on VA health care—or obtain these services through non-VA sources. VA uses Medicare data for enrolled veterans to estimate the proportion of each health care service that veterans age 65 and over access through Medicare instead of VA. However, VA does not have a comprehensive data source to estimate the proportion of each health care service that enrolled veterans under age 65 access through non-VA sources, such as private insurers. To estimate the proportion of each health care service that enrollees under 65 access through VA and non-VA sources, VA relies on an extrapolation from its analysis of Medicare data and from data collected through periodic telephone surveys of enrollees about their use of VA and non-VA health care services. Additionally, VA adjusts data from Medicare and private health insurers in the EHCPM to incorporate characteristics unique or more common to VA health care and its enrollee population. 
For example, VA does not require copayments for physician office visits for veterans who meet certain eligibility criteria, and VA’s enrollee population is predominantly male. Within VHA’s Office of the Assistant Deputy Under Secretary for Health for Policy and Planning, the Office of Enrollment and Forecasting has the lead responsibility for developing the estimates from the EHCPM and annually updates the assumptions that may affect utilization or unit-cost projections. VHA’s Office of Enrollment and Forecasting works closely with VHA’s Office of Finance, which is responsible for coordinating the process for developing VA’s health care budget estimate. The Office of Enrollment and Forecasting also consults with VA’s health care program offices, which provide input ranging from identifying VA’s policy goals for the health care services they administer to providing subject-matter expertise on trends expected to affect the delivery of these services. Input from the program offices is incorporated into the underlying assumptions used in the EHCPM. For example, VA officials told us that the Office of Enrollment and Forecasting collaborated with VA’s pharmacy program office to obtain information about what brand-name drugs are coming off patent, for which lower-cost, generic alternatives may be available. The Office of Enrollment and Forecasting incorporated this information into assumptions used in the EHCPM to project utilization and unit costs for VA’s pharmacy services. Also, VA officials told us that the Office of Mental Health Services provided information to the Office of Enrollment and Forecasting on VA’s increasing provision of mental health services in less restrictive treatment facilities and outpatient settings, and this information supported assumptions used in the EHCPM to project utilization of VA’s mental health services. 
VHA’s Office of Enrollment and Forecasting annually briefs VA leadership, including the VA Secretary and VHA Under Secretary, and OMB on updates to the EHCPM and assumptions used in the EHCPM to generate the estimates. According to VHA officials, the briefings are intended to provide VA and OMB a better understanding of the EHCPM and its assumptions and facilitate their review of VA’s health care budget estimate. VA Office of Budget officials said they review the assumptions used in the EHCPM by comparing them to data on past trends. For example, VA officials said that the VA Office of Budget, along with the program office for pharmacy services, was involved in reviewing assumptions regarding unit-cost projections for pharmacy services, taking into account VA’s heavy reliance on low-cost, generic drugs. VA Uses Other Methods to Develop Portions of Its Health Care Budget Estimate Related to Long- term Care and Other Services VA uses methods other than the EHCPM to develop estimates of the amount of resources needed for long-term care and other services. VHA’s Office of Finance coordinates the development of the estimates for these services, which accounted for 16 percent of VA’s health care budget estimate for fiscal year 2011. Long-term care was 13 percent of the overall budget estimate, and other services accounted for 3 percent. VA develops its estimates for long-term care by developing separate estimates for nursing home and noninstitutional care services. Noninstitutional care services include such services as home-based primary care and care coordination/home telehealth programs. VA’s estimates for nursing home and noninstitutional care are based on projections of the amount of care provided—which is known as workload—and the unit cost of providing a day of this care. 
VA multiplies the workload estimates, unit-cost estimates, and the number of days in the fiscal year to develop estimates of the amount of resources for both nursing home care and noninstitutional care. (See fig. 3.) VHA’s Office of Finance also incorporates input from VHA’s Geriatrics and Extended Care program office about workload and unit-cost estimates for long-term care services. For nursing home care, VA develops workload projections by estimating the amount of nursing home care in demand by two groups of veterans—high-priority veterans for whom VA is required by law to provide nursing home care and other veterans for whom such care is provided on a discretionary basis. VA officials said that when making workload projections, they consider the resources necessary to serve high-priority veterans whom VA must serve. In addition, VA’s overall policy goal for nursing home workload is to keep nursing home workload consistent with recent experience, as VA focuses on expanding noninstitutional care services in order to provide long-term care in the least restrictive and most clinically appropriate settings. The nursing home workload for veterans whom VA serves on a discretionary basis is contingent on the amount of care needed to serve those veterans whom VA must serve by law. VA generally projects unit cost for nursing home care by calculating unit-cost increases observed from recent experience and then using this information to project future unit costs. For example, VA used the unit-cost increases from fiscal year 2008 to fiscal year 2009 and applied this percentage increase to project nursing home unit-cost estimates for fiscal year 2011, except for services delivered through community nursing homes. VA officials said they began using recent experience as a basis to estimate unit cost for nursing home care in response to a recommendation that we made in a 2009 report. 
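The long-term care calculation in figure 3, together with the unit-cost trending described above, can be sketched as follows. All workload and cost figures are hypothetical, used only to show the arithmetic.

```python
def trend_unit_cost(prior_unit_cost: float, observed_increase: float) -> float:
    """Project a future unit cost by applying a recently observed
    percentage increase (e.g., the FY2008-to-FY2009 increase)."""
    return prior_unit_cost * (1 + observed_increase)

def long_term_care_estimate(daily_workload: float,
                            unit_cost_per_day: float,
                            days_in_year: int = 365) -> float:
    """Resources needed: workload x unit cost per day x days in the fiscal year."""
    return daily_workload * unit_cost_per_day * days_in_year

# Hypothetical: care cost $800 per day in the base year and rose 4 percent
# over the prior year; project that increase forward and cost a workload of
# 10,000 patients per day.
projected_cost = trend_unit_cost(800.0, 0.04)  # 832.0 per day
print(f"${long_term_care_estimate(10_000, projected_cost):,.0f}")
# -> $3,036,800,000
```

The same product is used for both nursing home and noninstitutional care; only the workload and unit-cost inputs differ.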
For noninstitutional care, VA’s projected workload for these services is based on VA’s policy goal of meeting, by fiscal year 2011, the noninstitutional care needs of veterans who seek such care from VA. VA projects the demand for noninstitutional care services using information about the size and demographic characteristics of the enrolled veteran population. To meet its policy goal, VA has expanded the amount of noninstitutional care services it has provided. Specifically, VA increased workload for noninstitutional services by 34 percent from fiscal year 2008 to fiscal year 2009 and projects a 19 percent increase from fiscal year 2010 to fiscal year 2011. VA projected a smaller increase—4 percent—for its noninstitutional workload between fiscal year 2011 and fiscal year 2012. VA projects unit cost for noninstitutional care services using the same general method as for nursing home care—by calculating unit-cost increases observed from recent experience and then using this information to project future unit costs. For some services, however, VA experienced decreases in unit cost from fiscal year 2008 to fiscal year 2009. To develop fiscal year 2011 budget estimates for those services, VA did not rely on its recent experience and instead chose to assume a unit-cost increase between 4.52 percent and 4.60 percent, depending on the service. The remaining services for which VA developed estimates using methods other than the EHCPM made up 3 percent of VA’s health care budget estimate for fiscal year 2011. The largest of these services was the Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA), which provides health care coverage for spouses and children of veterans who are permanently and totally disabled from a service-connected disability. CHAMPVA functions similarly to traditional health insurance—most care within CHAMPVA is delivered using private-sector health care providers. 
Therefore, developing estimates of the resources needed for CHAMPVA requires factoring in utilization patterns and cost inflation that are generally outside of VA’s control. Budget estimates for CHAMPVA are developed using a formula that computes the predicted number of users and costs per-member per-year. Since 2004, VA’s Health Administration Center, which oversees administration of CHAMPVA, has worked with VA’s actuarial consultant to generate projections of CHAMPVA users that incorporate changes related to the population of disabled veterans and projections of expected increases and decreases in the CHAMPVA-eligible population. More recently, the Health Administration Center and VA’s actuarial consultant added projections of cost per-member per-year. These costs are calculated by dividing the most current fiscal year data on total CHAMPVA expenditures by the number of actual users. Trends are then incorporated to predict the future costs per-member per-year, which is multiplied by projections of the number of CHAMPVA users to develop CHAMPVA budget estimates. VA Also Incorporates Estimates of Resources for Initiatives Proposed by the Secretary or the President VA also incorporates into its budget estimate the amount of resources needed for health-care-related initiatives proposed by the Secretary or the President. For fiscal year 2011, health-care-related initiatives made up 1 percent of VA’s health care budget estimate. Some initiatives can be implemented within VA’s existing authority, while other initiatives would require a change in law. These initiatives can vary from year to year depending on policy priorities. VA officials said the EHCPM can be used to estimate the resources needed for these initiatives if VA has the data necessary for the model’s estimates to be useful. If not, VA uses other estimation methods and sometimes VA receives estimates from OMB. 
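The CHAMPVA formula described earlier—per-member per-year cost computed from the most recent year's actual expenditures and users, trended forward, then multiplied by projected users—might look like the following sketch. The figures, trend factor, and function names are hypothetical assumptions, not actual CHAMPVA data.

```python
# Hypothetical sketch of the CHAMPVA per-member per-year calculation
# described in the report. All figures are illustrative.

def cost_per_member_per_year(total_expenditures: float,
                             actual_users: int) -> float:
    """Most recent fiscal year's total CHAMPVA expenditures divided by the
    number of actual users."""
    return total_expenditures / actual_users

def champva_estimate(pmpy: float, trend_factor: float,
                     projected_users: int) -> float:
    """Trend the per-member per-year cost forward, then scale by the
    projected number of users."""
    return pmpy * trend_factor * projected_users

pmpy = cost_per_member_per_year(50_000_000.0, 10_000)        # hypothetical actuals
budget = champva_estimate(pmpy, trend_factor=1.05,           # assumed 5% trend
                          projected_users=11_000)
```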
Estimates for two VA health care initiatives and two presidential health care initiatives were in the President’s fiscal year 2011 budget request for VA. For example, one VA initiative focused on expanding telehealth services for noninstitutional long-term care. The Secretary directed that VA include $40 million for this initiative in VA’s estimate. Additionally, one presidential initiative was a governmentwide emphasis on reducing operating costs associated with maintaining surplus property. For this initiative, OMB provided VA with estimates of the savings associated with reducing these operating costs. VA also developed estimates for the President’s fiscal year 2011 budget request for 11 proposed health-care-related initiatives that require a change in law. Some proposed initiatives would increase spending while others would decrease spending. For example, VA estimated that one proposed initiative to pay travel expenses for caregivers to support veterans receiving certain VA health care services would cost $16 million in fiscal year 2011. For a different proposed initiative, VA estimated savings of $325,000 in fiscal year 2011 if VA were permitted to stop reimbursing physicians and dentists for certain continuing education expenses. VA’s Health Care Budget Estimate Informs the Decision-making Process for the President’s Budget Request VA’s health care budget estimate prepared by VHA is reviewed at successively higher levels. Within the agency, the Secretary of VA reviews the health care budget estimate in the context of departmentwide priorities, including trade-offs between health care and other services. OMB considers VA’s budget submission in light of presidential priorities and needs governmentwide. VA and OMB communicate these priorities by providing guidance. One source of guidance for developing the budget request is the Secretary of VA. 
VA officials told us that each April they issue departmental guidance that may include funding targets and policy priorities. The guidance communicates the Secretary’s priorities and may identify the specific VA services to be emphasized in that year’s budget request. In addition to preparing the estimate for existing health care services, VHA may have to estimate the resources required to carry out a new initiative identified in guidance from the Secretary. VA officials said that in some years they may direct VHA to estimate the resources needed under different levels of demand for services to reflect a changing internal or external environment, such as legislative changes or economic conditions. Another source of guidance for developing the budget request is OMB, which issues OMB Circular No. A-11. It includes guidance to agencies for preparing budget submissions. According to OMB staff, additional guidance from OMB gives more specific information on priorities or target funding levels. OMB staff also stated that they maintain ongoing contact with VA officials as the budget submission is formulated to provide guidance and to keep apprised of VA budgetary concerns. The budget estimate for health care services is presented in different ways for review and decision making. VA must present the budget estimate in the appropriations accounts’ structure used by Congress, but the budget estimate is also shown by broad service categories, such as acute, mental health, and institutional long-term care, and the Secretary’s and President’s initiatives. For the purpose of presentation in the President’s Budget, agencies start from the most recently enacted appropriations, even if the President proposes changes to the structure and purposes of the appropriations accounts. Congress funds VA health care services in three appropriations accounts. 
The three appropriations accounts for VA health care services are:

- Medical Services, which includes funds for health care services provided to eligible veterans and beneficiaries in VA’s medical centers, outpatient clinic facilities, contract hospitals, state homes, and outpatient programs on a fee basis;
- Medical Support and Compliance, which includes funds for management and administration of the VA health care system, including financial management; and
- Medical Facilities, which includes funds for the operation and maintenance of the VA health care system’s capital infrastructure, such as costs associated with utilities, facility repair, laundry services, and groundskeeping.

Figure 4 shows the different structures in which the fiscal year 2011 budget estimate was presented for review and decision making. Because support and compliance (administrative) costs and facility costs are not estimated separately by the EHCPM or other methods used by VA to develop its resource estimates, VHA officials said they generally distribute the total health care budget request among the three appropriations accounts based on historical spending trends. Specifically, VA uses the proportions of funding for health care services, administration, and facility costs that were obligated in the last budget year as a baseline and makes adjustments as appropriate. For example, if VA is requesting a relative increase in maintenance efforts, VA would increase the share of the budget requested for the Medical Facilities account. Funding was requested for fiscal year 2011 for the three accounts in the following proportions: Medical Services at 77 percent, Medical Support and Compliance at 11 percent, and Medical Facilities at 12 percent. Officials from VA’s Office of Budget said that they focus on changes from the previous year and review whether the changes are consistent with past trends, whether projected trends are consistent with their expectations, and whether the changes are justified. 
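The account-splitting arithmetic described above—last year's obligated proportions as a baseline, adjusted as appropriate—can be sketched as follows. The shares are the fiscal year 2011 proportions cited in the report; the total and the adjustment mechanics are illustrative assumptions, not VA's actual allocation tool.

```python
# Hypothetical sketch of distributing a total health care request across the
# three appropriations accounts by historical obligation shares.

def split_request(total_request, baseline_shares, adjustments=None):
    """Distribute a total request by baseline shares, optionally shifted by
    adjustments that must net to zero (e.g., a larger Medical Facilities
    share when increased maintenance is requested)."""
    adjustments = adjustments or {}
    shares = {acct: s + adjustments.get(acct, 0.0)
              for acct, s in baseline_shares.items()}
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {acct: total_request * s for acct, s in shares.items()}

# FY2011 proportions from the report, applied to an illustrative total of 100.
allocation = split_request(100.0, {
    "Medical Services": 0.77,
    "Medical Support and Compliance": 0.11,
    "Medical Facilities": 0.12,
})
```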
Given that VA’s Office of Budget has already reviewed the underlying assumptions in the early stages of developing the health care budget estimate, VA budget officials said that, at this point, they ensure that the budget estimate is internally consistent, the justifications are clear, and the amounts requested are reasonable. For example, the Office of Budget verifies that the resources requested are sufficient to cover the salaries for the number of employees included in the request. The Office of Budget also verifies whether the estimates for collections and new initiatives are sufficient. The Secretary of VA considers the health care budget estimate when assessing resource requirements among competing interests within VA, particularly in times of fiscal constraints. The departmentwide budget submission includes resources for the administration of veterans’ compensation and benefit programs as well as proposed investments for improving the delivery of those benefits using information technology and for construction projects at hospitals and other facilities. VA officials said the Secretary most often makes trade-offs in areas such as new initiatives and new construction. As an example, VA officials said that the Secretary may want to dedicate more resources for mental health services to reduce homelessness among veterans than the budget estimate initially provided. As a result of these trade-offs, VA’s budget request for health care could be different from the resource estimates developed using the EHCPM and other methods. VA’s Office of Budget includes supporting materials accompanying VA’s budget request that are submitted to OMB. These supporting materials include narrative statements of selected health care services, such as mental health and homeless programs. VA also submits additional data to justify VA’s request for resources. For example, VA submits detailed estimates from the EHCPM separately to OMB. 
According to VA officials, these estimates are used to communicate to decision makers how the estimated spending for services supports VA’s mission to provide health care services to veterans instead of listing all the details of health care services in the budget submission. In September of each year, VA transmits its departmentwide budget submission, including its budget estimate for health care, to OMB. OMB staff stated that they initially review VA’s assumptions, such as economic assumptions pertaining to inflation, and review cost and utilization trends used to develop the health care budget estimate. OMB staff also review the policy priorities in VA’s submission, which includes the funding request for VA health care, to verify that the President’s priorities are reflected. OMB staff stated that they also talk with VHA officials to ensure that the resources requested support the initiatives as described. Traditionally, OMB issues decisions, known as passback, to VA and other agencies in late November on the funding and policy proposals to be included in the President’s budget request. OMB staff said that they consider broader resource constraints and competing priorities of other agencies when making decisions about the level of funding for VA’s services. VA may appeal the decision before OMB finalizes the President’s budget request. The OMB decision and appeals process can result in a presidential budget request that is different from VA’s budget submission to OMB. The budget formulation process culminates with OMB preparing the accompanying documents submitted to Congress in February. Concurrently, VA prepares a congressional budget justification that provides details supporting the policy and funding decisions for the President’s budget request to Congress. Agency Comments We provided a draft of this report to VA and OMB for comment. VA and OMB provided technical comments, which we incorporated as appropriate. 
We are sending copies of this report to the Secretary of Veterans Affairs and the Director of the Office of Management and Budget, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Randall B. Williamson at (202) 512-7114 or at williamsonr@gao.gov, or Denise M. Fantone at (202) 512-6806 or at fantoned@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Data Sources for Utilization and Unit-Cost in VA’s Enrollee Health Care Projection Model, Fiscal Year 2011 [Table of data sources by service category, including skilled nursing facility (non-acute) care and prescription drugs (brand and generic), not reproduced here.] Appendix II: GAO Contacts and Staff Acknowledgments Contacts Acknowledgments In addition to the contacts named above, James C. Musselwhite and Melissa Wolf, Assistant Directors; Rashmi Agarwal; Deirdre Brown; Amber G. Edwards; Krister Friday; Lauren Grossman; Tom Moscovitch; Lisa Motley; Leah Probst; Steve Robblee; and Jessica Smith made key contributions to this report. Related GAO Products VA Health Care: Spending for and Provision of Prosthetic Items. GAO-10-935. Washington, D.C.: September 30, 2010. VA Health Care: Reporting of Spending and Workload for Mental Health Services Could Be Improved. GAO-10-570. Washington, D.C.: May 28, 2010. Continuing Resolutions: Uncertainty Limited Management Options and Increased Workload in Selected Agencies. GAO-09-879. Washington, D.C.: September 24, 2009. VA Health Care: Challenges in Budget Formulation and Issues Surrounding the Proposal for Advance Appropriations. GAO-09-664T. Washington, D.C.: April 29, 2009. VA Health Care: Challenges in Budget Formulation and Execution. GAO-09-459T. Washington, D.C.: March 12, 2009. 
VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement. GAO-09-145. Washington, D.C.: January 23, 2009. VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement. GAO-06-958. Washington, D.C.: September 20, 2006. VA Health Care: Preliminary Findings on the Department of Veterans Affairs Health Care Budget Formulation for Fiscal Years 2005 and 2006. GAO-06-430R. Washington, D.C.: February 6, 2006. VA Long-Term Care: Trends and Planning Challenges in Providing Nursing Home Care to Veterans. GAO-06-333T. Washington, D.C.: January 9, 2006. VA Long-Term Care: More Accurate Measure of Home-Based Primary Care Workload Is Needed. GAO-04-913. Washington, D.C.: September 8, 2004.
Funding for the Department of Veterans Affairs' (VA) health care is determined by Congress in the annual appropriations process. Prior to this process, VA develops a budget estimate of the resources needed to provide health care services to eligible veterans. The Veterans Health Care Budget Reform and Transparency Act of 2009 requires GAO to assess whether the funding requested for VA health care in the President's budget requests submitted to Congress in 2011, 2012, and 2013 is consistent with VA's estimates of the resources needed to provide health care services. In anticipation of these future studies, GAO was asked to obtain information on how VA prepares its health care budget estimate. In this report, GAO describes (1) how VA develops its health care budget estimate, and (2) how VA's health care budget estimate is used in the President's budget request to Congress. To conduct this work, GAO reviewed VA documents on the methods, data, and assumptions used to develop VA's health care budget estimate that informed the President's budget request for fiscal year 2011 and request for advance appropriations for fiscal year 2012. GAO also interviewed VA officials responsible for developing this estimate and staff from the Office of Management and Budget (OMB), which is responsible for overseeing the development and implementation of the federal budget. VA uses what is known as the Enrollee Health Care Projection Model (EHCPM) to develop most of its health care budget estimate and uses other methods for the remainder. Specifically, VA used the EHCPM to estimate the resources needed to meet expected demand for 61 health care services that accounted for 83 percent of VA's health care budget estimate for fiscal year 2011 and similarly for fiscal year 2012. The EHCPM's estimates for these services are based on three basic components: projected enrollment in VA health care, projected use of VA's health care services, and projected costs of providing these services. 
To make these projections, the EHCPM uses data on the use and cost of these services that reflect data from VA, Medicare, and private health insurers. The EHCPM makes a number of complex adjustments to the data to account for characteristics of VA health care and the veterans who access VA's health care services. For example, these adjustments take into account veterans' age, gender, geographic location, and reliance on VA health care services compared with other sources, such as health care services paid for by Medicare or private health insurers. VA uses other methods to develop nearly all of the remaining portion of its budget estimate for long-term care and other services, as well as initiatives proposed by the Secretary of VA or the President. Long-term care and other services accounted for 16 percent and initiatives accounted for 1 percent of VA's health care budget estimate for fiscal year 2011 and similarly for fiscal year 2012. VA's health care budget estimate is reviewed at successively higher levels. Within the agency, the Secretary of VA reviews the health care budget estimate in the context of departmentwide priorities, including trade-offs between health care and other services. The budget estimate is presented in different ways, including the appropriations accounts structure used by Congress for decision making. OMB considers VA's budget submission in light of presidential priorities and needs governmentwide. VA can appeal decisions before OMB finalizes the President's budget request to Congress. VA and OMB provided technical comments, which GAO incorporated as appropriate.
Background NextGen and SESAR Procedures Will Differ Significantly from Current Air Traffic Control Procedures NextGen and SESAR, when fully implemented, will represent a significant departure from current air traffic control procedures, in which aircraft fly over fixed, ground-based navigational aids, and pilots respond to voice commands from air traffic controllers. NextGen and SESAR envision an airspace system in which network-based information and automation optimize an aircraft’s operation in all phases of flight—from flight planning at the start to landing and taxiing to the gate at the end—to reduce delays and maximize airspace capacity, while reducing environmental impact and fuel consumption. See figure 1 for an illustration of how NextGen is envisioned to work. NextGen and SESAR envision trajectory-based operations, which would use technological advances in communications, navigation, and surveillance, some of which are still under development.

- Communications between aircraft and air traffic control would change from primarily voice mode between pilots and air traffic controllers to data communications, known as Data Comm. Prescripted e-mail-like messages would replace routine voice communications between air traffic controllers and pilots. Data communications would also enable ground systems to communicate directly with the aircraft’s flight management system.
- Navigation procedures will be based on the performance capabilities of the aircraft, meaning that appropriately equipped aircraft and flight crews will be able to select their own flight paths, within limits, and use satellites rather than existing ground-based aids for navigation. The aircraft’s navigation system would alert the crew to deviations from the planned route. Each airplane will transmit and receive precise information about its position and the position of other nearby aircraft, as well as the time at which it and others will cross key points along their paths.
- Surveillance would change with ground-based radars augmented, and gradually replaced, by a satellite-based system known as Automatic Dependent Surveillance-Broadcast (ADS-B). According to FAA, this will allow the agency to retire, over time, up to 50 percent of secondary radar and reduce associated maintenance costs.

ADS-B provides more accurate information than radar. It reports data about the location of aircraft every second, compared with up to every 12 seconds for radar, and FAA anticipates that ADS-B’s more frequent reporting, as well as the improved accuracy, integrity, and reliability of satellite signals, compared with radar, would enable controllers to safely reduce the mandatory separation between aircraft. This will increase capacity in the nation’s skies. ADS-B incorporates an aircraft-mounted transmitter and ground receiving units, each about the size of a minirefrigerator, which can be placed nearly anywhere, such as on cell phone towers or on oil rigs in the Gulf of Mexico, where radar coverage does not reach. ADS-B uses satellite signals along with aircraft avionics to transmit the aircraft’s location to ground receivers. The ground receivers then transmit that information to controller screens and aircraft cockpit displays on aircraft equipped with ADS-B avionics. The System Wide Information Management (SWIM) infrastructure would connect various networks and manage aviation-related information so that all aviation users—pilots, air traffic controllers, and aircraft dispatchers—have the same information. Collectively, these systems, in combination with others, will transform air traffic control to air traffic management. A Variety of Organizations Have Roles Supporting NextGen and SESAR FAA has the primary responsibility for developing, managing the transition to, and implementing NextGen, while the SESAR Joint Undertaking (SJU) currently manages SESAR. However, several organizations support NextGen and SESAR. See table 1. 
Within FAA, various departments share responsibility for international collaboration. FAA’s International Office conducts government-to-government interface with the EU and signs formal collaboration agreements. FAA’s Air Traffic Organization (ATO) International Office collaborates with SJU on technical issues. FAA’s Joint Planning and Development Office (JPDO) performs the long-term planning for NextGen and partners with other federal agencies. JPDO’s Global Harmonization Work Group focuses on ensuring the global interoperability of NextGen. FAA officials in the ATO International Office told us that although all three offices have different roles, relevant information is distributed among the offices. For example, the results of JPDO’s Global Harmonization Work Group are shared with ATO. Likewise, ATO has provided JPDO with information such as SESAR developments, including SESAR’s work structure and progress. United States and the EU Differ in Aviation Governance and NextGen/SESAR Management Structure The United States and EU differ in aviation governance and NextGen/SESAR management and organization. See table 2. The differences in characteristics between the United States and the EU contribute to differences in how they manage their respective modernization programs. Whereas the United States manages aviation at the federal level, the EU, with its 27 sovereign member states, and their individual regulators and service providers, must consider interoperability among its member states, as well as with NextGen. These different governing structures contribute to differing management structures for NextGen and SESAR. NextGen’s management is government-centric, meaning that FAA has the lead responsibility for NextGen development and implementation but collaborates with industry on demonstrations and garners expert advice through industry participation on advisory committees. FAA has divided NextGen into three time frames. (See table 2.) 
In the near and midterm, FAA is focusing on making the most of technologies and procedures that are already available and introducing innovations such as ADS-B. NextGen’s far-term objective is to fulfill the NextGen vision, including gate-to-gate trajectory based management. The EU has divided SESAR into three phases, but in contrast to FAA’s government-centric approach to NextGen, it has provided a participatory role for the private sector. EUROCONTROL managed the definition phase through a contract with a 30-member consortium of airlines, air navigation service providers, airports, manufacturers, and others. The definition phase ran from 2006 through 2008 and produced the European Air Traffic Management Master Plan. SJU, made up of EUROCONTROL, the European Commission, and 15 member organizations—including airport operators, air navigation service providers, manufacturers of ground and aerospace equipment, and aircraft manufacturers—is managing the development phase and following the master plan. SJU has contracts with its member organizations and issues task statements for the work to be done. U.S. companies are participating in SESAR’s development phase, either as a member organization or as an associate partner. During the development phase, scheduled to continue through 2016, new technologies and operational procedures will be developed and validated. During the deployment phase, the results of the development phase will be implemented. How the deployment phase will be managed has not been determined, according to SJU officials. FAA and the European Commission Are Working Collaboratively on Components Common to NextGen and SESAR FAA and the European Commission Agreed to Collaborate on NextGen and SESAR and to Continue Existing Collaborative Research In 2006, FAA and the European Commission signed a Memorandum of Understanding (MOU) to ensure coordination between the aviation modernization programs in the United States and the EU. 
According to FAA officials in ATO’s International Office, the primary purpose of the MOU was to allow joint participation on committees. FAA was allowed to participate as an observer at bimonthly meetings of the EU’s Industry Consultation Body. FAA attends these meetings to hear the discussion taking place with industry regarding the Single European Sky and to remain up-to-date. The EU participated as an observer in RTCA’s Air Traffic Management Advisory Committee and now participates on the NextGen Advisory Committee. Cross-participation in these meetings makes both parties aware of each other’s direction, operational plans, and solutions. Such awareness is one of the most significant enablers to developing interoperable systems, according to FAA officials. The MOU was updated in 2009 to take into account SJU’s role in the technical cooperation with FAA under the authority of the European Commission and to identify specific subjects of common interest to SESAR and NextGen. FAA and SJU officials also highlighted the Atlantic Interoperability Initiative to Reduce Emissions as an example of international collaboration. In 2007, the European Commission and FAA began collaborating to demonstrate how using NextGen/SESAR air traffic management techniques can lead to emissions and fuel savings. For example, demonstrations of the Optimized Profile Descent—a procedure whereby an aircraft descends as smoothly as possible, considering local limitations, rather than descending and leveling off in steps as is commonly done today—at Miami and Atlanta International airports saved between 40 and 60 gallons of fuel per flight and between 800 and 1,090 pounds of carbon dioxide (CO2) per flight. 
Tests at Honolulu and Anchorage International Airports showed that use of Optimized Profile Descent could save a total exceeding 8 million gallons of fuel and 167 million pounds of CO2. Under one of their joint action plans, FAA and EUROCONTROL focused on trajectory predictors—an automated decision-support tool that predicts the anticipated future path of an aircraft. According to FAA officials, trajectory prediction is a fundamental underpinning of how NextGen and SESAR plan to manage air traffic. They said that the concept and techniques for exchanging information were first diagrammed under this action plan. In 2003, the action plan team identified similarities among the many disparate trajectory predictors in use and developed a structure for a generic version. Other action plans focused on developing technologies such as data communications and ADS-B—also prerequisites for trajectory-based operations. These action plan teams typically developed annual work plans that described ongoing activities’ progress and status, as well as planned research activities for the coming year. According to progress reports, these action plan teams evaluated new technologies, proposed actions, compared strategies and plans, and commented on white papers. The teams also sought input from the air traffic management community, including airlines, aircraft and avionics manufacturers, and standards bodies and stakeholder organizations, and emphasized the need for collaboration with European research bodies. Many of the action plans continued beyond 2006, after the MOU was signed, and formed the core of the efforts to ensure interoperability of the systems and components that would make up NextGen and SESAR. For example, the action plan on ADS-B continued, with the result that ADS-B applications gained international recognition as a means to improve future air traffic management operations. 
Another action plan on safety research, started in 2003, continued with its objective to enhance safety assurance in air traffic management. According to the action plan’s documents, safety culture is one of the main threads of the action plan work program, and this focus facilitates alignment among EUROCONTROL, FAA, and the Civil Air Navigation Services Organization, which represents the interests of air navigation service providers worldwide. With the signing of a new MOC in 2011 (see following section), work under the action plans was redirected to near-term, procedural issues, while work under the MOC will focus on long-term air traffic management development. FAA and the European Commission Have Established a Structure and Governance Process for Collaboration In March 2011, FAA and the European Commission signed a new MOC that replaced the 2006 MOU (updated in 2009) and provides more specific direction on collaboration and governance as NextGen and SESAR move forward. The 2011 MOC establishes the main principles of cooperation and governance for NextGen and SESAR that were not specifically identified in the 2006 MOU and establishes a Joint Committee that is responsible for the MOC’s effective functioning and for evaluating its implementation. Additionally, the 2011 MOC provides for participation by each party’s governmental and industrial entities—FAA’s NextGen Advisory Committee and the EU’s Industry Consultation Body. Annex I of the 2011 MOC, titled SESAR-NextGen Cooperation for Global Interoperability, lays out a structure and governance process for ensuring interoperability of NextGen and SESAR’s systems and procedures (see fig. 2). According to FAA officials and SESAR documents, this structure sets the framework to ensure collaboration and provides a process by which people with decision-making authority might resolve any questions or issues that arise. 
The annex provides for a High Level Committee, co-chaired by the European Commission and FAA, which will meet at least once a year to oversee and assess the results of the work conducted under the appendixes of the annex, among other things. Annex I also provides for a Coordination Committee that is co-chaired by SJU and FAA’s ATO. According to the annex, the Coordination Committee will meet at least twice a year to monitor the progress of ongoing joint projects and activities under Annex I’s five appendixes, and will prepare reports for, and consider proposals for new work to be provided to, the High Level Committee, among other things. The Coordination Committee held its first formal meeting in May 2011, where it approved the FAA/SJU “Cooperation for Global Interoperability Management Document,” known informally as the “governance document,” which further defines roles and responsibilities under the 2011 MOC. The five appendixes in Annex I are further subdivided into 27 specific research topics, each with its own working group and coordination plan prepared by the plan leaders (see table 3). The list of research topics was developed during 2 years of meetings among FAA, SJU, EUROCONTROL, and the European Commission. Work on some of the topics had begun under the 2006 MOU and is continuing under the new structure. According to FAA officials, many of the experts who have worked on these topics in past collaborative efforts will continue their work under the MOC. The topics vary in complexity and priority, and more topics may be added over time. The working groups meet as necessary, comply with instructions given by the Coordination Committee, and report regularly to the Coordination Committee. As of July 2011, FAA and SJU had approved and signed the five appendixes.
FAA and SJU had also assigned priorities and drafted the scope of work for all of the coordination plans and were working together to develop the specific work tasks for each of the higher-priority coordination plans. RTCA and EUROCAE have formed jointly led and staffed special committees to develop standards for the new technology that NextGen and SESAR will employ and help ensure interoperability in technologies that may differ in some ways between the two systems. For example, Special Committee 214 was formed in March 2007 to develop guidance material to define the safety, performance, and interoperability requirements for air traffic services supported by data communications. Similar joint special committees are addressing topics such as terrain and airport databases and enhanced flight vision and synthetic vision systems. These standards will allow equipment manufacturers to offer solutions that meet both NextGen and SESAR requirements, increasing the size of their markets and driving down costs. According to FAA officials, the most significant work to help ensure interoperability occurs in these special committees. FAA and SJU are working with ICAO to facilitate interoperability with other countries beyond the United States and the EU. ICAO is developing a Global Aviation Block Upgrade initiative that would identify common capabilities and operational improvements in NextGen and SESAR, as well as in a similar program in Japan known as Collaborative Actions for Renovation of Air Traffic Systems (CARATS). ICAO has led a team, composed of FAA, SJU, Japanese, and industry stakeholders to group these improvements into a series of aviation system block upgrades to guide the international aviation community in modernizing their air traffic management systems in a coordinated manner and, in turn, facilitate global interoperability. 
In September 2011, ICAO unveiled a first draft of the block upgrade initiative and obtained feedback at its Global Air Navigation Industry Symposium. ICAO plans to make revisions based on the feedback and, at its 12th Air Navigation Conference in November 2012, incorporate the block upgrades in the Global Air Navigation Plan, which all ICAO member countries use to develop their aviation systems. FAA officials in the ATO’s International Office, as well as those responsible for NextGen planning and integration and for Data Comm program management, told us that because many other countries do not have the resources to develop their own systems or procedures, they will readily adopt the operational improvements and procedures resulting from this effort. An SJU official told us that the ICAO Block Upgrade initiative has helped strengthen the linkage between NextGen and SESAR, as the United States and EU have worked together to feed the block upgrade process. Additionally, the tight deadlines imposed by this initiative provided the impetus for continued U.S.-EU interaction in the working groups, on an informal basis, before the formal collaborative procedures were established by the 2011 MOC.

FAA’s Efforts Generally Mirror Several Effective Collaborative Practices, but Mitigating Stakeholder Skepticism Is a Challenge

FAA’s Efforts Generally Mirror Several Effective Collaborative Practices

Several Key Practices Can Help Enhance and Sustain Collaboration

Effective collaboration increases the likelihood that organizations can communicate substantive information, reach joint agreements, and implement those agreements. Organizations can use their strategic and annual performance plans as tools to drive collaboration with other agencies and partners and establish complementary goals and strategies for achieving results. In our past work, we identified key practices that can help enhance and sustain collaborative efforts among U.S. agencies.
Based on our review of the academic literature about effective collaborative practices, we have determined that the following practices also apply to international collaboration:

- defining and articulating a common outcome;
- establishing mutually reinforcing or joint strategies to achieve the outcome and establishing compatible policies, procedures, and other means to operate across agency boundaries;
- agreeing upon respective roles and responsibilities;
- identifying and addressing needs by leveraging resources;
- developing mechanisms to monitor, evaluate, and report the results of collaborative efforts;
- reinforcing individual accountability for collaborative efforts through agency performance management systems; and
- reinforcing agency accountability for collaborative efforts through agency plans and reports.

As noted previously, FAA and EUROCONTROL have collaborated many times in the past to achieve common outcomes under action plans. In 2011, FAA and the EU reaffirmed their agreement that interoperability is essential by establishing the MOC. Both parties recognize that it is in their mutual interest that aircraft be able to operate seamlessly as they fly from one system to the other. Without interoperability, airlines might have to install a second suite of equipment on their aircraft to operate in NextGen and SESAR airspaces. Furthermore, pilots would have to learn two different operating procedures, which could degrade safety. Additionally, if FAA or SJU did not implement certain aspects of NextGen or SESAR, they would not receive the associated benefits, such as fuel savings that could result from more efficient air traffic management procedures. As we have previously reported, having a clear and compelling rationale to work together—such as that described above—is a key factor in successful collaborations. Agencies can overcome significant differences when such a rationale and commitment exist.
Our prior work also found that agencies that articulate their agreements in formal documents, such as memoranda of cooperation, can strengthen their commitment to working collaboratively. FAA and SJU officials we interviewed, as well as industry stakeholders representing organized labor, airlines, and airframe and aerospace equipment manufacturing companies, generally agreed that the 2011 MOC is a positive development toward ensuring the interoperability of NextGen and SESAR, and it shows how the two sides are going to work together to achieve that common outcome. Annex I to the 2011 MOC represents the overall joint strategy under which FAA and SJU will work together to ensure the interoperability of NextGen and SESAR and establishes a means for FAA and SJU to operate across agency boundaries. It builds off the cooperation framework established in earlier agreements and FAA’s long-standing cooperative relationship with EUROCONTROL. The 2011 MOC defines the terms and conditions for mutual cooperation and sets forth the procedures by which FAA and SJU can establish cooperative research and development activities in any civil aviation issue. It also contains a larger list of areas of cooperation between NextGen and SESAR than the MOU that it replaced. As mentioned previously, FAA and SJU have identified the specific areas for coordination and are in the process of developing the coordination plans that will serve as the joint strategies for how both sides will collaborate on research and development for those areas. FAA and SJU have assigned priorities, such as “immediate” or “on hold,” to these coordination plans. Those areas that do not have an immediate need for harmonization are deferred in favor of those with a more urgent need, such as data communications.
The annex also establishes other means for FAA officials to work with their European counterparts such as allowing each side to participate in the other’s consultative bodies and allowing industry stakeholders to contribute to each other’s work programs and access information on, and results of, equivalent research and development programs and projects. Joint strategies are also evident in the RTCA/EUROCAE special committees’ terms of reference, which govern how the two standards organizations will work together, and include the scope, deliverables and their envisioned use, and due dates. Joint strategies, such as those mentioned above, can help agencies align their activities, core processes, and resources to accomplish their common outcome. Work in the RTCA/EUROCAE special committees has already helped align FAA’s and SJU’s activities. For instance, the two sides have resolved a difference in midterm plans that could have jeopardized the interoperability of NextGen and SESAR’s Data Comm systems. FAA’s plans called for implementing an interim communications system as a step toward this future system. EUROCAE working group members, airframe manufacturers, and SJU did not support this interim step for several reasons. For instance, Boeing stated that such a step was not promoting harmonization because requiring multiple steps would make implementation more costly, and higher costs could jeopardize the implementation of the final harmonized system. After both sides discussed the issue in RTCA/EUROCAE special committee meetings, FAA decided to drop this interim step and instead move toward the same future system as the EU. Our work has shown that addressing the compatibility of standards, policies, and procedures that will be used in the collaborative effort can facilitate collaboration. However, although FAA and SJU worked out technical differences in their data communications implementation plans, their timelines for implementing Data Comm still differ. 
SJU officials told us that moving forward on Data Comm is SESAR’s biggest challenge because the United States and Europe have differing time frames for implementation. SJU would like to see Data Comm implemented by 2018, while a senior FAA official responsible for communications believes that it will take until 2023 at the earliest. SJU officials hope that a compromise will be reached and noted that discussions between the two sides are continuing. As mentioned above, FAA and SJU have made Data Comm a high-priority area for collaboration and are developing a cooperation plan for this area. While the implications of the timeline difference are unclear, officials from FAA’s Data Comm office and SJU emphasized the importance of continuing communications to resolve this issue. As we have previously reported, frequent communication among collaborating agencies can enable a cohesive working relationship that can lead to the mutual trust required to enhance and sustain the collaborative effort, can facilitate working across agency boundaries, and can prevent misunderstandings between the two sides. Through the 2011 MOC and related documents, FAA and SJU have defined their roles and responsibilities for NextGen and SESAR collaboration, including how the collaborative effort will be led. For instance, the MOC describes the governance and management responsibilities of the High-Level and Coordination Committees and the working groups. The management document further defines roles and responsibilities for FAA and SJU, the committees, working group leaders, and coordination plan leaders. Coordination plans describe the scope, objectives, timescale, and processes for resolving issues of specific collaboration areas and formalize coordination between the parties under the framework of the MOC. A U.S. 
avionics manufacturer official, who is familiar with the MOC, commented that it is an improvement over past agreements that provided for periodic meetings but did not specify any outcomes. In contrast, this official said the 2011 MOC is more oriented toward projects and outcomes, provides motivation for decisions at the project level, and drives development toward demonstrations. SJU officials with whom we spoke noted that the 2011 MOC’s structure for meetings has helped the SJU organize and set priorities for its work. According to our prior work, collaborating agencies that work together to define and agree on their respective roles and responsibilities, as FAA, the EU, and SJU have done through the 2011 MOC and related documents, can clarify who will do what, organize their joint and individual efforts, and facilitate decision making. The 2011 MOC lays out a structure of jointly led and staffed coordination plans through which each side can leverage the resources of the other. Such a structure mirrors an effective collaboration practice that, as we have previously reported, can help collaborating agencies access resources that would not be available if the two were working separately. According to FAA officials, a central purpose of both the 2011 MOC, and the older MOC with EUROCONTROL, is to leverage the resources of aviation experts in the United States and EU. By conducting joint research that leverages the expertise of FAA’s EU counterparts, FAA hopes to reduce the resources that each organization would otherwise require if it were to develop a solution in isolation. FAA officials noted that U.S. and EU experts have prior experience leveraging each other’s research so that the work goes further. For example, one action plan states that its primary purpose is to minimize duplication of effort, so as to reduce costs and time to deployment. These same officials also pointed to saving resources when U.S. 
and EU experts worked together on a highly technical task under an action plan. The joint RTCA/EUROCAE special committees also leverage the knowledge of officials representing NextGen, SESAR, and industry interests. For instance, representatives from FAA and EUROCONTROL, avionics manufacturers, U.S. Department of Transportation, and organized labor, such as the National Air Traffic Controllers Association, participate in the joint RTCA/EUROCAE special committee on Data Comm. European officials involved in developing standards, and U.S. and European officials involved in manufacturing aerospace equipment and airframes, noted that companies that traditionally compete for sales, such as Boeing and Airbus, or Raytheon and Thales (avionics manufacturers), work together in these joint committees to develop standards for air traffic management systems. These U.S. and EU companies want to operate in each other’s markets and believe that they can save resources if the standards are harmonized. U.S. stakeholders representing aerospace industries noted that the aviation industry, in general, is becoming much more internationalized than in the past, and nation-based solutions are becoming less important. In our prior work, we found that agencies that create a means to monitor, evaluate, and report the results of collaborative efforts can better identify areas for improvement. Annex I of the 2011 MOC mirrors this practice in that it establishes the framework for how the United States and SJU will oversee NextGen/SESAR coordination efforts. Progress and issues are reported upward from the coordination-plan leaders to the working group leaders. Working group leaders are responsible for maintaining a regular dialogue with their coordination plan leaders in order to address potential issues and risks of misunderstanding. Issues not resolved at this level are referred to the Coordination Committee. 
Based on their experience with the similarly structured MOC with EUROCONTROL, FAA officials we interviewed anticipate that the working groups will resolve most technical issues, and the Coordination Committee will address any significant items that cannot be resolved in the working groups. As these officials noted, most of the monitoring and evaluation work occurs at the Coordination Committee level. The coordination plan leaders will jointly report progress within their respective coordination plans twice a year to their working group leaders, who will report on the status of their activities at the Coordination Committee meetings. The Coordination Committee is to examine the progress made on the coordination plans, which contain issues to be addressed, actions to be taken, target dates, current status information, and a statement of the consequences if the issue is not resolved. This committee is to provide the working group leaders with support and guidance, and to ensure that adequate planning and resource allocation takes place for each working group. It is also responsible for examining any issues raised by the working group leaders, such as unclear situations, or activities that require a specific Coordination Committee action. If necessary, issues not resolved through the Coordination Committee are raised to the High Level Committee. Industry stakeholders aware of the governance structure in the 2011 MOC told us that it is a good sign that FAA and SJU have recognized the need for oversight over collaborative efforts because such oversight will help ensure that the systems are not developed in isolation. Previous agreements also had provisions for monitoring results. FAA and EUROCONTROL have a Coordination Committee under the MOC between the two organizations to oversee the FAA/EUROCONTROL action plans. The 2006 MOU called for FAA and the European Commission to try to meet at least every 12 months to review the functioning of the MOU. 
FAA’s performance management system is designed to incorporate all of the responsibilities and duties of each staff member, according to FAA officials we interviewed. This means that if a person is involved in the harmonization work under the 2011 MOC, his or her duties are covered under his or her performance plan and become part of his or her annual review. Additionally, FAA officials noted that the coordination plan leaders will be held accountable for the actions and deliverables, described and agreed to within their respective coordination plans, and will have to report results of their efforts to the Coordination Committee. We have previously reported that high-performing organizations use their performance management systems to strengthen accountability for results by placing greater emphasis on fostering the necessary collaboration both within and across organizational boundaries to achieve results. FAA has not externally reported its collaborative efforts with EU entities in public documents, such as its strategic plan or performance and accountability reports. As previously discussed, FAA has a long history of collaboration with the EU, but it has not detailed these efforts or outcomes in these publications. For instance, FAA’s strategic plan for 2009 through 2013, known as the Flight Plan, lacks any detailed information on these efforts. Likewise, FAA’s 2010 Performance and Accountability Report does not discuss FAA’s collaborative efforts with EU entities. In our past work, we have found that public reporting of results can reinforce agency accountability for collaboration. To FAA’s credit, its NextGen Implementation Plan, issued in March 2011, does state that the United States and EU have agreed to enter into a new MOC to advance the interoperability of NextGen and SESAR technologies and that FAA and SJU are collaborating on air traffic management research, development, and validation for global interoperability.
However, the plan does not identify goals and strategies to achieve this interoperability, such as the structure and governance for ensuring interoperability outlined in the 2011 MOC. Stakeholders representing U.S. airlines, the U.S. aviation industry, and European avionics manufacturers told us that they were aware that work was progressing to ensure the interoperability of systems, but they were not aware of specific details. For example, stakeholders in the aerospace equipment industry expressed concerns about the differences in NextGen and SESAR’s Data Comm implementation timelines but could not say whether the collaborative structure of the 2011 MOC could help resolve these differences because they were not familiar with the details of the MOC’s structure and governance. Providing such information in these plans or other public documents would provide industry stakeholders with more details of the steps that FAA and SJU are taking toward NextGen/SESAR interoperability and would reinforce FAA’s accountability for achieving them.

Mitigating Stakeholder Skepticism Is a Challenge

Lack of Details on Collaborative Efforts Contributes to Stakeholder Skepticism about NextGen and SESAR Benefits

Some stakeholders we interviewed on both sides of the Atlantic expressed skepticism about whether or when the future benefits of NextGen and SESAR will be realized, echoing concerns that have been raised in the past. We have reported on stakeholder concerns about FAA’s not following through with its NextGen efforts, which made airlines hesitant to invest in new equipment. This hesitancy arose after an airline equipped some of its aircraft with a then-new Data Comm system, but because of funding cuts, among other things, FAA canceled the program, and the airline could not use the system.
The program’s cancellation contributed to widespread skepticism about FAA’s commitment to follow through with its plans, and that skepticism persists today among some of the stakeholders with whom we spoke. In Europe, an air navigation service provider representative said that experiences such as FAA’s canceling the earlier Data Comm program have led airlines to take a cynical view of promised benefits. He noted, for example, that the Atlantic Interoperability Initiative to Reduce Emissions may demonstrate benefits, but these benefits are not realized when landings are delayed at congested airports. He said that because industry has not realized many promised benefits from past efforts, there is skepticism about what today’s programs will produce. Similarly, a U.S. air freight transportation stakeholder pointed out that standards are now being implemented to support technologies designed to provide more distant benefits, but there is no guarantee that FAA will implement those technologies. Airline confidence that there will be NextGen/SESAR benefits over the long term is an important element in NextGen/SESAR implementation. FAA and SJU have been wrestling with airlines’ hesitancy to equip with NextGen/SESAR technologies because some of the key benefits, such as increased capacity and more direct, fuel-saving routing, will not be realized until a critical mass of equipped aircraft exists. Because the first airlines to equip with the new technologies will not realize immediate benefits, it is difficult for an airline to make a business case showing that the near-term benefits of equipping will outweigh the cost. Our previous work has shown that agencies such as FAA can demonstrate their commitment to the collaborative process—a key element in NextGen and SESAR’s success—by using their strategic and annual performance plans as tools to drive collaboration.
To its credit, FAA has briefed the EU’s Industry Consultation Body, an industry group composed of all European aviation stakeholders, on its collaborative efforts and has made presentations in a number of aviation forums, including the Air Traffic Control Global Conference, RTCA’s 2011 Annual Symposium, and a subcommittee of the NextGen Advisory Committee. Now that the 2011 MOC has been signed, FAA has an additional opportunity to demonstrate its commitment to the collaborative effort by detailing the collaborative framework provided in the MOC. Such reporting could help reduce stakeholders’ skepticism and airlines’ hesitancy to equip with NextGen technology. Efforts to reduce the federal debt could decrease the funding available to FAA for both collaboration and NextGen system development, potentially slowing the schedule for harmonization and adding to stakeholders’ skepticism. According to an action plan team report, travel restrictions would cause a 6- to 9-month delay. To reduce travel costs, action plan teams have endeavored to schedule their meetings to coincide with other meetings, and officials are making use of technological substitutes for travel, such as Webex. However, a EUROCONTROL official said that he does not consider these virtual meetings to be as effective as face-to-face interactions, and an official representing European air navigation service providers told us that overuse of this technology could impede harmonization and result in higher costs over the long run. Cuts in system development budgets could also delay the schedule for harmonization and the realization of interoperability benefits. FAA officials told us that they normally absorb funding cuts by eliminating or delaying programs, with funding cuts taking precedence over previously agreed upon schedules, even those previously coordinated with Europe.
For example, FAA officials responsible for navigation systems told us that FAA is restructuring the plans for its ground-based augmentation system (GBAS) because of potential funding reductions. These officials said that FAA might have to stop its work on GBAS while SESAR continues its GBAS development, with the result that SESAR may have an operational GBAS, while FAA does not. A delay in implementing GBAS would require FAA to continue using the legacy Instrument Landing System, which does not provide the benefits that GBAS would provide, according to these officials. Such a situation could further fuel stakeholder skepticism about whether FAA will follow through with its commitment to implementing NextGen, and in turn, increase airlines’ hesitancy to equip with NextGen technologies. Providing information about the ramifications of budget proposals is important to help congressional decision makers anticipate the effects of their decisions and to manage stakeholders’ expectations. In the past, we found that when FAA was required to cut its budget in line with expected funding, it did not inform decision makers about the implications of the cuts, including the rationale for proposed trade-offs, and the effects of cutting one program on related interdependent programs. In addition, FAA did not report on the impact of cuts on air traffic control modernization, including both the delayed benefits and the increased costs of maintaining legacy systems longer than originally planned. We recommended that FAA annually report this information to Congress, as well as the potential effects of any budget or schedule slippages on the overall transition to NextGen. In response, FAA established a new appendix to its Capital Investment Plan, which FAA provides annually to Congress and to the public over the Internet. The appendix includes each acquisition’s original and current budget and schedule, as well as the reasons for changes, as we recommended. 
It Is Too Early to Judge the 2011 MOC’s Effect on Ensuring Interoperability

While the 2011 MOC follows several of the key practices that we have found can help to enhance and sustain collaborative efforts, it is still in the early stages of implementation. During the spring and summer of 2011, FAA and SJU were implementing the various pieces of the MOC and Annex I, such as developing coordination plans and appendixes. Although meetings or actions will not be considered formal until these elements are approved, FAA and SJU officials continue meeting informally to address technical issues. Because the components of the MOC have not yet been put into action, we were unable to judge its effectiveness in facilitating collaboration toward interoperability. The real test of the MOC’s effectiveness will come when NextGen and SESAR move toward final decisions about implementing solutions and system components. In the past, FAA and Europe jointly developed systems that were either not implemented or were implemented differently by each side, such as early efforts at developing harmonized Data Comm systems. The structure of the 2011 MOC is designed to prevent such results in the future. However, the absence of effective collaborative practices does not guarantee failure, nor does their presence ensure success.

Conclusions

The continuing skepticism among industry stakeholders about FAA’s commitment to follow through on its plans elevates the importance of providing these stakeholders with more detailed information on the agency’s efforts toward interoperability and, in particular, on the structure and processes laid out in the 2011 MOC’s Annex I. These details could allow stakeholders to judge for themselves whether interoperability efforts are moving ahead deliberately, as planned, and provide assurances that FAA is serious about collaborating on interoperability and implementing NextGen.
Providing this assurance could help to mitigate stakeholders’ skepticism about whether or when NextGen and SESAR benefits will be realized and alleviate airlines’ hesitancy to equip with new technology. As Congress works to reduce the federal debt, we believe that it will be important for FAA to provide current information on how budget decisions will affect the progress of NextGen, as well as for stakeholders to understand how any changes in planned funding will affect their realization of NextGen benefits. Because we have previously recommended that FAA provide such information, and FAA has recently begun to implement our recommendations, we are making no further related recommendation in this report.

Recommendation for Executive Action

To better inform aviation stakeholders of efforts toward interoperability and to improve accountability for, and the credibility of, such efforts, we recommend that the Secretary of Transportation direct the FAA Administrator to publicly provide more details on the efforts FAA has taken and planned toward NextGen/SESAR interoperability, such as through strategic plans, performance reports, or other means.

Agency Comments

We provided a copy of this report to the Department of Transportation and other interested parties for review and comment. The Department of Transportation agreed to consider our recommendation. The Department of Transportation and the European Commission provided technical comments, which we incorporated as appropriate. As agreed with your offices, we plan no further distribution until 20 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Administrator of the Federal Aviation Administration, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Scope and Methodology To identify and understand the efforts that the Federal Aviation Administration (FAA) has taken to ensure the interoperability of the Next Generation Air Transportation System (NextGen) with the Single European Sky Air Traffic Management Research (SESAR) programme, we reviewed key documents and conducted semistructured interviews with FAA officials and aviation stakeholders. Specifically, we reviewed agreements between FAA and the European Union (EU) concerning collaborative research on air traffic management and other documents such as the NextGen Implementation Plan and the European Air Traffic Management Master Plan, which provide details on NextGen and SESAR programs, and reports comparing NextGen and SESAR concepts and avionics road maps. To gain perspective on FAA’s collaborative efforts, we developed 29 questions spanning four topics: (1) past harmonization efforts and outcomes, (2) the organization of current harmonization efforts, (3) the 2011 Memorandum of Cooperation, and (4) harmonization factors. FAA’s Air Traffic Organization (ATO) provided written responses to these questions, and we met with officials from ATO’s International Office and from FAA’s Joint Planning and Development Office to obtain clarification on their responses. To understand the nature and outcomes of past joint research efforts with the EU related to NextGen and SESAR, we reviewed the research topics of 18 action plan teams to identify those that dealt with topics most closely related to the central operational concepts of NextGen and SESAR. 
We reviewed documents such as the annual work plans and status reports spanning 2004 to 2010 for the 4 action plan teams that we identified as meeting our selection criterion. To understand how key NextGen and SESAR programs must interoperate, we interviewed officials in FAA’s offices for NextGen Planning and Integration, Navigation Services, Communications, and Surveillance. To understand the nature of collaborative efforts between FAA and European aviation experts, we met with key officials from four RTCA special committees that are working jointly with the European Organisation for Civil Aviation Equipment (EUROCAE) to develop performance standards for NextGen and SESAR equipment. To understand NextGen and SESAR interoperability in a global context, we obtained a briefing from a representative of the International Civil Aviation Organization (ICAO). We also conducted a series of interviews with EU officials in Brussels, Belgium and Paris, France to obtain their perspectives on FAA’s efforts to ensure NextGen/SESAR interoperability. Specifically, we met with high-ranking officials at the European Commission; the SESAR Joint Undertaking (SJU); European Organisation for the Safety of Air Navigation (EUROCONTROL); and EUROCAE. We conducted a telephone interview with a representative of the European Aviation Safety Agency (EASA). We also obtained perspectives from high-level U.S. and European stakeholder associations representing airlines, airports, as well as airframe and equipment manufacturers. In Washington, D.C., we visited the Aerospace Industries Association (AIA), to meet with officials from AIA’s Civil Aviation Infrastructure, International Affairs, and Standardization Offices. Officials from Raytheon’s Civil Programs and Rockwell Collins’ Strategic Initiatives also participated in the AIA meeting. At Honeywell, we met with an official with Aerospace Regulatory Affairs. 
At the Air Transport Association, we met with officials within Legislative and Regulatory Policy, Airspace Management, and Operations. At the International Air Transport Association, we met with officials within Infrastructure Implementation and Airports; Legislative Affairs North America; Safety, Operations, and Infrastructure for the Americas; and Safety, Operations, and Infrastructure in Europe. At Airbus’ Washington, D.C., office, we met with officials within Engineering, Airbus Americas; Safety and Technical Affairs; Government Relations; and Airbus Prosky. We also met with officials from Airports Council International. We conducted telephone interviews with officials from the National Business Aviation Association; FedEx; the International Federation of Air Traffic Controllers’ Associations (IFATCA); the International Coordinating Council of Aerospace Industries Associations; and a Professor of Aeronautics and Astronautics. To obtain perspectives of European stakeholders, we visited the Aerospace and Defense Industries Association of Europe to meet with an official from Air Transport and an official representing Dassault Aviation, Direction Generale Technique, and we met with a representative of EU Air Navigation Service Providers. We conducted telephone interviews with officials from the Industry Consultation Body (ICB) and the Civil Air Navigation Services Organization (CANSO). To determine how FAA’s collaborative efforts with the EU compare with effective interagency collaborative practices, we compared FAA’s collaborative efforts, as documented in status reports of action plan teams and in the 2011 Memorandum of Cooperation, with key practices that we have previously identified in effective interagency collaborations. We combined two of the practices into one due to their similarities, resulting in the seven key practices that we used to conduct our comparative analysis. 
Prior to deciding on the seven key practices, we conducted a literature search of peer-reviewed journal articles published between 2006 and 2011 to identify studies on effective practices for interagency or international collaboration. We searched multiple databases, such as ProQuest, Academic OneFile, and EconLit, using search terms such as collaboration, cooperation, and coordination combined with the terms interagency, successful, and effective. This search initially returned 428 results. After reviewing the citations for relevance and eliminating duplicates, we were left with 37 citations, from which we selected studies dealing explicitly with effective practices for collaboration. Based on our review of these articles, we identified effective practices for collaboration and compared these practices with those from our prior work, concluding that the practices we had previously identified were (1) consistent with the academic literature on interagency collaboration and (2) also applicable to international collaboration. We conducted this performance audit from January to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Maria Edelstein, Assistant Director; Nabajyoti Barkakati; Lauren Calhoun; Elizabeth Curda; Pamela Davidson; Leia Dickerson; Colin Fallon; Jeffrey Heit; Edmond Menoche; Joshua Ormond; Taylor Reeves; and Maria Stattel made key contributions to this report.
The Federal Aviation Administration (FAA) is leading development of the Next Generation Air Transportation System (NextGen), which will transform the current radar-based air traffic control system into a satellite-based system. At the same time, the European Union (EU) is developing a similar transformation effort, known as the Single European Sky Air Traffic Management Research (SESAR) programme. Interoperable NextGen and SESAR systems and procedures will be important for aircraft to seamlessly transition from one system to the other. As requested, this report discusses (1) the efforts that FAA has taken to ensure the interoperability of NextGen with SESAR and (2) how those efforts compare with effective interagency collaboration practices. To address these issues, GAO reviewed agreements between the U.S. and the EU concerning collaborative research on air traffic management and documents related to NextGen and SESAR; reviewed the literature on effective collaboration; and interviewed FAA and EU officials. FAA and the EU are working collaboratively toward ensuring interoperability as they modernize their air traffic control systems (NextGen and SESAR) and are generally using effective collaborative practices, but mitigating stakeholder skepticism about realizing NextGen/SESAR benefits will be a challenge. FAA-EU collaborative efforts predate NextGen and SESAR and helped establish some of these systems’ central concepts. In March 2011, FAA and the EU signed a new agreement that established a formal collaborative structure for NextGen and SESAR. FAA is generally following collaborative practices that we have observed in successful interagency collaborations, but some U.S. and EU stakeholders expressed skepticism about whether NextGen’s and SESAR’s benefits will ever be realized. FAA could reduce stakeholder skepticism by providing, in its public documents, details on the new structure for collaboration and governance with the EU. 
FAA and the EU are working collaboratively toward Next Generation Air Transportation System (NextGen) and Single European Sky Air Traffic Management Research (SESAR) interoperability. In 2006, FAA and the European Commission established a Memorandum of Understanding (MOU) that allowed reciprocal participation in meetings, which provided each with an awareness of the other’s plans. The MOU also continued a long-standing agreement that fostered collaborative research and helped develop some of the central concepts of NextGen and SESAR, such as data communications and satellite-based surveillance. Additionally, FAA and the EU conducted demonstrations of NextGen/SESAR procedures and technologies that produced useful results at the airports involved in the demonstrations. In March 2011, FAA and the EU signed a separate Memorandum of Cooperation (MOC) that established a formal collaborative structure for NextGen and SESAR. Outside of formal agreements, U.S. and EU standards bodies have formed joint committees to develop common standards for NextGen and SESAR systems. Additionally, FAA and the EU are working with an international standards organization to facilitate global interoperability.
Background Charter schools are public schools established under contracts that grant them greater levels of autonomy from certain state and local laws and regulations in exchange for agreeing to meet certain student performance goals. D.C. charter schools must comply with select laws, including those pertaining to special education, civil rights, and health and safety conditions. In addition, charter schools are accountable for their educational and financial performance, including the testing requirements under the Elementary and Secondary Education Act of 1965, as amended (ESEA). A wide range of individuals or groups, including parents, educators, nonprofit organizations, and universities, may apply to create a charter school. Charter schools in the District are nonprofit organizations and, like other nonprofits, are governed by a board of trustees. The board of trustees, which is initially selected by the school founders, oversees compliance with laws, financial management, contracts with external parties, and other school policies. School board trustees are also responsible for identifying existing and potential risks facing the charter school and taking steps to reduce or eliminate these risks. Charters to operate a school are authorized by various bodies, and may include local school districts, municipal governments, or special chartering boards. In 1996, Congress passed the District of Columbia School Reform Act of 1995 (School Reform Act), creating PCSB as a chartering authority. PCSB was established with the purpose of approving, overseeing, renewing, and revoking charters. After granting charters to schools, PCSB is responsible for monitoring charter schools’ academic achievement, operations, and compliance with applicable laws. 
While the Mayor and Chancellor of the District of Columbia Public Schools (DCPS) oversee traditional public schools, PCSB is responsible for holding the District’s charter schools accountable for academic results and compliance with applicable laws. PCSB is composed of seven unpaid board members with expertise relevant to charter school operation and approximately 25 employees who implement the board’s policies and oversee charter schools. Under D.C. law, the seven-member board is appointed by the Mayor, and members of PCSB may serve up to two 4-year terms. To support its operations, PCSB receives local funds through the annual D.C. Appropriations Act. PCSB also receives administrative fees from charter schools based on the number of students enrolled as well as revenue from grants. PCSB’s largest expenditures are for its personnel and program-related costs, such as technology upgrades and charter school reviews that are conducted, in part, by consultants (see table 1). The D.C. School Reform Act allows PCSB to grant up to 10 charters per year. Each charter remains in force for 15 years and may be renewed an unlimited number of times. PCSB is required to review each charter at least once every 5 years to determine whether the charter should be revoked. Each year PCSB is required to submit an annual report to the Mayor, the District of Columbia Council, the U.S. Secretary of Education, the appropriate congressional committees, and others that includes information on charter renewals, revocations, and other actions related to public charter schools. A total of 76 charter schools have opened in the District since BOE and PCSB began chartering schools in 1996 and 1997, respectively. However, between 1998 and 2010, 24 charter schools closed, many for fiscal mismanagement discovered through PCSB monitoring. 
As of the 2010-11 school year, 52 charter schools across 93 campuses are in operation, serving over 29,000 students at all education levels, including early childhood and adult education. Charter schools in the District represent varied instructional and academic models. For example, some schools have a particular curricular emphasis, such as math and science, art, or foreign language, while other charter schools focus on specific populations, such as students with learning disabilities, students who have dropped out or are at risk of doing so, youth who have been involved in the criminal justice system, and adults. In addition, one charter school is a college preparatory boarding school. See appendix I for more information on D.C. charter school characteristics. Unlike traditional public schools, which are generally part of a larger local educational agency (LEA), or school district, each D.C. charter school operates as its own LEA for most purposes. As a result, charter schools are responsible for a wide range of functions associated with being a local school district, such as applying for certain federal grants and acquiring and maintaining facilities. Charter schools may operate in a variety of facilities, such as surplus D.C. school buildings, shared spaces with other schools, and converted commercial buildings, including warehouses. However, public charter schools in D.C.—like charter schools across the nation—face challenges in acquiring facilities and funding facilities-related projects. GAO has previously reported that charter schools consistently encountered problems obtaining cost-effective and appropriate facilities. The District provides various forms of assistance to charter schools for facilities, including preference in leasing or purchasing former D.C. school buildings. The District prefers to lease rather than sell these buildings to charter schools so that they remain assets to D.C. residents. 
Although PCSB Operates as an Independent Agency, It Is Subject to Performance Hearings and Financial Oversight While the Mayor appoints members to the board, PCSB functions as an independent agency within D.C. government. As such, it operates outside the policies and direction of the Mayor and outside DCPS and the Chancellor’s purview (see figure 1). While PCSB functions as an independent agency, PCSB and the Office of the Deputy Mayor for Education coordinate on issues of mutual concern. The School Reform Act, which created PCSB, outlines the operations of PCSB and grants the board the power to appoint, terminate, and fix the pay of its executive director and other staff who carry out the daily operations of PCSB. The appointed seven-member board developed by-laws that established its operational procedures, including how appointed board members will be removed. The by-laws also include a reference to the board’s rules on gifts and conflicts of interest, which board members must follow. In addition, the appointed board members establish policies and procedures for evaluating the financial management, governance, and performance of charter schools. PCSB staff implement the policies set by the board and handle the day-to-day charter school oversight activities. According to PCSB, its staff comply with all applicable D.C. laws and regulations, including those related to procurement, ethics, and employment. While several agencies may conduct activities to review the performance and operations of PCSB, the most regular and comprehensive activities are conducted by the D.C. Council and Office of the Chief Financial Officer (OCFO) (see figure 2). Similar to hearings the D.C. Council conducts for other District agencies, boards, and commissions, its Committee of the Whole holds annual performance hearings for PCSB to examine its expenditures and performance. 
In preparation for these hearings, the Committee generally requests information from PCSB on the following: budget, including approved budget and actual spending; programs and policies, including information on all policy initiatives, studies PCSB prepared or contracted for, and a description of the activities taken to meet key performance indicators; ongoing or completed investigations or audits of PCSB or any of its employees, and actions taken to address all recommendations identified during the previous 3 years by the D.C. Office of the Inspector General or Auditor; and contracting and procurement. In addition, OCFO oversees PCSB’s financial management. According to OCFO and PCSB, OCFO’s oversight of PCSB includes reviewing budget estimates and proposals, reviewing financial processes, and overseeing cash management and procurement activities. OCFO manages PCSB’s accounts payable, ensures procurement activities are administered according to approved PCSB financial policies, and provides monthly reports to the PCSB executive director and board members on financial activities and budget variance. OCFO also oversees and coordinates PCSB’s fiscal year-end process which includes ensuring the accurate and timely closing of books for auditing purposes. In addition to the oversight by OCFO, the School Reform Act requires PCSB to provide for an audit of its financial statements by an independent certified public accountant and forward the findings and recommendations of these audits to the Mayor, Council, and OCFO of the District. According to an OCFO official, OCFO may also conduct audits of PCSB. Other agencies may conduct audits or investigations when issues arise. The D.C. Office of Campaign Finance monitors appointed PCSB members’ submission of annual financial disclosure statements, has the authority to investigate conflict of interest violations, and may impose fines and refer cases to the United States Attorney for the District of Columbia. 
In 2009, the Office of Campaign Finance conducted investigations of allegations of conflicts of interest in response to concerns raised in local news reports and concluded that the appointed PCSB members under investigation did not violate conflict of interest laws. The District of Columbia Office of the Attorney General (OAG) conducted a similar conflict of interest inquiry regarding one of the same board members. OAG did not find any violations, but it recommended that the board strengthen its ethics standards and formal policies; these recommendations, which have largely been implemented, include recusal, financial disclosure, and gift rules, as well as participation in regular ethics training. PCSB has participated in ethics training but has not yet established formal policies for implementing such training on a regular basis. The D.C. Inspector General and the D.C. Auditor may also conduct investigations or audits of PCSB. PCSB Implemented a New Accountability System to Monitor Charter Schools PCSB’s New Accountability System Is Currently Undergoing Revision To improve oversight of charter schools, PCSB launched a new accountability system—called the Performance Management Framework (PMF)—to capture school performance information for the 2009-2010 school year (see figure 3 for the current version of the PMF). However, in October 2010, about 2 weeks before the PMF results were to be released, the board voted to withhold the results from the public, citing concerns about the accuracy of the school-level data collected. For example, some elementary, middle, and high schools had data accuracy issues with the re-enrollment and demographic numbers, and high schools were inconsistently reporting graduation rates. In addition to withholding the results to resolve data accuracy issues, PCSB later communicated that it also wanted to thoroughly review components of the PMF to ensure their analyses were accurate and fair. 
PCSB decided to use the data collected for the 2009-2010 school year to further develop and test the new system, while it continues implementing the system. Results from the current 2010-2011 school year are expected to be released in fall 2011. As it revises the new system, PCSB is working collaboratively with charter school leaders. For more information on the PMF measures and components that are under development or review, see appendix II. According to PCSB officials, the PMF is designed to allow PCSB to assess and compare all schools using common academic measures, target more intense reviews to schools that are not performing well relative to other schools, and use technology to streamline document submission and review. The PMF is based on a common set of five academic indicators, as well as nonacademic measures to evaluate school performance, as shown in figure 3. Weights are assigned to each academic measure and these academic measures are then combined to yield a final PMF score for each school. This final score determines the level of additional support or oversight a school receives from PCSB. The PMF is also designed to assess schools’ nonacademic performance in finance, governance, and compliance with ESEA and other applicable laws. Nonacademic reviews are to be conducted annually. According to PCSB officials, although a school’s PMF score does not include nonacademic indicators, charter schools may face consequences, including charter revocation, for poor financial performance or violations of law discovered during governance and compliance reviews. To conduct various academic and nonacademic reviews, PCSB uses internal staff, consultants, and a recently established audit management unit that will analyze and monitor schools’ financial statements and audits. In addition, the PMF will be supported by a new electronic system to streamline the review process and enable PCSB and charter schools to exchange and share documents more efficiently. 
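The weighted combination of academic measures into a final PMF score can be sketched in a few lines. The measure names and weights below are hypothetical illustrations only; the report does not specify PCSB's actual measures or weights.

```python
# Hypothetical sketch of the PMF's weighted scoring. The report states only
# that weights are assigned to each academic measure and combined into a
# final score; these measure names and weights are illustrative assumptions.

def pmf_score(measures: dict, weights: dict) -> float:
    """Weighted sum of academic measures (each scored 0-100)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[name] * measures[name] for name in weights)

weights = {"proficiency": 0.4, "growth": 0.4,
           "attendance": 0.1, "re_enrollment": 0.1}
measures = {"proficiency": 70.0, "growth": 80.0,
            "attendance": 95.0, "re_enrollment": 85.0}

print(pmf_score(measures, weights))  # a single score for tiering oversight
```

Under a scheme like this, the single final score is what determines the level of additional support or oversight a school receives.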
For example, under the PMF, schools will submit academic performance data, annual reports, and financial statements electronically for review. This review will allow PCSB to identify potential issues and schools needing additional help or more thorough, in-depth review. According to PCSB, the new electronic system is expected to make the process of sending, receiving, filing, tracking, and reviewing electronic versions of reporting requirements under the PMF more efficient for PCSB and charter school operators. PCSB Has Recently Communicated to Charter Schools Its Plans for Implementing the Revised PMF for the 2010-2011 School Year Although charter schools initially received limited information about PCSB’s plans for implementing the revised system, PCSB has more recently taken steps to keep charter schools informed. In January 2011—about 3 months after PCSB decided to withhold initial PMF results—PCSB provided information to charter schools about when it would revise and implement components of the PMF and its timeline for soliciting feedback. PCSB also provided charter schools with its timeline for collecting and validating data, as well as information on how it would resolve data accuracy issues, including developing data collection templates. PCSB plans to solicit feedback from charter schools on the revised PMF model for elementary, middle, and high schools in April 2011, and plans to hold these schools accountable under the PMF for the 2010-2011 school year, with results released to the public in November 2011. For adult and early childhood schools, PCSB expects to implement the PMF for the 2012-2013 school year. Moving forward, PCSB plans to provide updates to charter schools on its progress in revising the PMF in weekly e-mail messages. Charter Schools Receive Funding and Other Resources for Their Operations and Facilities D.C. Charter Schools May Receive Local, Federal, and Private Funding for Their Operations and Facilities D.C. 
charter schools may receive funding for their operations and facilities from a range of sources. Like traditional public schools in the District, the primary source of funding for charter schools is local appropriations, which is allocated on the basis of a per-pupil formula that takes several factors into consideration. As shown in table 2, the amount charter schools and DCPS receive per pupil varies based on grade level, ranging from $6,578 for adult students to $11,752 for preschool students in school year 2009-2010. Schools also receive add-on amounts to account for differences in the cost of educating certain student populations, such as special education and limited English proficient students. For example, schools that served kindergarten students who require more than 24 hours per week of special education services received a total of $36,220 per pupil for such students in school year 2009-2010. In addition, to help cover the cost of charter school facilities, most of which are commercial buildings around the city, charter schools receive a local per-pupil facilities allowance. For the 2009-2010 school year, the facilities allowance was $2,800 per pupil for nonresidential students and $8,395 per pupil for schools that provide residential room and board. Although local funding for operations for both charter schools and traditional schools is determined based on the same formula, charter schools receive funding based on actual, or audited, enrollment while DCPS receives funds based on projected enrollment. Charter schools receive four payments during the fiscal year, which are reconciled based on audited enrollment figures. For example, if a charter school’s audited enrollment is higher or lower than projected, subsequent payments will be increased or decreased accordingly. DCPS receives spending authority at the beginning of the fiscal year based on enrollment projections, and it is not adjusted based on audited enrollment, according to District officials. 
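The reconciliation of charter school payments against audited enrollment can be illustrated with a short sketch. This is a simplified model only, using the school year 2009-2010 per-pupil rates cited above; the four-payment mechanics shown are assumptions, and the actual formula includes add-on weights for special education, limited English proficiency, and other factors.

```python
# Illustrative sketch only -- not the District's actual payment system.
# Per-pupil rates are the school year 2009-2010 figures from the report;
# the quarterly-payment mechanics here are simplifying assumptions, and
# the real formula adds weights for special education and other needs.

PER_PUPIL_OPERATIONS = {"preschool": 11_752, "adult": 6_578}  # dollars
FACILITIES_ALLOWANCE = 2_800  # per nonresidential pupil

def annual_allocation(enrollment: dict) -> int:
    """Operations funding plus the per-pupil facilities allowance."""
    operations = sum(PER_PUPIL_OPERATIONS[level] * n
                     for level, n in enrollment.items())
    facilities = FACILITIES_ALLOWANCE * sum(enrollment.values())
    return operations + facilities

def remaining_payments(audited: dict, paid_so_far: int) -> int:
    """Reconcile so total paid over the year tracks audited enrollment."""
    return annual_allocation(audited) - paid_so_far

projected = {"preschool": 100}
audited = {"preschool": 95}  # audit found 5 fewer students than projected

# Suppose the first two of four payments were based on projected enrollment.
paid = annual_allocation(projected) // 2
print(remaining_payments(audited, paid))  # 654840 -- later payments shrink
```

This captures the contrast drawn in the report: a charter school's later payments rise or fall with audited enrollment, whereas DCPS's spending authority is set from projections and not adjusted afterward.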
Charter schools may also receive federal and private funding for their operations and facilities. Like all public schools in D.C., charter schools are eligible to receive federal formula grants through various programs under ESEA and the Individuals with Disabilities Education Act (IDEA). Because charter schools in the District are considered individual LEAs, they may also be eligible to compete for federal discretionary grants from agencies, such as the Department of Education, the Department of Health and Human Services, and the Department of Justice. Furthermore, between fiscal years 2004 and 2010, Congress appropriated over $104 million directly to the District to fund programs intended to expand public charter schools. Some of the programs for which federal payments were used included facilities financing for charter schools through which the District awarded more than 80 grants and loans to help charter schools build, improve, lease, or purchase facilities. In addition, charter schools may engage in fundraising activities and accept grants and gifts from corporations, foundations, and other organizations, as long as the gift is not subject to any condition contrary to law or their charters. For example, some of the charter schools we visited held yearly fundraisers and received annual gifts from corporations. Charter schools may also generate income by charging tuition and fees for students who live outside of the District or renting out property. In addition, charter schools may take out private loans to secure their facilities. D.C. Charter Schools Have Access to Nonfinancial Resources D.C. charter schools may receive local personnel and services from the District. Officials at some of the schools we visited told us they have been able to obtain school police officers, nurses, crossing guards, and other city services. In addition, 14 of 52 charter schools elected to use DCPS as their LEA for special education services. 
For these schools, DCPS is responsible for special education evaluations, placements, litigation, and other services, according to school officials. However, charter schools that serve as their own LEA for special education services are responsible for carrying out these functions. All charter schools, including those that use DCPS as the LEA, are responsible for providing direct special education services, such as specialized instruction or staff. Charter schools may also lease former D.C. public school buildings through a provision in D.C. law, enacted in late 2004, which provides a “right of first offer” to charter schools for school buildings DCPS determines it no longer needs. This allows charter schools to submit proposals for these buildings to the District before other entities, such as private development firms. As shown in figure 4, DCPS transfers buildings it no longer needs to the D.C. Department of Real Estate Services (DRES)—an agency under the purview of the Mayor—which is responsible for the District’s real estate portfolio management, among other duties. The District then determines whether there is another governmental need for the building before making it available to charter schools under the “right of first offer” preference. Some former D.C. school buildings have been used as homeless shelters, space for local agencies, and additional space for DCPS during school renovations, among other uses. If DRES determines there is no governmental need for the building, DRES may issue a Request for Offers (RFO) from charter schools. As of December 2010, 52 former D.C. school buildings have been transferred from DCPS to DRES and charter schools occupy or will occupy 18 of these buildings (see figure 5). Twenty-five of the 52 buildings transferred to DRES have been made available for first offers from charter schools. The remaining 27 buildings were exempt from the right of first offer provision due to a pre-existing lease, resolution of the D.C. 
Council, or governmental use by the District. To date, charter schools have submitted offers for 17 of the 25 buildings made available under the right of first offer provision, and offers have been accepted for 10 of these buildings. For accepted charter school offers, the property is appraised and a lease is negotiated and, if required, executed with approval from the D.C. Council. If no charter school submits an offer or the offers are rejected, DRES may use the building for other governmental purposes or lease the building for other purposes, such as use by a nonprofit entity, according to agency officials. DRES may also transfer buildings to the Office of the Deputy Mayor for Planning and Economic Development (DMPED) if there is potential use for economic development, according to DRES officials. DMPED will then issue another solicitation for offers and proposals from private developers or other entities. Some former school buildings that were transferred to DMPED were awarded to development corporations for residential and retail projects. The Basis for the District’s Decisions to Reject Charter School Offers for Former D.C. School Buildings Is Unclear For the offers that are rejected, we found that the RFO does not detail all of the factors the District may consider in deciding whether to award a school building to a charter school and that the basis for the District’s decision to reject a charter school’s offer is not always sufficiently documented. Specifically, the RFO states that the selection panel, which is composed of officials from DRES and the Office of the Deputy Mayor for Education, will evaluate offers in the context of six evaluation criteria: (1) educational vision, (2) project vision, (3) capability of respondent to execute its vision, (4) past experience with similar project(s), (5) financial feasibility, and (6) best interest of the District. 
Although the RFO criterion “best interest of the District” is rather broad and could conceivably encompass other factors the District may consider in evaluating offers, the only additional information provided for this criterion in the RFO pertains to whether the offer requires a District subsidy and maximizes community involvement. However, the D.C. rule regarding disposition of former school property states that the long- or short-term community development; economic development; or cultural, financial, or other goals of the Mayor or the District may also be considered by the selection panel when deciding whether to accept or decline charter school offers. Only by looking at the D.C. rule would an offeror know that these additional factors may be considered. Because the RFO does not clearly indicate that additional factors beyond the stated criteria can affect whether a proposal is accepted, potential offerors may not have a clear understanding of the criteria that will be used to evaluate their offers. While District officials felt that the criteria listed in the RFO were inclusive of all factors that may be considered, some charter school officials and advocates we spoke with expressed a lack of understanding of, and confidence in, the fairness and transparency of how the District made decisions to accept and reject offers. We also found that the selection panel does not always sufficiently document the reasons for recommending that a particular charter school’s offer be rejected. After consensus is reached on a charter school’s offer, the selection panel provides a memorandum documenting its recommendation for accepting a charter school’s offer to the Director of DRES and Deputy Mayor for Education. According to District officials, the Director of DRES and the Deputy Mayor for Education then make a recommendation to the Mayor, who ultimately decides whether an offer is accepted. 
For rejected offers, however, the selection panel does not always document its recommendation, and although DRES notifies the charter schools of its decision in writing, it does not include the reasons that offers were rejected. While DRES officials told us that charter schools may request a briefing to understand why their offers were rejected, the rejection letter does not state that charter schools have this option. Because DRES does not always document its recommendations for rejecting charter school offers and the notification letter does not include the reasons offers were rejected, charter schools may lack information that could help them better understand the process and develop future offers. Conclusions In the District, charter schools, which enroll nearly 40 percent of all public school children in the city, offer parents more educational choice. These schools offer varied approaches to instruction and some target specific subpopulations of students. Charter schools in the District, and in general, were designed to operate with more autonomy and flexibility than traditional schools, but like all schools, are accountable for ensuring that every student receives a quality public education. PCSB has oversight over all 52 charter schools, and its new PMF has the potential to be a valuable tool for overseeing and monitoring charter schools. The PMF also has the potential to provide more information to parents, school leaders, and other stakeholders about the relative and collective performance of charter schools across a range of indicators. As such, it is important that PCSB take the necessary steps to ensure that its PMF is designed and implemented well. PCSB plans to collaborate with charter schools to develop and revise the system, and has more recently begun providing more detailed information to charter schools about its plans for implementing the revised system for the 2010-2011 school year. 
We believe that ongoing collaboration and communication such as this are vital to the successful implementation of the PMF. The District faces tough trade-offs in how it uses its resources. Former D.C. school buildings may be attractive locations for a range of city uses, including charter school facilities. The growing charter school population in the District makes appropriate, affordable space to educate students a critical resource to the success of individual charter schools and the District’s charter school movement as a whole. Therefore, it is also important that criteria used to determine whether a charter school receives a former D.C. school building are as transparent as possible and that the basis for the District’s decision is clear and sufficiently documented. Additional clarity and transparency regarding how the District decides to use former D.C. school buildings may increase charter schools’ understanding of the process and may help to avoid the appearance of a lack of fairness among charter school officials and advocates. Recommendation for Executive Action To ensure that the criteria for evaluating offers from charter schools to use surplus D.C. school buildings are clear and the reasons for denial of offers are communicated, we recommend that the Mayor of the District of Columbia direct DRES to take the following two actions:
ensure the RFO on former D.C. school buildings clearly indicates all factors that may be considered by the selection panel, and
inform charter schools, in writing, of the reasons their offers were rejected or of the opportunity to request a briefing to obtain such information.
Agency Comments and Our Evaluation We provided a draft of this report to PCSB, the D.C. Mayor’s Office, and the U.S. Department of Education. PCSB and the Mayor’s Office provided written comments, which are reproduced in appendixes III and IV, respectively. The U.S. Department of Education did not have comments on the report. 
We also received technical comments from various offices cited in the report, including DRES, D.C. Office of the Attorney General, and the D.C. Council, which we incorporated throughout the report where appropriate. In its letter, PCSB stated that it has redoubled its efforts to work with nationally recognized experts in school accountability systems as it further validates certain elements of its new accountability system. The District agreed with our recommendations and stated that DRES has begun taking steps to improve the process for awarding former D.C. school buildings to charter schools and will continue to identify ways to improve the selection process. We are sending copies of this report to PCSB, the D.C. Mayor’s Office, U.S. Department of Education, and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
Appendix I: D.C. Charter School Characteristics
[Table omitted: for each charter school, the table lists adequate yearly progress (AYP) status, type of building(s) (e.g., commercial), grades served (including prekindergarten (PreK)), and mission, curriculum, or target population.]
The table reflects schools’ AYP status for 2010. Under ESEA, states are required to establish performance goals and hold their schools accountable for students’ performance by determining whether or not schools have made AYP. 
The Act requires states to set challenging academic content and achievement standards in reading or language arts, mathematics, and science, and determine whether school districts and schools make AYP toward meeting these standards. To determine AYP, the District uses student test scores on the statewide exam in grades 3 through 8 and 10. “N/A” is listed for schools that did not serve the tested grades at the time of the statewide exam or whose sample size was too small to calculate AYP. Individual charter schools may operate in several locations. The building types are for school year 2008-2009, except for National Collegiate Prep, which opened in 2009. We were unable to obtain current building information for all charter schools in the District.
Appendix II: Description of Academic and Nonacademic Components of the Performance Management Framework as of December 2010
[Table omitted: describes each PMF component; for example, the growth measure uses state test scores to predict whether a student will score at the proficient or advanced level in the future.]
Appendix III: Comments from Public Charter School Board
Appendix IV: Comments from the District of Columbia Mayor’s Office
Appendix V: GAO Contact and Staff Acknowledgments
Staff Acknowledgments: Sherri Doughty, Assistant Director; Charlene J. Lindsay, Analyst-in-Charge; Raun Lazier, Vernette Shaw, Brian Egger, Vida Awumey, James Bennett, Nora Boretti, Russell Burnett, Susannah Compton, Sheila McCoy, Sara Pelton, and James Rebbe also made significant contributions to this report.
Related GAO Products
Charter Schools: Education Could Do More to Assist Charter Schools with Applying for Discretionary Grants. GAO-11-89. Washington, D.C.: December 7, 2010.
District of Columbia Public Education: Agencies Have Enhanced Internal Controls Over Federal Payments for School Improvement, But More Consistent Monitoring Needed. GAO-11-16. Washington, D.C.: November 18, 2010. 
District of Columbia Public Schools: Important Steps Taken to Continue Reform Efforts, But Enhanced Planning Could Improve Implementation and Sustainability. GAO-09-619. Washington, D.C.: June 26, 2009.
D.C. Charter Schools: Strengthening Monitoring and Process When Schools Close Could Improve Accountability and Ease Student Transitions. GAO-06-73. Washington, D.C.: November 17, 2005.
Charter Schools: Oversight Practices in the District of Columbia. GAO-05-490. Washington, D.C.: May 19, 2005.
Charter Schools: To Enhance Education’s Monitoring and Research, More Charter School-Level Data Are Needed. GAO-05-5. Washington, D.C.: January 12, 2005.
No Child Left Behind Act: Education Needs to Provide Additional Technical Assistance and Conduct Implementation Studies for School Choice Provision. GAO-05-7. Washington, D.C.: December 10, 2004.
District of Columbia: FY 2003 Performance Report Shows Continued Improvements. GAO-04-940R. Washington, D.C.: July 7, 2004.
Charter Schools: New Charter Schools Across the Country and in the District of Columbia Face Similar Start-Up Challenges. GAO-03-899. Washington, D.C.: September 3, 2003.
Public Schools: Insufficient Research to Determine Effectiveness of Selected Private Education Companies. GAO-03-11. Washington, D.C.: October 29, 2002.
DCPS: Attorneys’ Fees for Access to Special Education Opportunities. GAO-02-559R. Washington, D.C.: May 22, 2002.
District of Columbia: Performance Report Reflects Progress and Opportunities for Improvement. GAO-02-588. Washington, D.C.: April 15, 2002.
Charter Schools: Limited Access to Facility Financing. GAO/HEHS-00-163. Washington, D.C.: September 12, 2000.
Charter Schools: Federal Funding Available but Barriers Exist. GAO/HEHS-98-84. Washington, D.C.: April 30, 1998.
Charter Schools: Issues Affecting Access to Federal Funds. GAO/T-HEHS-97-216. Washington, D.C.: September 16, 1997.
Almost 40 percent of all public school students in the District of Columbia (D.C. or District) were enrolled in charter schools in the 2010-11 school year. The D.C. School Reform Act established the Public Charter School Board (PCSB) for the purpose of authorizing and overseeing charter schools. Congress required GAO to conduct a management evaluation of PCSB. GAO addresses the following: (1) the mechanisms in place to review the performance and operations of PCSB, (2) the procedures and processes PCSB has in place to oversee and monitor the operations of D.C. charter schools, and (3) the resources available to charter schools for their operations and facilities. GAO interviewed officials from D.C. agencies and 7 charter schools and reviewed oversight procedures for PCSB and charter schools. GAO also reviewed the processes for providing resources to charter schools and analyzed data on these resources. Although the Mayor appoints members to the board, PCSB has operated outside of the control of the Mayor and the Chancellor of traditional D.C. public schools; however, several agencies review PCSB's performance and operations. The D.C. Council holds annual hearings to examine PCSB's organization, personnel, budget, programs, policies, contracting, and procurement. The Office of the Chief Financial Officer oversees PCSB's budget development, operations, and financial reporting and reviews PCSB's monthly financial reports and year-end audits. Other offices monitor compliance with applicable laws and may conduct investigations or audits of PCSB when issues arise. PCSB launched its new performance accountability system to oversee the District's charter schools in school year 2009-2010. However, in October 2010, just weeks before the results were to be released, PCSB decided to withhold the results from the public due to concerns about data accuracy and plans to use the data collected to further test and develop the system. 
The new system, called the Performance Management Framework (PMF), is designed to assess charter schools using common measures for academic performance, compliance with applicable laws, and financial management, among other things. As it implements the new system for the 2010-2011 school year, PCSB is currently collaborating with charter schools to develop and revise the system, and has more recently begun providing more detailed information to charter schools about how it will revise the system. D.C. charter schools may receive funding from local, federal, and private sources for their operations and facilities and also have access to other District resources, including former D.C. school buildings; however, the criteria for awarding former school buildings to charter schools could be more transparent. The primary source of support for charter schools is local per-pupil funding, which is allocated to charter schools on the same basis as all public schools in the District. Charter schools also receive a per-pupil allotment from the District for facilities. In addition to local funds, charter schools are eligible to receive federal formula funding, federal discretionary grants, and private funding, such as foundation grants and commercial loans to purchase or renovate school buildings. To date, charter schools lease or will lease about half of the former D.C. school buildings that have been made available pursuant to a provision in D.C. law that provides charter schools with a right of first offer for these buildings. However, we found that the District does not include in its requests for offers all factors it may consider, such as economic development or other goals of the Mayor, when determining whether to accept or reject an offer. In addition, the District does not sufficiently document the basis for rejecting offers. 
Charter school officials and advocates expressed concern about the transparency and fairness in how the District makes decisions regarding former D.C. school buildings.
Background DOD is increasingly relying on contractors to provide a range of mission-critical support, from operating information technology systems to providing logistics support on the battlefield. These contractors are responsible for managing contract performance, including planning, placing, and administering subcontracts as necessary to ensure the lowest overall cost and technical risk to the government. Although total subcontract awards from DOD contracts decreased 15 percent from fiscal year 2005 to 2006, total subcontract awards have increased by 27 percent overall, from $86.5 billion in fiscal year 2002 to $109.5 billion in fiscal year 2006. (See fig. 1.) While subcontracting plans submitted by contractors are required for most contracts over $550,000, this information is reported only for first-tier subcontracts. Historically, DOD has had limited insight into costs associated with using multiple layers of contractors to perform work. Figure 2 depicts how a lower tier’s costs become part of the higher tier’s and the prime contractor’s overall costs. Risks of Excessive Pass-Through Charges Are Assessed through Routine Evaluations of Contractor Value DOD contracting officials generally rely on tools in the FAR and DFARS in assessing the risk of excessive pass-through charges when work is subcontracted. For the 32 selected contracts we reviewed, when there was full and open competition, contracting officials assessed contractor value added based on the technical ability to perform the contract, but did not need to separately evaluate costs for value added because market forces generally control the proposed contract cost. However, contracts with greater risk—such as those awarded noncompetitively or without fixed prices—require contracting officers to consider more than the technical ability to perform the work in assessing value added. 
We found that conducting assessments of contractor value added is especially challenging in unique circumstances, such as when requirements are urgent in nature and routine contracting practices may be overlooked. DOD Generally Relies on Tools in Acquisition Regulations to Assess Contractor Value Added The FAR and DFARS contain requirements for contracting officials when entering into contractual relationships that are intended to help ensure the best value for products and services. Contracting officers have wide latitude to exercise business judgment when applying these regulations. While no specific criteria exist for contracting officers to use in evaluating contractor value added, several key elements in acquisition regulations provide them with a mix of tools to gain insight into how prime contractors intend to do the work and the associated costs, including the role and costs of subcontracting:
Acquisition planning is key to determining contract requirements, the level of competition available based on market research, and the appropriate contract vehicle to be used based on level of risk.
Solicitation procedures allow contracting officers to select the prospective contractor that represents the best value to the government.
Contract pricing is used to determine price reasonableness for the contract, including subcontracting costs.
Contract administration is intended to obtain a variety of audit and administration services to hold contractors accountable for operating according to their proposals.
Presence of Competition and Type of Contract Generally Guide the Use of Assessment Tools According to DOD contracting officials and based on our review of selected contracts, assessments of contractor value added are typically driven by contract risk—the presence of competition and whether the type of contract requires the government to pay a fixed price or costs incurred by the contractor. 
When using full and open competition, the value added by the prime contractor was determined by its technical ability to perform the contract, but contracting officers generally did not do a separate detailed evaluation of cost to determine value added. DOD contracting officials told us that competitive fixed-price contracts allow the market to control overall contract value, which provided them with reasonable assurance of the contractor’s value added and potentially minimized the risk of excessive pass-through charges. When using noncompetitive contracts, however, market forces did not control contract cost, requiring contracting officers to consider—in addition to cost—the technical ability to perform the work. Specifically, DOD contracting officials noted that noncompetitive as well as other than fixed-price contracts require additional oversight and administration, including more detailed information to conduct the assessment of contractor value added and minimize the risk of excessive pass-through charges. For the 32 selected contracts we reviewed, 16 were awarded noncompetitively, with 7 of those on a cost-reimbursement basis and 2 on a time-and-materials basis. (See table 1.) DOD contracting officials noted that fixed-price contracts incentivize prime contractors to keep overall contract costs low—to include any subcontract costs—as they will have to absorb cost overruns under such contracts. In reviewing two fixed-price competitive contracts for Air Force space systems, we found that acquisition planning and market research up front provided insights into reasonable prices as well as identified the best-qualified suppliers to do the work. Air Force officials stated that because competitive contracts are often proposed by a team of contractors—the prime plus the subcontractors—market pricing extends to subcontractor costs as well. In discussing the two contracts, these officials added that fixed prices lowered the government’s risk of increased costs. 
As a result, the contracting officials said that in assessing the prime contractor’s value added, they focused more on the technical capabilities—rather than cost—to ensure contractors were responsive in meeting the mission. Contracting officials told us that contracts awarded noncompetitively decrease their assurance of price reasonableness since there is no basis of comparison through competition. Therefore, they rely on other pricing tools contained in the FAR and DFARS. These tools assist the contracting officers in obtaining more detailed information to provide reasonable assurance of contractor value added and potentially minimize the risk of excessive pass-through charges. Our review of contract files also revealed the role that DCAA and DCMA played in reviewing cost information in several of the 16 noncompetitive contracts we reviewed and helping to negotiate subcontract costs. For the Air Force’s estimated $3 billion satellite program contract, DCAA reviewed the certified cost and pricing data that the contractor was required to provide. The required data included not only the contractor’s costs but a detailed description of the efforts and costs of each subcontractor, including subsidiary companies. The contracting officer judged the proposed costs based on results of audit reports and a technical evaluation. Because of the high dollar value and complexity of this contract, the program office required the prime contractor to submit cost data reports at the conclusion of each effort and cost performance reports to provide insight into the prime contractor’s and subcontractor’s cost and schedule data. Additionally, DCAA and DCMA helped to negotiate individual subcontracts and, in some cases, achieve lower overall costs. The Army similarly relied on DCAA and DCMA assistance on a $1 billion fixed-price contract for a family of heavy tactical vehicles. 
The Army did not pursue full and open competition, citing the lack of industry response, thus requiring contracting officers to gain more insight into prime contractor and subcontract costs to assess contractor value added and minimize the risk of excessive pass-through charges. The Army used a teaming arrangement that allowed the contracting command, DCAA, DCMA, and the contractor to evaluate, discuss, and negotiate the costs. Each cost element was mutually agreed upon, resulting in a negotiated price list. Contracting officials told us that these negotiated prices applied to subcontracts for vehicle parts as well, preventing overcharges by lower-tier subcontractors. For an $11 million Navy contract for fighter aircraft support, DCMA provided an evaluation of prime contractor and subcontractor costs for certain services. In prenegotiation discussions with the Navy, DCMA described the technical evaluation of the contractor’s cost proposal, providing a cost summary of what was proposed by the contractor and what was recommended during the technical evaluation. For one portion of the contract, the evaluators questioned the direct labor hours proposed by the prime contractor for managing the project because most (if not all) of the actual work would be done by subcontractors. The DCMA technical evaluator found the hours proposed to be excessive, raising questions about the prime contractor’s value added relative to the costs. Documentation in the contract file stated that although the prime contractor believed the hours proposed were fair, it agreed to a 25 percent reduction in the hours. In addition to the presence of competition, the risk associated with contracts in which the government pays based on costs incurred also affects the degree to which contracting officers assess contractor value added and potentially minimize the risk of excessive pass-through charges. 
These contracts, which include cost-reimbursement and time-and-materials contracts, increase DOD’s need to ensure appropriate surveillance during performance to provide reasonable assurance that efficient methods and effective cost controls are used. Because of the risks involved, the FAR directs that these contracts should be used only when it is not possible at the time of award to estimate accurately the extent or duration of the work or to anticipate costs with any reasonable degree of confidence. Of the selected contracts we reviewed, 18, or 56 percent, were cost-type contracts, 11 of which were awarded noncompetitively. Contracting officials told us that under these arrangements, although competition increases the assurance of reasonable prices and controls contract cost, the absence of a fixed price requires them to take additional steps to obtain other information to assess the roles and costs of prime contractors and subcontractors, which assists in evaluating contractor value added. For example, a task order awarded under a $3 billion Army multiple award contract, which reimbursed the contractor based on the cost of its time and materials, demonstrated the risk associated with these contracts. Under this multiple award contract, eight prime contractors competed for task orders, with one of the contractors identifying over 75 subcontractors in its proposal. On the task order we reviewed, for engineering services, most of the work was subcontracted. Contracting officers stated that because the contract was awarded on a time-and-materials basis, the government was particularly vulnerable to the prime contractor charging more than it paid its subcontractors because, at that time, prime contractors could charge for subcontract labor at the prime’s rate and keep any difference between its rate and the subcontractor’s. 
While the prime contractor was not required to submit certified cost or pricing data, it was required to provide a task execution plan that described the specific duties of both the prime contractor and the subcontractor, including the fees they were charging the government to manage the subcontractor. The Army evaluated the plan and determined that the number of hours and rates proposed by the prime contractor were reasonable based on the data provided. We have previously reported that for some time-and-materials contracts, DOD paid more than actual costs for subcontracted labor. To minimize this risk with time-and-materials contracts, a new DOD regulation set forth different rules about how prime contractors are to be reimbursed for subcontracted labor to ensure that prime contractors do not charge the government higher rates than those charged by subcontractors. In our review of other cost-type contracts, DOD gained insight into value provided by the prime contractor in determining price reasonableness. The cost or pricing data in some cases provided the contracting officer with added insight by breaking out costs of the prime contractor and major subcontractors by the work they were to perform. Further, in some cases, DCAA questioned the proposed subcontractor costs and provided an estimate for the contracting officer to use in negotiating a more reasonable price to ensure best value. For example, in a $92 million Army contract for the redesign of a chemical demilitarization facility, DCAA questioned the surcharges applied to certain subcontractor costs and recommended lower rates. The contractor accepted the lower rates, reducing the overall cost to the Army. According to several contracting officials, as prime contractors assign subcontractors more critical roles to achieve a mission, the increased need for detailed cost information is coupled with the need for more insight into the technical capabilities of the subcontractors. 
Because cost is not always the primary criterion used to determine best value, technical capabilities can also be evaluated to determine the role of the prime contractor when work is subcontracted. We found that cost was not always ranked as the highest factor in reviewing source selection criteria for five cost-type contracts. For one example—an $863 million Navy contract for support services related to a destroyer—the technical evaluation determined the ability of the prime contractor and multiple tiers of subcontractors to perform the work. While detailed cost information was obtained from the prime contractor and considered in the source selection, its ability to consolidate and manage efforts that had previously been conducted under five separate contracts was a particularly significant factor in evaluating the contractor’s value added. In another example—a $2.9 billion Navy contract for a major weapons system—given the size of the contract and magnitude and complexity of work involved, the contracting officer required greater insight into how the prime contractor intended to subcontract. As a result, the contracting officer modified the contract to increase requirements for the prime contractor to obtain consent to subcontract. The contracting officer told us that although prime contractors are ultimately responsible for managing their subcontractors, DOD still needed to maintain a certain level of insight into subcontracting, given the increased role. Unique Circumstances Can Drive Contracting Arrangements That Carry Greater Risk of Excessive Pass-Through Charges Some unique contracting arrangements that are noncompetitive or where requirements are urgent in nature carry greater risk of excessive pass-through charges and pose challenges in conducting assessments of contractor value added. This was the case with a contract we reviewed that had been awarded to an Alaska Native Corporation (ANC) firm through a small business development program. 
In addition, related GAO work and DOD audits on contracts awarded for Hurricane Katrina recovery efforts found multiple layers of subcontractors, questionable value added by contractors, increased costs, and lax oversight. Through the Small Business Administration's 8(a) program, DOD and other federal agencies can award sole-source contracts to ANC firms for any dollar value. The Small Business Administration requires agencies to monitor the percentage of work performed by the 8(a) firms versus the percentage performed by their subcontractors to ensure that small businesses do not pass along the benefits of their contracts to subcontractors. The "limitations on subcontracting" clause in the FAR requires that for 8(a) service contracts with subcontracting, the firm must incur at least 50 percent of the personnel costs with its own employees (for general construction contracts, the firm must incur at least 15 percent of the personnel costs). However, for one contract we reviewed that was awarded to an ANC firm, contracting officials had failed to include the required FAR clause in the contract, and other contracting officials we spoke to were unsure who should be monitoring compliance—findings consistent with our past work on ANC 8(a) contracts. For a $54 million logistics support services contract awarded noncompetitively to an ANC, the Army used a cost-reimbursement arrangement because substantial variations in workload created too much cost risk for a fixed-price contract. According to the contracting officer, a scope of work this large and varied usually lends itself to subcontracting. When asked about the level of insight into how the ANC would use subcontracted support, the contracting officer responded that this was challenging since small businesses are not required to submit subcontracting plans. In reviewing the base contract, we found that it did not contain the required clause that limits subcontracting. 
We brought this to the attention of the contracting officer, who told us that although he was not aware of any subcontracting, the clause should have been included and its omission was an oversight. Several other contracting officials we spoke to said they were unsure whose responsibility it is to monitor compliance with the subcontracting limitations under these 8(a) contracts, though they recognized that they should be doing more to monitor compliance. When compliance with the limitations on subcontracting requirement is not ensured, there is an increased risk that an inappropriate share of the work is being performed by large businesses, raising questions about the value added by the ANC firm. According to contracting officials we spoke with, assessing the value added by a prime contractor is especially challenging in emergency situations, where requirements are critical and urgent in nature, such as those for recovery from Hurricane Katrina. We have similarly reported that the circumstances created by these situations can make it difficult to balance the need to deliver goods and services quickly with the need for appropriate controls. Our past work has cautioned, however, that limited predictability must not be an excuse for poor contracting practices. In some cases, the response to Hurricane Katrina suffered from inadequate planning and preparation to anticipate requirements for needed goods and services. The scale of operations and the government's stated inability to provide program management after Katrina drove the decision to award contracts with large scopes of work that, in certain cases, led to multiple layers of subcontractors and increased costs. GAO's past work in reviewing orders and contracts for the Katrina recovery effort found that the U.S. Army Corps of Engineers (USACE) disclosed increased costs associated with multiple supplier layers. 
In reviewing orders and contracts for portable public buildings in Mississippi, which were awarded in 2005, we found that USACE ordered 88 buildings that were purchased and sold through two to three layers of suppliers, resulting in prices 63 percent to 133 percent higher than manufacturers’ sales prices. In one example, 45 of the 88 portable public buildings were purchased from a contractor who in turn purchased the buildings from a distributor, who in turn purchased them from another distributor, who had purchased the 45 buildings from the manufacturer. Each layer added an additional fee, resulting in USACE agreeing to a price that was 63 percent higher than the manufacturer’s price. DOD auditors have noted additional concerns in some Katrina contracts they reviewed. For example, a November 2006 Army Audit report stated that unclear requirements for four post-Katrina debris removal contracts awarded by USACE—for $500 million each with an option for an additional $500 million—resulted in prices renegotiated in unfavorable circumstances. According to the report, the urgency to award contracts quickly did not give USACE contracting personnel sufficient time to develop a well-defined acquisition strategy—one that defined desired outcomes and risks related to the acquisition to ensure contracts were structured in the government’s best interest. Contracting officials were less diligent about complying with acquisition regulations regarding best value contracts and reasonable pricing. Fixed-price contracts were renegotiated at higher prices without the benefit of a DCAA review. USACE’s decision to use four large contracts also resulted in multiple tiers of subcontractors to accomplish the work, with each tier adding costs. Post-award audits performed by DCAA found substantial overcharges by the debris contractors. 
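The compounding of layered fees can be illustrated with a simple calculation. The sketch below is purely hypothetical—the actual fees in the Katrina orders varied by layer and were not reported as uniform percentages—but it shows how three supplier layers, each adding a modest markup, compound to roughly the 63 percent premium described above.

```python
def price_after_layers(manufacturer_price, layer_markups):
    """Apply each supplier layer's markup (a fraction, e.g. 0.18 = 18%)
    in sequence and return the final price paid by the buyer."""
    price = manufacturer_price
    for markup in layer_markups:
        price *= 1 + markup
    return price

# Hypothetical: three layers, each adding about 17.7 percent,
# compound to roughly a 63 percent premium (1.177**3 is about 1.63).
final_price = price_after_layers(100_000, [0.177, 0.177, 0.177])
print(f"premium over manufacturer price: {final_price / 100_000 - 1:.0%}")
# prints "premium over manufacturer price: 63%"
```

Because each layer's fee applies to the already marked-up price below it, the total premium grows faster than the sum of the individual fees, which is one reason USACE's revised strategy emphasizes minimizing subcontractor tiers.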
USACE officials we spoke with noted that they have revised the acquisition strategy to structure the size and scope of contracts to maximize competition and minimize subcontractor tiers. New contracts will have reduced performance periods to ensure that prices reflect existing conditions. While USACE previously set production rates in its contracts, it did not measure them during contract performance. USACE officials further stated that under the revised strategy, they will negotiate the production rate and measure the contractor's ability to maintain it. To ensure the reasonableness of proposed prices, USACE plans to ask DCAA to assist the contracting officer in reviews of competitive proposals and in negotiations. According to the officials, these revisions to USACE's acquisition strategy were designed to address concerns related to prime contractors passing work on to subcontractors and increasing costs to the government without adding value. Selected Private Sector Companies Rely on Several Shared Approaches to Minimize the Risk of Excessive Pass-Through Charges Selected private sector companies we interviewed had several strategies in common for minimizing their risk of excessive pass-through charges when purchasing goods and services. These companies focus resources on acquisition planning and knowledge of their supply chains and costs—challenges DOD continues to face. They also seek to optimize competition, preferring fixed-price competitive arrangements. Several companies said they recognize the financial risks of other contract types, such as time-and-materials, and enter into them only with proper oversight and accountability. As we have previously reported, DOD's use of these riskier contracts has not always ensured good acquisition outcomes and prudent expenditure of taxpayer dollars. 
In addition, company officials we interviewed told us that continuous and close management of the contractual relationship is critical to minimizing risks of excessive costs. Private Sector Companies Focus on Acquisition Planning and Knowledge of Supply Chain The contracting officials we spoke with at selected private sector companies told us that to avoid unnecessary pass-through charges when purchasing goods and services, they devote attention to planning acquisitions. Some companies told us that they invest in teams of experts and consultants to define contract requirements and then structure contracts based on the complexity of the acquisition. For example, one company described the use of cross-functional teams to obtain input on information technology, purchasing, quality, and other internal expertise. Having such information assists in developing comprehensive project acquisition plans and clear and stable requirements. One company seeks input from its engineers to develop a set of criteria based on the product or service acquisition. Officials from another company told us they will determine the optimum number of subcontracts required to procure a particular product or service and group them based on the requirements and need to subcontract. One company contracting official told us that it is an “expensive fishing expedition” when the requirements are not clearly defined, as it limits the company’s ability to enter into fixed-price competitive contracts and can increase its vulnerability to excessive costs. Private sector firms that spoke before the Acquisition Advisory Panel— established to review federal acquisition laws and regulations on a number of issues—also described a vigorous acquisition planning phase when buying services. These firms invest time and resources necessary to clearly define requirements first, allowing them to achieve the benefits of competition. 
Some company contracting officials told us they use rigorous market research and requests for information to develop a range of potential suppliers and cost and pricing data. Having this information on their supply chain allows these companies to minimize the risk of excessive pass-through charges. To gain additional insight into costs, some companies work in a collaborative environment with contractors and subcontractors. However, they indicated that in these types of arrangements, companies have to be willing to share information openly and communicate their concerns and needs to achieve best value from the contractual relationship. Company officials told us that clearly defined requirements enable them to enter into fixed-price contracts when purchasing goods and services, lowering costs and mitigating the risk of charges that are excessive relative to the value added by the prime contractor. While the vast majority of their contracts are competitive fixed-price contracts, some companies noted that the use of other contract types is sometimes necessary; the companies we met with recognize the financial risks involved and enter into them only with proper oversight and accountability. Buyers from companies who spoke before the Acquisition Advisory Panel also noted that when they enter into time-and-materials contracts, for example, they "endeavor to maintain tight controls over the contracting process, costs, and levels of effort." Prior GAO work has found that DOD has been challenged in adequately planning many of its major acquisitions. In 1992, GAO identified DOD contract management as high-risk due to long-standing concerns about planning, executing, and overseeing acquisition processes. We have reported that to produce desired outcomes, DOD and its contractors need to clearly understand acquisition objectives and how they translate into a contract's terms and conditions. 
Likewise, we have reported that obtaining reasonable prices depends on the benefits of a competitive environment, yet we have found cases where DOD failed to adequately define contract requirements, making it difficult to hold DOD and contractors accountable for poor acquisition outcomes. Moreover, participants at an October 2005 GAO forum related to Managing the Supplier Base noted that DOD faces challenges in maintaining insight into its supply chain and recognized the importance of promoting competition in managing multiple tiers of suppliers. In addition, our recent work on DOD's use of time-and-materials contracts noted that contracting and program officials frequently failed to ensure that these contracts were used only when no other contract type was suitable. DOD officials cited speed and flexibility as the main reasons these contracts were used, and we reported inconsistencies in the rigor with which DOD monitored contractor performance, as called for in time-and-materials contracts. Private Sector Companies Closely Manage Contractual Relationships to Control Costs Company officials we interviewed told us that continuous management of the contractual relationship is critical to minimizing the risk of excessive costs. The specific management practices used by companies generally include establishing clear contract terms and conducting periodic evaluations to monitor performance. Subcontractor management practices include the use of clear contract terms to guide the relationship and ensure that both parties understand each other's needs. Some companies told us that the type of contract arrangement depends on the product or service, and some contract terms may include more detail than others. According to one company, the level of detail of information requested also depends on the product or service procured, the size of the procurement, and the complexity of the work to be performed. 
This company told us that in such cases it has requested information on all parties who would be performing the work, down to the fifth level. In other cases it may retain the right to renegotiate the contract to ensure it is receiving the best price throughout the contractual agreement. Typically, both parties agree to renew the contract as long as the performance and benefit goals are being met. Company officials stressed the importance of having performance monitoring systems to ensure that the prime contractor is delivering value added relative to subcontractor costs. For example, one company we interviewed told us that it periodically checks prices in the marketplace against cost information provided by the supplier. Similarly, another company emphasized the need to continuously check prices against the market, since, similar to DOD, it does not have insight below the first-tier subcontractors. Some companies we interviewed also emphasized the importance of periodically evaluating and assessing the contractor's value added relative to the costs and the need for continuing, changing, or ending the contract relationship. Contracting Officials Lack the Guidance and Insight Needed to Effectively Implement DOD's Interim Rule DOD recently issued an interim rule that allows it to recoup contractor payments that contracting officers determine to be excessive on all eligible contracts. The rule requires detailed information from the prime contractor on value added when subcontracting exceeds 70 percent of the total contract value. While the rule aims to provide contracting officers with more information, it will not provide greater insight into DOD's supply chain and costs. Further, while the rule is not yet final, contracting officials indicated to us that guidance is needed to ensure effective and consistent implementation in assessing contractor value added, particularly for newer and less experienced contracting staff. 
Congress required DOD in the Fiscal Year 2007 National Defense Authorization Act to prescribe regulations on excessive pass-through charges, which are defined in the act as charges (overhead and profit) for work performed by a contractor or subcontractor that adds no, or negligible, value. In April 2007, DOD issued an interim rule to require a contract clause that provides audit rights and cost recovery should these excessive pass-through charges occur. The rule also requires specific disclosure by a contractor that intends, or subsequently decides, to subcontract most of the work. Specifically, the contractor is to identify in its proposal the percentage of effort it intends to perform, and the percentage expected to be performed by each subcontractor under the contract, task order, or delivery order, or if a decision to subcontract comes after award, the contractor must notify the contracting officer in writing. Under the interim rule, prime contractors are required to inform a contracting officer of the value added that they are providing when subcontract costs exceed 70 percent of the total contract value. While the rule may enhance insight into contractor value added under these circumstances, it will not address DOD’s challenges in obtaining insight into its supply chain and costs—key information needed to mitigate risk of excessive pass-through charges according to companies we interviewed. In addition, DOD has not developed guidance for contracting officers to use in implementing the rule. Specifically, it lacks guidance that addresses contract risk associated with presence of competition, contract type, and unique circumstances where requirements are urgent in nature. As we found during our contract review, these are key risk factors to take into account when determining the degree of assessment needed, not necessarily the percentage of subcontracting alone. 
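The disclosure trigger described above can be sketched as a simple threshold check. This is an illustrative sketch only: the 70 percent figure comes from the interim rule, but the function and its inputs are hypothetical, not part of any DOD system.

```python
def requires_value_added_disclosure(total_contract_value,
                                    subcontract_costs,
                                    threshold=0.70):
    """Return True when subcontract costs exceed 70 percent of the total
    contract value, the point at which the interim rule requires the
    prime contractor to describe the value it adds."""
    if total_contract_value <= 0:
        raise ValueError("total contract value must be positive")
    return subcontract_costs / total_contract_value > threshold

# Hypothetical example: $75 million subcontracted on a $100 million
# contract exceeds the 70 percent threshold.
print(requires_value_added_disclosure(100_000_000, 75_000_000))  # True
```

Note that a check like this captures only the percentage of subcontracting; as discussed below, contract risk factors such as competition, contract type, and urgency also bear on the degree of assessment needed.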
However, contracting officers have wide latitude in exercising judgment on how to apply the existing tools in the FAR. While the contracting officers we met with were generally applying these tools in conducting their assessments of contractor value added for the selected contract actions we reviewed, they indicated that guidance—particularly for newer and less experienced staff—would help ensure the tools are consistently applied and that assessments are properly documented in the contract files. We brought this to the attention of DOD procurement policy officials, who told us that as they develop implementing guidance, they will emphasize that contracting officers need to take contract risk into account in conducting their contractor value added assessments and to document the results. While the regulation allows contracting officers to recoup charges that they determine to be excessive, it does not specify the roles of DCAA and DCMA in this process. As we found in our contract review and in discussions with contracting officials, these organizations played a key role in assessing cost information. Contracting officials also said it is important for newer and less experienced staff to involve DCAA and DCMA as appropriate. Officials from both of these agencies indicated that they would play a role in implementing the regulation and in assisting contracting officers in determining whether costs are excessive, but they said they had not fully considered the extent of that role or the resources needed. We brought this to the attention of DOD procurement policy officials, who agreed that these organizations need to be involved in assisting contracting officers in assessing whether pass-through charges are excessive. As they develop implementing guidance, the policy officials said they will emphasize the involvement of DCAA and DCMA in facilitating the assessments as appropriate. 
Conclusions Assessing contractor value added and minimizing the risk of excessive pass-through charges have taken on heightened importance given the increasing role of subcontractors in providing DOD with critical goods and services—especially in emergency situations, where routine contracting practices may be overlooked in an effort to meet urgent requirements. Historically, DOD has lacked insight into subcontractor costs, raising questions about the value added when multiple layers of contractors perform the work. Optimizing competition—an acquisition strategy the private sector companies we interviewed emphasized when purchasing goods and services for their own operations—can minimize DOD's risk of making excessive payments, since market forces generally control contract cost. However, without insight into the supply chain and associated costs, it is difficult to assess the risk of excessive pass-through charges. While DOD's new interim rule is a step in the right direction, it by itself will not help contracting officials gain this insight. Further, although we found that contracting officers were generally applying tools in the FAR in conducting assessments of contractor value added for selected contracts we reviewed, implementing guidance for the new rule would help ensure these tools are consistently applied in determining the degree of assessment needed, documenting the assessments, and appropriately involving DCAA and DCMA. Recommendations for Executive Action As DOD finalizes its rule on preventing excessive pass-through charges and develops implementing guidance to ensure consistency in how contracting officials assess contractor value added, we recommend that the Secretary of Defense direct the Director of Defense Procurement and Acquisition Policy to take the following actions: Require contracting officials to take risk into account when determining the degree of assessment needed. 
Risk factors to consider include whether (1) the contract is competed; (2) the contract type requires the government to pay a fixed price or costs incurred by the contractor; and (3) any unique circumstances exist, such as requirements that are urgent in nature. Require contracting officials to document their assessments of contractor value added in the contract files. Involve DCAA and DCMA in facilitating assessments as appropriate. Agency Comments We provided a draft of this report to DOD for comment. In written comments, DOD concurred with our recommendations and noted actions planned and underway that are directly responsive. Specifically, DOD anticipates issuing a second interim rule in February 2008 and expects a final rule in August 2008. Once the rule is finalized, DOD intends to provide extensive guidance to supplement the regulation that will cover a range of issues, including those GAO recommended. DOD’s comments are reproduced in appendix III. We are sending copies of this report to the Secretary of Defense and will make other copies available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or calvaresibarra@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were John Neumann, Assistant Director; Barry DeWeese; Yvette Gutierrez- Thomas; Kevin Heinz; Maurice Kent; Julia Kennon; John Krump; and Karen Sloan. Appendix I: Scope and Methodology To determine the Department of Defense’s (DOD) approach to assessing the risk of excessive pass-through charges when work is subcontracted, we reviewed and analyzed tools in the Federal Acquisition Regulation (FAR) and the Defense Federal Acquisition Regulation Supplement (DFARS). 
We met with DOD officials from the Office of the Secretary of Defense, Defense Procurement and Acquisition Policy, the Defense Contract Audit Agency (DCAA), and the Defense Contract Management Agency (DCMA), and with contracting officials from 11 DOD contracting locations, to discuss these regulations and their approach to assessing the risk of pass-through charges, evaluating contractor value added, and the factors that drive these assessments. We selected 10 of these locations, which had some of the highest spending in fiscal year 2005, to visit and discuss specific contracts, policies, and processes related to evaluating contractor value added when work is subcontracted. In addition, while we did not visit the Army Tank and Automotive Command in Warren, Michigan, we obtained contract documents from it for review. While our selection of locations cannot be generalized to the population of all DOD contracting locations, those selected represented each of the military services and a variety of goods and services procured. The specific military locations we visited were:

Air Force Space Command, Colorado Springs, Colorado
Air Force 21st Space Wing, Peterson Air Force Base, Colorado Springs, Colorado
Air Force 50th Space Wing, Schriever Air Force Base, Colorado
Air Force Space and Missile Command, El Segundo, California
U.S. Army Space and Missile Defense Command, Peterson Air Force Base, Colorado Springs, Colorado
Army Contracting Agency, Fort Carson, Colorado Springs, Colorado
Army Communications and Electronics Command, Fort Monmouth, New Jersey
Army Sustainment Command, Rock Island, Illinois
Naval Sea Systems Command, Washington Navy Yard, District of Columbia
Naval Air Systems Command, Patuxent River, Maryland. 
To provide a broad perspective on the extent to which contracting officials apply existing tools in acquisition regulations in assessing the risk of excessive pass-through charges and contractor value added, we analyzed and discussed 32 selected contract actions awarded in fiscal year 2005 at the locations we visited. These selected actions included base contracts, task orders under contracts, and modifications to contracts. Since DOD's procurement information system—the DD350 database—does not contain a specific field for percentage of subcontracting, we selected, from all DOD contract actions over $10 million that reported submitting a subcontracting plan, a nongeneralizable sample of actions that provided a mix of contract types, levels of competition, dollar values, and goods and services procured. We also selected small business contracts over $10 million; while small businesses are not required to submit subcontracting plans, the dollar value of these actions would have otherwise required them. We relied on data provided in the DD350 database and verified the reliability of the information where practical with contracting officers, with contract files at the locations visited, and through review of contract documents in DOD's Electronic Data Access Web-based system. On the basis of this assessment, we found the DD350 database to be sufficiently reliable for our purposes. We reviewed and analyzed available documentation for the selected DOD contract actions and discussed these actions with the responsible contracting officials. While our selection of contract actions cannot be generalized to all DOD contracts, those selected represent each of the military services and a number of different contract actions, allowing us to obtain a variety of perspectives from DOD contracting officials. 
In reviewing and discussing contract files with contracting officials at these locations, we examined the factors that drove the need to assess prime contractor and subcontractor costs, the guidance and tools available to conduct assessments, and the level of insight into subcontracting activity. We also reviewed documents from DCAA and DCMA included in the contract files that were used to support contracting officials' decisions, and we met with both of these agencies to discuss their roles in assisting contracting officials. Because no criteria exist for assessing value added relative to costs, our review does not include a determination of whether the DOD contracting officer adequately assessed the value added and costs, but rather the extent to which the contracting officer applied existing tools in the FAR and DFARS. To obtain additional information on contracts that had been identified as having questionable costs, we also interviewed the Army Corps of Engineers, the Army Audit Agency, and the DOD Office of Inspector General. We reviewed and analyzed documents from these agencies as well as past GAO work to determine how tools in acquisition regulations were applied in contracts with questionable costs. We also discussed strategies being explored to help mitigate risks of excessive costs on future contracts. To identify the strategies selected private sector companies use to minimize the risk of excessive pass-through charges when purchasing goods and services, we selected nine companies to interview. Our selection of companies was based on diversity in commercial and public sector contracting and a range of goods and services. In the company interviews, we discussed perspectives and practices for managing and assessing value added relative to prime and subcontractor costs. In addition to the interviews, we reviewed the findings and recommendations of the Acquisition Advisory Panel's 2007 report on commercial practices. 
We also reviewed previous GAO reports and issues raised in various GAO acquisition forums and met with industry associations, such as the Professional Services Council and the Coalition for Government Procurement. Table 2 provides a list and description of the companies we interviewed. To assess DOD's interim rule to prevent excessive pass-through charges, we reviewed the specific mandate for DOD in Section 852 of the Fiscal Year 2007 National Defense Authorization Act. We discussed this requirement with DOD procurement policy officials and reviewed the interim DOD rule on excessive pass-through charges issued in response to the mandate, as well as any changes made based on public comments received. We also spoke to DCAA and DCMA regarding their role in implementing the rule and obtained perspectives from contracting officials we interviewed at the military locations we visited on potential challenges in implementing the rule. Appendix II: Key Elements for Contracting Officers in Assessing Contractor Value Added Acquisition planning determines the requirements of the contract, the level of competition available based on market research, and the appropriate contract vehicle to be used. Requirements and logistics personnel should avoid issuing requirements on an urgent basis or with unrealistic delivery or performance schedules, since this generally restricts competition and increases prices. Early in the planning process, the planner should consult with requirements and logistics personnel who determine type, quality, quantity, and delivery requirements. (FAR 7.104(b)) Market research is conducted to determine if commercial items are available to meet the government's needs or could be modified to meet the government's needs. The extent of market research will vary, depending on such factors as urgency, estimated dollar value, complexity, and past experience. Market research involves obtaining information specific to the item being acquired using a variety of resources. 
The availability or unavailability of items in commercial markets drives the contracting procedures used. (FAR 10.001 and 10.002) Contracting officers shall provide for full and open competition through use of competitive procedures that are best suited to the circumstances of the contract action and consistent with the need to fulfill the government’s requirements efficiently. (FAR Part 6.101) When adequate price competition exists, generally no additional information is necessary to determine the reasonableness of price. (FAR 15.403-3) Acquisition plans should address when subcontract competition is both feasible and desirable and describe how it will be sought, promoted, and sustained throughout the course of the acquisition. (FAR 7.105(b)(2)(iv)) A wide selection of contract types is available to the government and contractors in order to provide needed flexibility in acquiring goods and services. The objective is to negotiate a contract type and price that will result in reasonable contractor risk and provide the contractor with the greatest incentive for efficient and economical performance. A firm-fixed-price contract, which best utilizes the basic profit motive of business enterprise, shall be used when the risk involved is minimal or can be predicted with an acceptable degree of certainty. However, when a reasonable basis for firm pricing does not exist, other contract types should be considered, and negotiations should be directed toward selecting a contract type that will appropriately tie profit to contractor performance. If the contractor proposes extensive subcontracting, a contract type reflecting the actual risks to the prime contractor should be selected. (FAR 16.1) In different types of acquisitions, the relative importance of cost or price may vary. For example, in acquisitions where the requirement is clearly definable and the risk of unsuccessful contract performance is minimal, cost or price may play a dominant role in source selection. 
The less definitive the requirement, the more development work required, or the greater the performance risk, the more technical or past performance considerations may play a dominant role in source selection. (FAR 15.101) Normally, competition establishes price reasonableness. Therefore, when contracting on a firm-fixed-price or fixed-price with economic price adjustment basis, comparison of the proposed prices will usually satisfy the requirement to perform a price analysis, and a cost analysis need not be performed. In limited situations, a cost analysis may be appropriate to establish reasonableness of the otherwise successful offeror’s price. When contracting on a cost-reimbursement basis, evaluations shall include a cost realism analysis to determine what the government should realistically expect to pay for the proposed effort, the offeror’s understanding of the work, and the offeror’s ability to perform the contract. The contracting officer shall document the cost or price evaluation. (FAR 15.305(a)(1)) The source selection records shall include an assessment of each offeror’s ability to accomplish the technical requirements; and a summary, matrix, or quantitative ranking, along with appropriate supporting narrative, of each technical proposal using the evaluation factors. Cost information may be provided to members of the technical evaluation team in accordance with agency procedures. Additionally, the evaluation should take into account past performance information regarding predecessor companies, key personnel who have relevant experience, or subcontractors that will perform major or critical aspects of the requirement when such information is relevant to the instant acquisition. (FAR 15.305(a)(2)) In negotiated acquisitions, each solicitation that is expected to exceed $550,000 ($1,000,000 for construction) and that has subcontracting possibilities shall require a subcontracting plan. 
If the offeror fails to negotiate a subcontracting plan acceptable to the contracting officer within the time limit prescribed by the contracting officer, the offeror will be ineligible for award. (FAR 19.702(a)(1)&(2)) Each subcontracting plan must include percentage goals for using different types of small businesses, a statement of the total dollars planned to be subcontracted, a statement of the total dollars planned to be subcontracted to small businesses, and a description of the principal types of supplies and services to be subcontracted. (FAR 19.704(a)) Contracting officers must purchase supplies and services from responsible sources at fair and reasonable prices. When prices are based on adequate price competition, no other information is generally needed. In other cases, more information may be needed. (FAR 15.402(a)) When required, a disclosure statement must be submitted as a part of the offeror’s proposal unless the offeror has already submitted a statement disclosing the practices used in connection with the pricing of the proposal. (FAR 52.230-1(b)) Prime contractors or higher tiered subcontractors can be required to include subcontractor accounting practices in their disclosure statements. (FAR 30.202-8(a)) DCAA provides audit services in assuring compliance with Cost Accounting Standards. The contracting officer shall require the prime contractor to submit cost and pricing data and a certificate that states that the data are accurate, complete, and current. Any subcontractor or prospective subcontractor should submit similar data and certification to the prime contractor or appropriate subcontractor tier. (FAR 15.403-4) The contracting officer is responsible for obtaining information that is adequate for evaluating the reasonableness of the price or determining cost realism. The contracting officer may request other information to use in this evaluation, including the prices at which the same item or similar items have previously been sold. 
(FAR 15.403-3) The contracting officer is responsible for the determination of price reasonableness for the prime contract, including subcontracting costs. The contracting officer should consider whether a contractor or subcontractor has an approved purchasing system, has performed cost or price analysis of proposed subcontractor prices, or has negotiated the subcontract prices before negotiation of the prime contract, in determining the reasonableness of the prime contract price. This does not relieve the contracting officer from the responsibility to analyze the contractor’s submission, including subcontractor’s cost or pricing data. (FAR 15.404-3) The contracting officer should request field pricing assistance when the information available at the buying activity is inadequate to determine a fair and reasonable price. The contracting officer must tailor requests to reflect the minimum essential supplementary information needed to conduct a technical or cost or pricing analysis. (FAR 15.404-2). DCAA’s Financial Liaison Advisors provide financial advisory service support at customer sites to assist contracting officers in determining fair and reasonable contract prices. These services include market research and analysis of certified cost and pricing data and other information. DCMA can also provide requested assistance through technical analysis (i.e., engineering evaluation of proposed labor hours or material requirements) and special analyses (i.e., evaluations of specific cost elements, rates and factors, or, in some cases, estimating methodologies). DCAA, as the responsible audit agency, submits information and advice to the requesting activity based on the auditor’s analysis of the contractor’s financial and accounting records or other related data as to the acceptability of the contractor’s incurred and estimated costs. 
DCAA may also perform other analyses and reviews that require access to the contractor’s financial and accounting records supporting proposed and incurred costs. (FAR 42.101) The contracting officer delegates many functions to a contract administration office. This office can be DCMA or another agency that offers a wide variety of administrative services. However, since the prime contractor is responsible for managing its subcontracts, this office’s review of subcontracts is normally limited to evaluating the prime contractor’s management of the subcontracts. Therefore, supporting contract administration shall not be used for subcontracts unless the Government otherwise would incur undue cost or successful completion of the prime contract is threatened. For major system acquisitions, the contracting officer may designate certain high-risk or critical subsystems or components for special surveillance in addition to requesting supporting contract administration. (FAR 42.201 and 42.202) The contracting officer may require consent for subcontracts to protect the government because of the subcontract type, complexity, or value, or because the subcontract needs special surveillance. These can be subcontracts for critical systems, components, or services. (FAR 44.201-1(a)) Notification submitted to the contracting officer should include a description of the supplies or services to be subcontracted, the type of subcontract to be used, the proposed subcontractor and proposed price, the subcontractor’s current cost or pricing data, certificate of cost and pricing data, and the subcontractor’s Disclosure Statement or Certificate relating to Cost Accounting Standards. (FAR 52.244-2(f)(1)) The objective of a contractor purchasing system review is to evaluate the efficiency and effectiveness with which the contractor spends government funds and complies with government policy when subcontracting. 
The review provides the administrative contracting officer a basis for granting, withholding, or withdrawing approval of the contractor’s purchasing system. (FAR 44.301) Evaluation of the purchasing system pays special attention to items such as the degree of price competition obtained, methods of obtaining accurate cost or pricing data, and methods of evaluating subcontractor responsibility. (FAR 44.303)
Appendix III: Comments from the Department of Defense
One-third of the Department of Defense's (DOD) fiscal year 2006 spending on goods and services was for subcontracts. Concerns have been raised by DOD auditors and Congress about the potential for excessive pass-through charges by contractors that add little or no value when work is subcontracted. To better understand this risk, Congress mandated that GAO assess the extent to which DOD may be vulnerable to these charges. This report examines (1) DOD's approach to assessing the risk of excessive pass-through charges when work is subcontracted, (2) the strategies selected private sector companies use to minimize risks of excessive pass-through charges when purchasing goods and services, and (3) DOD's interim rule to prevent excessive pass-through charges. GAO's work is based on analysis of 32 fiscal year 2005 DOD contract actions at 10 top DOD contracting locations and discussions with DOD acquisition policy, audit, and contracting officials, including Defense Contract Audit Agency (DCAA) and Defense Contract Management Agency (DCMA) staff. GAO also interviewed nine selected private sector companies with diverse contracting experience. Although no specific criteria exist for evaluating contractor value added, DOD contracting officials generally rely on tools in the Federal Acquisition Regulation (FAR) to assess the risk of excessive pass-through charges when work is subcontracted. For the 32 selected contract actions GAO reviewed, DOD contracting officials generally applied these tools to their assessments. The degree of assessment depended on whether the contract was competed and whether the contract type required the government to pay a fixed price or costs incurred by the contractor. 
When using full and open competition, contracting officials assessed contractor value added based on the technical ability to perform the contract, but did not separately evaluate cost since market forces generally control contract costs, potentially minimizing the risk of excessive pass-through charges. However, when using noncompetitive contracts, contracting officials were required to evaluate more detailed cost information in assessing value added, as market forces did not determine the contract cost. For example, for a $3 billion noncompetitive contract for an Air Force satellite program, contracting officials assessed detailed cost or pricing data that included subcontractor costs, and received DCAA and DCMA support to negotiate lower overall contract costs. However, assessing contractor value added is especially challenging in unique situations where requirements are urgent in nature and routine contracting practices may be overlooked. Related GAO work and DOD audits on contracts awarded for Hurricane Katrina recovery efforts found multiple layers of subcontractors, questionable contractor value added, increased costs, and lax oversight. The selected private sector companies GAO interviewed rely heavily on acquisition planning, knowledge of their supply chains, and management of contractual relationships to minimize risk of excessive pass-through charges when purchasing goods and services. They seek to optimize competition to minimize overall contract costs, and several companies indicated that they prefer fixed-price competitive arrangements. In addition, some form collaborative business relationships with contractors and subcontractors that provide greater insight into their supply chains and costs--a challenge DOD continues to face. When using other than fixed-price contracts, they recognize the financial risks and ensure proper oversight and accountability. 
As GAO has reported in the past, DOD's use of riskier contracts, such as time-and-materials contracts, has not always ensured good acquisition outcomes. DOD recently issued an interim rule requiring a contract clause in all eligible contracts, which allows it to recoup contractor payments that contracting officers determine to be excessive. The rule also requires detailed information from contractors on their value added when subcontracting costs reach 70 percent or more of total contract cost. However, the rule alone will not provide greater insight into DOD's supply chain and costs--information companies told us they use to mitigate excessive costs. Further, contracting officials indicated the need for guidance to ensure effective implementation and consistent application of tools in the FAR as appropriate.
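The interim rule's reporting trigger is a simple ratio test on subcontracting costs. As an illustration only (the function name and inputs below are hypothetical, not part of the DOD rule or GAO's analysis), the 70 percent threshold can be sketched as:

```python
def requires_value_added_detail(total_contract_cost: float,
                                subcontracted_cost: float) -> bool:
    """Return True when subcontracted work accounts for 70 percent or more
    of total contract cost, the point at which the interim rule requires
    contractors to detail their value added (illustrative sketch only)."""
    if total_contract_cost <= 0:
        raise ValueError("total contract cost must be positive")
    return subcontracted_cost / total_contract_cost >= 0.70
```

For example, a $10 million contract with $7.5 million subcontracted would cross the threshold, while one with $6 million subcontracted would not.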
Background
Many infectious diseases—including pneumonia, tuberculosis, and common childhood ear infections—are caused by bacteria that have developed resistance to one or more previously effective antibiotics. Resistance may occur when the introduction of an antibiotic imposes “selective pressure” on an organism that has mutated by random genetic change. The antibiotic will not be able to kill the resistant strain of the organism. If susceptible bacteria are killed, remaining resistant bacteria may then become the dominant strain. For example, for nearly 40 years after penicillin was introduced, it was used successfully to treat pneumonia; today, penicillin-resistant strains of pneumonia are dominant in many countries. Also, disease-causing bacteria—or pathogens—may develop resistance spontaneously. For further information about the development of antibiotic resistance and the public health burden associated with resistant bacteria, see Antimicrobial Resistance: Data to Assess Public Health Threat From Resistant Bacteria Are Limited (GAO/HEHS/NSIAD/RCED-99-132, Apr. 28, 1999).
Experts Believe the Use of Antibiotics in Agriculture Is Linked to the Emergence of Antibiotic Resistance
Antibiotics are used both in food-producing animals and on food plants to treat specific diseases afflicting specific animals and plants and to prevent the spread of diseases that are known to occur in particular herds, flocks, and crops under certain conditions. Antibiotics are also used in food animals to enhance their growth rate and feed efficiency—that is, to increase the amount of feed that is absorbed by the animal. Antibiotics used on animals may be obtained over-the-counter in feed stores and are included in commercially available animal feed. Antibiotics may also be dispensed under a veterinarian’s prescription. 
For larger animals (such as cattle), antibiotics may be administered by injection or mixed with water; for smaller animals (such as poultry), they are generally mixed with feed or water. As a pesticide for disease treatment and prevention, antibiotics are generally sprayed onto plants. However, data are not available on the quantities of specific antibiotics used in agriculture and the purposes for which they are used. Appendix II presents information on the major classes of antibiotics, provides examples of specific antibiotics within each class, and indicates whether the antibiotics within that class have been approved for use on animals, plants, and/or humans.
Research Has Linked Three Diseases With Antibiotic-Resistant Strains Affecting Humans to the Use of Antibiotics in Animals
Experts, including those in the Department of Health and Human Services’ (HHS) Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC), believe that resistant strains of three specific organisms that cause illness or disease in humans—Salmonella, Campylobacter, and E. coli—are linked to the use of antibiotics in animals. Salmonella and Campylobacter infections generally cause intestinal distress and do not require medical treatment. However, each year several thousand persons have severe illnesses resulting in hundreds of deaths. Young children, the elderly, and patients whose immune systems are compromised are especially at risk. Severe cases of Salmonella have been associated with infections in the blood and the lining of the brain and other deep body tissue. According to CDC, each year an estimated 8,000 to 18,000 hospitalizations, 2,400 bloodstream infections, and 500 deaths are associated with Salmonella infections. One in 1,000 Campylobacter infections results in Guillain-Barré Syndrome, a disease that can cause paralysis. Most E. 
coli strains are relatively harmless in humans, but one strain causes a potentially serious illness in children and individuals with weakened immune systems. However, there are no current comprehensive estimates of the extent to which antibiotic-resistant strains of Salmonella, Campylobacter, and E. coli have resulted in severe illnesses or deaths in humans. According to scientists at CDC, these organisms acquire resistance to antibiotics while in the animal. The resistant strain of the disease is then transferred to humans through food or through contact with animals or animal waste. A more detailed discussion of these organisms and their development of antibiotic resistance is presented in appendix II. In addition to the direct foodborne transfer of antibiotic resistance from these three specific organisms, some research suggests that the use of antibiotics in food animals may reduce the effectiveness of related antibiotics used to treat humans. This concern is often raised about antibiotics administered in low doses over a continuous period, such as those used in agriculture to promote animal growth. The research most often cited on this issue was conducted in Denmark during the early 1990s and concerns the closely related antibiotics avoparcin and vancomycin. Scientists there reported linking the use of avoparcin in animals to the emergence of vancomycin-resistant enterococci—generally known as VRE—in humans. VRE is an organism generally contracted in a hospital setting that causes serious, and in some cases untreatable, infections in humans. In the United States, avoparcin has never been approved for use in agriculture or human medicine, and vancomycin has never been approved for use in agriculture. However, according to FDA officials, FDA discovered an instance in which avoparcin was used illegally in the United States in the production of veal and possibly other meat products. 
FDA pursued regulatory enforcement, and, according to officials, the individual responsible was convicted of a crime. Vancomycin is an extremely important drug in the treatment of antibiotic-resistant bacterial infections in humans, many of which are serious and life-threatening and cannot be treated by any other currently approved antibiotic. According to CDC, the excessive use of vancomycin in human medicine is a primary cause for the rapid rise of VRE in the United States. Studies estimate that doctors inappropriately prescribe vancomycin in treating illnesses in humans 30 to 80 percent of the time. While research is available on the emergence of antibiotic-resistant strains of foodborne pathogens, such as Salmonella, Campylobacter, and E. coli, agricultural use is only one factor contributing to the problem of antibiotic resistance in nonfoodborne human pathogens (such as VRE). Only a few studies, primarily in Europe, have examined agriculture’s contribution—relative to the contributions of other factors, such as the inappropriate prescribing of antibiotics in human medicine—to the development of resistance in nonfoodborne human pathogens. Appendix I identifies several studies, reports, and scientific articles by, among others, the National Research Council, World Health Organization, Institute of Medicine, Office of Technology Assessment, and the British House of Lords, that discuss and assess the research on these issues.
Several Agencies Have Responsibilities Regarding the Use of Antibiotics in Agriculture
Several federal agencies have roles involving the use of antibiotics in agriculture and a multiagency program—the National Antimicrobial Resistance Monitoring System-Enteric Bacteria—tracks the development of antibiotic-resistant strains of Salmonella and Campylobacter (see table 1). Two agencies are responsible for approving the use of antibiotics by the agriculture industry. 
FDA approves all antibiotics used for food-producing animals; the Environmental Protection Agency (EPA) approves antibiotics used as pesticides on produce and plants. FDA has approved many antibiotics for use on food-producing animals; EPA has approved two antibiotics for use on plants. FDA and EPA each establish maximum allowable residue levels (tolerances) for the antibiotics they approve and have regulatory authority to withdraw approvals, although withdrawing approval can be a lengthy and difficult process. The U.S. Department of Agriculture’s (USDA) Food Safety and Inspection Service (FSIS) operates a program to ensure that antibiotic residues in food products are within established limits. FSIS’ National Residue Program tests meat and poultry products for antibiotic residues. These tests are performed on the carcasses of slaughtered animals and on samples collected at ports of entry throughout the United States. The National Antimicrobial Resistance Monitoring System-Enteric Bacteria program is the only federal program specifically focused on testing for antimicrobial resistance related to agriculture. The program was created in 1996 as a joint effort by FDA, CDC, and USDA. Initially, Salmonella was selected as the sentinel organism for tracking antibiotic resistance. Samples for this program are collected from humans in clinical settings and from animals in clinical and nonclinical settings. The samples are tested for susceptibility to 17 antibiotics. These antibiotics were selected because they are either commonly used in animal and/or human medicine or because they are very important to human medicine. CDC tests the samples collected from humans, and USDA tests the samples collected from animals. In 1997, the program was expanded to include testing of Campylobacter samples. The head of veterinary testing for this program told us, however, that its scope has been relatively limited because of the limited resources devoted to it. 
Two other federal programs collect information related to disease-causing organisms and antibiotic use, but neither is focused on antibiotic resistance. USDA’s Animal and Plant Health Inspection Service operates the National Animal Health Monitoring System. Through this program, the agency conducts studies on animal health that include information about antibiotic use—the reasons producers use antibiotics, the way antibiotics are administered to the animals, and the size of producers’ operations. The studies do not collect information about the quantities of antibiotics used. However, the program has contributed samples for the National Antimicrobial Resistance Monitoring System-Enteric Bacteria program. CDC operates the Foodborne Disease Active Surveillance Network—also known as FoodNet. This is a surveillance system designed to allow more accurate and precise estimates and interpretation of the prevalence of foodborne diseases over time.
Debate Is Ongoing Over the Potential Risk to Human Health From the Agricultural Use of Antibiotics
The debate over whether to further regulate or restrict the use of antibiotics in agriculture centers on the risk their use may pose to human health relative to their benefits to agriculture. Much of this debate concerns the uncertainty about whether and to what extent antibiotic resistance in humans may be acquired from the continued application of low doses of certain antibiotics in animal feeds. We first questioned the health implications of using antibiotics in animal feeds in 1977. We noted that the safety and effectiveness of the practice had not been established and that the possibility existed that antibiotic-resistant bacteria may develop and be transferred from animals to humans. Among other things, we recommended that FDA determine the safety of antibiotics used in animal feeds on the basis of available data and withdraw approval of any not shown to be safe. 
According to the Director of FDA’s Center for Veterinary Medicine, in 1978, FDA proposed withdrawing approval of penicillin and tetracycline for other than disease treatment in animals. In response to concerns over the absence of definitive data to confirm that those antibiotics presented a hazard to human health, FDA contracted with the National Academy of Sciences to review the available data. According to a June 1980 report by a House appropriations subcommittee, the Academy’s review found that “the postulated hazards to human health...were neither proven nor disproven.” The Academy recommended that additional research be conducted to fill data gaps. The subcommittee report asked FDA to delay implementing its proposal pending the final results of the additional research and evidentiary hearings. The World Health Organization, the United Nations agency responsible for monitoring global health, sponsored two recent conferences to examine the research on antibiotic resistance and agriculture. The first conference, in October 1997, addressed the medical impacts of the use of antimicrobials in food-producing animals. At the conclusion of this conference, scientists advocated (1) a more thorough assessment of the risks, (2) increased monitoring to detect the emergence of resistance, and (3) terminating the use of antibiotics for growth promotion in animals if they are also used in human medicine or are known to potentially become cross-resistant to antibiotics used in human medicine. Scientists attending the second conference in June 1998 recommended more research on the emergence of resistance to, and prudent practices for using, the class of antibiotics known as quinolones in animals.
Other Countries Believe Potential Human Health Risks Warrant Limiting Antibiotic Use in Agriculture
On the basis of their assessment of the potential risks, several countries have acted to reduce the agricultural use of antibiotics. 
The United Kingdom banned the use of penicillin and tetracycline for growth promotion in the early 1970s; other European countries followed suit shortly thereafter. Sweden banned the use of all antibiotics for growth promotion in 1986, and Denmark banned the use of one antibiotic in animal feed in 1998. Canada’s health department has called for a voluntary reduction in the amount of antibiotics used in agriculture. In December 1998, health ministers for the European Union voted to ban four antibiotics that were widely used to promote animal growth. They announced that they were taking this action as a precaution to minimize the risk of the development of resistant bacteria and to preserve the efficacy of certain antibiotics used in human medicine. The ban is scheduled to become effective for the 15 members of the European Union on July 1, 1999.
Associations Representing Agriculture and Pharmaceutical Industries and Veterinarians Believe Restricting Antibiotics Is Not Warranted
In the United States, associations representing beef, pork, and poultry producers and pharmaceutical manufacturers have stated that restricting the use of antibiotics in agriculture is not warranted and is not supported by science. In their view, the use of antibiotics in agriculture is only one potential contributor to antibiotic resistance in humans, and the extent of agriculture’s contribution has not been determined. They also believe that the research does not warrant restricting the use of antibiotics in agriculture. These associations believe that antibiotics are vital to agricultural industries and contend that most producers are already using antibiotics prudently. 
The Animal Health Institute, a trade association representing manufacturers of animal health products, including pharmaceuticals, has announced a plan that calls for (1) assessing the benefits and risks to humans from treating animals with antibiotics, (2) developing guidelines for prudently using antibiotics in farm animals, and (3) supporting improved surveillance and monitoring of the use of antibiotics. Associations representing beef, pork, and dairy producers are also advising their members on antibiotic use. The National Cattlemen’s Beef Association has advised its members to “strive to limit the need for use through sound husbandry and preventative practices.” Both the National Milk Producers Federation and the National Pork Producers Council have developed 10-point Quality Assurance programs that advise members how to properly use antibiotics during production. The National Broiler Council told us that poultry producers use antibiotics prudently. Officials from Tyson, the nation’s largest poultry producer, told us that the company stopped using antibiotics to promote animal growth more than 2 years ago and has been experimenting with alternative poultry production practices. The American Veterinary Medical Association has been working with its members to develop a set of principles aimed at safeguarding public health and educating veterinarians on the potential risks posed by antibiotic use in agriculture. The proposed principles include (1) emphasizing appropriate animal husbandry and hygiene, routine health examinations, and vaccinations in preference to antibiotics; (2) considering therapeutic alternatives prior to using antibiotics; (3) avoiding, in initial therapy, those antibiotics that are considered important in treating infections in humans; and (4) avoiding the inappropriate use of antibiotics, such as for viral infections without bacterial complications. 
Federal Efforts to Identify and Address Potential Risks
USDA, CDC, and FDA agree that antibiotics are critical in treating diseases in animals as well as humans. As we noted earlier, under the National Antimicrobial Resistance Monitoring System-Enteric Bacteria program, these agencies have been active in monitoring the emergence of antibiotic-resistant Salmonella since 1996 and resistant Campylobacter since 1997. They shared their concerns with us about the potential impact on human health from using antibiotics in agriculture. CDC and FDA agree that the agricultural use of antibiotics is a significant source of antibiotic resistance among foodborne pathogens. They also agree that the extent to which the agricultural use of antibiotics contributes to resistance in other—nonfoodborne—pathogens that cause diseases in humans is not precisely known, although evidence is increasing that these uses can be an important contributing factor. USDA’s activities have been limited to the testing and monitoring that the Food Safety and Inspection Service, the Animal and Plant Health Inspection Service, and the Agricultural Research Service do under the National Antimicrobial Resistance Monitoring System-Enteric Bacteria program. With regard to the debate over whether to further regulate or restrict the use of antibiotics in agriculture, USDA believes that, before any decisions are made, more research is needed to determine how animals acquire resistant strains of Salmonella, Campylobacter, and E. coli. USDA also believes that research is needed to determine the extent to which environmental sources contribute to the development of resistance in these pathogens. In addition, according to USDA officials, the potential health risks to humans from using antibiotics to promote animal growth need to be weighed against the economic benefits to the consumers of this use. CDC’s experts have advocated several measures to reduce the use of antibiotics in agriculture. 
CDC researchers believe that some antibiotics should not be used in animal feed to promote growth. These researchers told us that, in treating diseases, veterinarians need to ensure that they are prescribing the appropriate doses of antibiotics. To prevent the spread of disease, alternatives to antibiotics—such as improved hygiene and sanitation, feed safety, and “direct-fed microbials” (harmless bacteria that can outcompete harmful bacteria)—should be used when appropriate. With regard to promoting growth in animals, CDC supports restricting the use of antibiotics because CDC believes such use results in antibiotic resistance that is transmitted to humans through the food supply and may limit treatment options in ill persons. CDC has specifically suggested that FDA reconsider its approval of penicillin and tetracycline for promoting growth in animals, as well as its approval of fluoroquinolones for disease treatment and prevention in poultry. According to CDC, fluoroquinolones are vital antibiotics for the treatment of serious Salmonella and Campylobacter infections in humans. According to FDA officials, the development of fluoroquinolone-resistant strains of Salmonella and Campylobacter highlights the need to better address the potential development of bacterial resistance as part of the safety determination prior to approving new antibiotics for use in food-producing animals. FDA has publicly stated that the current regulatory structure is inadequate to properly evaluate the human health impact of antibiotic resistance from the use of antibiotics in food-producing animals. To address these concerns, in November 1998 FDA’s Center for Veterinary Medicine published Proposed Framework for Evaluating and Assuring the Human Safety of the Microbial Effects of Antimicrobial New Animal Drugs Intended for Use in Food-Producing Animals. 
This framework is intended to provide a mechanism for evaluating and ensuring the human safety of antibiotics and other antimicrobials used in food animals, including those used for growth promotion. The proposed framework includes components for assessing antibiotics on the basis of (1) the importance of the antibiotic to human medicine, (2) preapproval data showing a safe level of resistance transfer, (3) the establishment of thresholds for monitoring safe resistance levels, (4) the effect of proposed uses on human pathogen load, and (5) post-approval studies and monitoring. The Animal Health Institute objects to the post-approval monitoring requirements of FDA's proposed framework, arguing that they would be cost-prohibitive and are not justified from a public health standpoint. HHS noted that the framework sets out a conceptual risk-based process, the goal of which is to ensure that the antibiotics that are significant in human treatment are not lost because of the use of antimicrobials in animals while also providing for the safe use of antimicrobials in animals. The proposed framework includes a footnote indicating that the agency anticipates that the framework will be used, as resources allow, to review existing approved uses of antibiotics on food-producing animals. Although FDA officials told us that they intend to use the framework for evaluating the safety of all antibiotics currently approved, the framework does not specify a strategy or time frame for this reevaluation. In January 1999, FDA convened a public meeting to discuss and obtain comments on the proposed framework. FDA is in the process of revising the framework in response to the meeting and the written comments it has received. Finally, although FDA officials told us in July 1998 that they shared CDC's concerns about fluoroquinolone resistance, FDA has not initiated an action to withdraw its earlier approval for the use of fluoroquinolones on poultry. 
In addition, FDA approved fluoroquinolones for use on beef cattle in August 1998. Conclusions Although research has linked the use of antibiotics in agriculture to antibiotic-resistant strains of specific foodborne pathogens that affect humans, agricultural use is only one factor in the emergence of antibiotic resistance in nonfoodborne pathogens. Debate exists over whether the role of agricultural use in the overall burden of antibiotic-resistant infections of humans warrants further regulation or restriction. CDC believes the potential human health risks call for action to restrict antibiotics for growth promotion in animals. We first raised concerns in 1977 about the potential human health risks of this practice. Today, more than two decades later, federal agencies have not reached agreement on the safe use of antibiotics in agriculture. In developing a federal response, both human health concerns and the impact on the agriculture industry are factors to consider. Recommendation to the Secretaries of Agriculture and Health and Human Services In light of the emergence of antibiotic resistance in humans, questions about the extent that the agricultural use of antibiotics contributes to the human health burden, and the debate over whether further regulation or restriction of use in agriculture is needed, we recommend that the Secretaries of Agriculture and of Health and Human Services develop and implement a plan that contains specific goals, time frames, and resources needed to evaluate the risks and benefits of the existing and future use of antibiotics in agriculture, including identifying and filling critical data gaps and research needs. Agency Comments We provided copies of a draft of this report to USDA, HHS, and EPA for their review and comment. 
To obtain USDA’s comments, we met with officials in the Food Safety and Inspection Service; the Animal and Plant Health Inspection Service; and the Agricultural Research Service, including the Associate Deputy Administrator for Animal Production, Product Value and Safety. HHS provided written comments, which appear with our response in appendix IV. EPA had no formal comments on the draft report. The agencies also provided technical comments that we incorporated throughout the report as appropriate. USDA generally found the draft report to be an accurate presentation of the facts and agreed with the recommendation but believed the draft overstated the extent to which antibiotic use in agriculture may be linked to the emergence of antibiotic resistance in humans. USDA acknowledged that the use of antimicrobials can lead to the development of resistance but does not believe that there is consensus among experts that research has linked the use of antibiotics in agriculture to the emergence of resistant strains of Salmonella, Campylobacter, and E. coli in humans. USDA also commented that more research is needed before decisions are made to further regulate or restrict the use of antibiotics in agriculture. We have incorporated USDA’s positions into the report. HHS, on the other hand, believed the draft report did not fully recognize what HHS believes is the current state of knowledge—the increasing body of evidence pointing to the connection between the agricultural use of antibiotics and resistant foodborne illnesses, and the potential adverse human health consequences of antibiotic use in agriculture. 
Noting that preventive action is needed now, the Department stated, “steps need to be taken to decrease the use in agriculture of antibiotics that contribute to the development of resistant strains of human pathogens.” It also pointed out that the public health community is concerned not only with the growth promotion uses of antibiotics in agriculture but also with uses to treat and prevent disease, which “can be significant contributors to the pool of resistant microorganisms that enter the food chain” and often involve “critical drugs of last resort in treating a variety of human infections.” While the Department believes no further research is needed to prove the link for foodborne pathogens, it does believe more research would be beneficial in assessing agricultural practices that can reduce antimicrobial use, identifying the types of use that are high or low risk, and better understanding the potential risks of resistance transfer from animal organisms other than typical foodborne pathogens. With regard to our recommendation, HHS pointed out that under the Food and Drug Administration's proposed framework, applicants would have to conduct tests to determine the potential for inducing resistance for new animal drugs. It also stated that the framework would allow the Food and Drug Administration to withdraw already marketed antibiotics. While we agree that the framework is an important step, especially for developing data on antibiotic use, it does not include specific goals and time frames. Moreover, the proposal states that currently approved antibiotics and their uses will be assessed only to the extent resources allow. Without a specific plan, goals, time frames, and the identification of needed resources for such assessments, human health concerns that were raised more than two decades ago may remain unanswered. 
Finally, the disparity between USDA's and HHS' views further highlights the need for the departments to work together to ensure that both human health concerns and the impact on the agriculture industry are considered. We have incorporated HHS' comments into the report as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from the date of this letter. At that time, we will send copies of this report to the Honorable Richard Lugar, Chairman, Senate Committee on Agriculture, Nutrition, and Forestry; the Honorable Larry Combest, Chairman, and the Honorable Charles Stenholm, Ranking Minority Member, House Committee on Agriculture; the Honorable James Jeffords, Chairman, and the Honorable Edward M. Kennedy, Ranking Minority Member, Senate Committee on Health, Education, Labor, and Pensions; and the Honorable Tom Bliley, Chairman, and the Honorable John Dingell, Ranking Minority Member, House Committee on Commerce. We will also send copies to the Honorable Dan Glickman, Secretary of Agriculture; the Honorable Donna Shalala, Secretary of Health and Human Services; the Honorable Carol Browner, Administrator, Environmental Protection Agency; the Honorable Jane Henney, M.D., Commissioner, Food and Drug Administration; the Honorable Jeffrey P. Koplan, M.D., Director, Centers for Disease Control and Prevention; the Honorable Jacob J. Lew, Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. If you have any questions about this report, please contact me at (202) 512-5138. Major contributors to the report are listed in appendix V. 
Objectives, Scope, and Methodology This report examines (1) how antibiotics are used in agriculture and the implications of that use for human health; (2) federal roles and responsibilities for overseeing the use of antibiotics in agriculture; and (3) issues surrounding the debate over whether to further regulate or restrict the use of antibiotics in agriculture. To determine how antibiotics are used in agriculture, we spoke with officials from the Center for Veterinary Medicine in the Food and Drug Administration (FDA); the Office of Pesticide Programs in the Environmental Protection Agency (EPA); and the Agricultural Research Service (ARS), Animal and Plant Health Inspection Service (APHIS) and Food Safety and Inspection Service (FSIS) in the U.S. Department of Agriculture (USDA). We also met with officials representing specific agricultural industries, including the National Pork Producers Council, the National Milk Producers Federation, the National Broiler Council, and the National Cattlemen’s Beef Association. In addition, we spoke with officials from the American Feed Industry Association, the American Veterinary Medical Association, and the Animal Health Institute. From these meetings, we also identified the classes of antibiotics with examples of specific antibiotics approved for agriculture and the agricultural use for which they are approved. For comparison, we used the Physicians’ Desk Reference to identify classes of antibiotics and examples of antibiotics used on humans. 
To determine the implications for human health of the agricultural use of antibiotics, we reviewed the relevant research findings of studies, reports, and other scientific and medical literature, including, among others, “The Use of Drugs in Food Animals: Benefits and Risks,” National Research Council, July 9, 1998; “The Medical Impact of the Use of Antimicrobials in Food Animals,” World Health Organization, October 1997; Workshop Report, “Orphans and Incentives: Developing Technologies to Address Emerging Infections,” Institute of Medicine, 1997; Workshop Report, “Antimicrobial Resistance: Issues and Options,” Institute of Medicine, 1998; “Impacts of Antibiotic-Resistant Bacteria,” U.S. Congress, Office of Technology Assessment (OTA-H-629; Washington, D.C.: U.S. Government Printing Office, September 1995); “Joint Committee on the Use of Antibiotics in Animal Husbandry and Veterinary Medicine,” November 1969 (Swann Report); the British House of Lords, Select Committee on Science and Technology, Seventh Report; “Emergence of Multidrug-Resistant Salmonella Enterica Serotype Typhimurium DT-104 Infections in the United States,” The New England Journal of Medicine (May 1998); “Technology Crisis and the Future of Agribusiness: Antibiotic Resistance in Humans and Animals,” Harvard Business School, July 1997; “Can We Use Less Antibiotics?” Swedish Ministry of Agriculture, Food, and Fisheries, 1997; and “Protecting the Crown Jewels of Medicine: A Strategic Plan to Preserve the Effectiveness of Antibiotics,” Center for Science in the Public Interest, 1998. We met with officials and scientists from the Centers for Disease Control and Prevention (CDC), FDA, and USDA, and with other experts, both in and out of government, to obtain their expert opinions of the studies and research that have been done on the subject. 
To determine federal roles and responsibilities for overseeing the use of antibiotics in agriculture, we spoke with officials and collected data from FDA, EPA, CDC, and USDA’s Agricultural Research Service, Animal and Plant Health Inspection Service, and Food Safety and Inspection Service. We also reviewed applicable laws and regulations for these agencies. To determine the issues surrounding the debate over whether to further regulate or restrict the use of antibiotics in agriculture, we reviewed and analyzed reports and documents published by, among others, the Institute of Medicine, the National Research Council, the Office of Technology Assessment, CDC, FDA, USDA, EPA, agricultural industry associations, the New England Journal of Medicine, and the World Health Organization. We discussed the issues with officials from the National Institutes of Health, CDC, FDA, USDA, EPA, and the World Health Organization, and from associations representing agricultural associations, veterinarians, and pharmaceutical manufacturers. We performed our review from May 1998 through April 1999 in accordance with generally accepted government auditing standards. Approved Uses of Selected Classes of Antibiotics in the United States Table II.1 lists major classes of antibiotics, provides examples of specific antibiotics within each class, and indicates whether any antibiotics within the class have been approved for use on animals, plants, and/or humans. Based on information in the Physicians’ Desk Reference, this classification of antibiotics is grouped according to specific characteristics, such as similarities in chemical composition or in the way they kill or inhibit bacterial organisms. (The Physicians’ Desk Reference provides the latest available information on more than 2,500 specific pharmaceutical products. Each entry provides an exact copy of the product’s FDA-approved labeling.) 
While the table shows that many classes of antibiotics approved for use in agriculture are also approved for use in human medicine, it is important to note that the antibiotics cited as examples may or may not be the antibiotic approved for a particular use. For example, only two antibiotics have been approved for use on food plants: streptomycin, which is an antibiotic in the class of aminoglycosides, and oxytetracycline, an antibiotic in the class of tetracyclines. Table II.1: Major Classes of Antibiotics, Examples in Each Class, and Approval for Use on Animals, Plants, and/or Humans. Antibiotic-Resistant Strains Have Emerged in Three Food-Related Organisms That Cause Diseases in Humans Federal experts believe that research has linked the use of antibiotics in agriculture to the emergence of antibiotic-resistant strains of three disease-causing organisms. These organisms, which are known to cause illness or disease in humans, are Salmonella, Campylobacter, and Escherichia coli, commonly known as E. coli. Salmonella Salmonella is an organism commonly found in poultry, eggs, beef, and other foods of animal origin. According to public health officials, an estimated 800,000 to 4 million cases of Salmonella infections occur each year in the United States. Salmonella typically causes intestinal distress and does not require medical treatment. 
However, severe cases of Salmonella have been associated with reactive arthritis, as well as with infections in the blood, in the meningeal linings of the brain, and in other deep body tissues. Persons experiencing severe symptoms often seek medical treatment. According to CDC, each year an estimated 8,000 to 18,000 hospitalizations, 2,400 bloodstream infections, and 500 deaths are associated with Salmonella infections. Also, according to CDC, 40 percent of people with a Salmonella infection who seek medical attention are treated with antibiotics. One particularly serious strain of Salmonella—Salmonella DT-104—is known to be resistant to several antibiotics. CDC estimates that between 68,000 and 340,000 cases of Salmonella DT-104 occur annually in the United States. About 95 percent of Salmonella DT-104 strains are resistant to five antimicrobials—ampicillin, chloramphenicol, streptomycin, sulfonamides, and tetracycline. Human illness from Salmonella DT-104 was first recognized in the United Kingdom in the mid-1980s. In 1993, veterinarians in England began to treat poultry with fluoroquinolones, an important class of antibiotics for treating diseases in humans. By 1996, United Kingdom scientists reported that 14 percent of the Salmonella DT-104 strains had a decreased susceptibility to fluoroquinolones. Scientists are very concerned about the development of fluoroquinolone-resistant Salmonella, because fluoroquinolones are the drugs of choice to treat Salmonella infections in adults. Although fluoroquinolone-resistant Salmonella infections are currently rare in the United States, there has been a trend of decreasing susceptibility to fluoroquinolones since they were first approved for agricultural use in 1995. Campylobacter Campylobacter is also an organism commonly found in poultry and other food of animal origin, including pork and beef. According to public health officials, 2 million to 4 million people suffer Campylobacter infections annually. 
Campylobacter infections generally cause intestinal distress and do not require medical treatment. However, one in every 1,000 reported cases of Campylobacter results in Guillain-Barré Syndrome, a disease associated with paralysis. The first cases of domestically acquired fluoroquinolone-resistant Campylobacter in humans in the United States were identified in 1996, shortly after FDA approved fluoroquinolones for use in poultry. World Health Organization scientists concluded that prior to the use of fluoroquinolones in animals, there had been no reports of fluoroquinolone-resistant Campylobacter infections in humans who had no previous exposure to this class of antibiotics. CDC scientists believe this provides evidence that antibiotic-resistant strains of Campylobacter are transmitted directly from animals to humans. E. Coli Although many strains of E. coli are carried normally in the intestines of humans and animals, some strains cause foodborne illnesses. One strain—E. coli O157:H7—causes potentially serious illness, particularly for children and individuals with weakened immune systems. Each year in the United States, an estimated 50 to 100 people die from E. coli O157:H7 infections. Although antibiotics are not the recommended treatment for E. coli O157:H7 infections, antibiotics are often given because of the symptoms displayed in the patient and because some doctors believe antibiotics will help. Antibiotic-resistant strains of E. coli O157:H7 have been identified in animals, food, and humans, and the emergence of antibiotic resistance in E. coli O157:H7 is of concern to scientists because laboratory studies have demonstrated that organisms may exchange genes, including the gene that allows an organism to resist an antibiotic. Comments From the Department of Health and Human Services GAO’s Comments 1. 
We recognize the complexity of antimicrobial resistance and have reviewed the considerable body of research on the human health implications of the agricultural use of antibiotics. However, this report is not intended to be a complete technical assessment of the public health issues surrounding the agricultural use of antibiotics. Rather, it provides information on agricultural use and the implications of that use for human health, federal roles and responsibilities regarding the use of antibiotics in agriculture, and the issues surrounding the debate over whether to further regulate or restrict agricultural use. With regard to this debate, we present the many divergent, sometimes conflicting, viewpoints. For a more technical discussion of this complex public health issue, with citations to several specific research papers, see Antimicrobial Resistance: Data to Assess Public Health Threat From Resistant Bacteria Are Limited (GAO/HEHS/NSIAD/RCED-99-132, Apr. 28, 1999). 2. The report acknowledges that the factors that contribute to antibiotic resistance include the nature of pathogens, environmental pressures, and the use of antibiotics in human medicine and in agriculture. The report also discusses three antibiotic-resistant foodborne infections linked to the use of antibiotics in food-producing animals. 3. It was not our intent to suggest that a major scientific study should be undertaken to quantify agriculture’s contribution to the resistance problem relative to other factors. However, the report does recognize that there is not consensus on agriculture’s role. Indeed, the U.S. Department of Agriculture (USDA) does not believe there is consensus among experts that research has linked the use of antibiotics in agriculture to the emergence of resistant strains of Salmonella, Campylobacter, and E. coli in humans. 
We revised the report to clarify the Department of Health and Human Services’ (HHS) positions that research has established that the use of antimicrobials in agriculture contributes to resistant foodborne pathogens and that there is a pressing need to promote the more prudent use of antibiotics in each setting. 4. HHS notes that growth promotants deserve careful scrutiny but that a simple ban on growth promotants would not address all uses of antibiotics in agriculture. HHS states that the Food and Drug Administration’s (FDA) proposal, Proposed Framework for Evaluating and Assuring the Human Safety of the Microbial Effects of Antimicrobial New Animal Drugs Intended for Use in Food-Producing Animals, will address all uses of antibiotics in agriculture. While the framework is an important step forward, it does not include specific goals and time frames for such assessments to help ensure that needed evaluations occur in a timely manner. Moreover, the framework will be applied to currently approved antibiotics—including currently used growth promotants—only to the extent resources allow. We revised the report to clarify HHS’ position on the issue of antibiotic use in food-producing animals and to more fully describe FDA’s proposal. 5. The report text discusses the National Antimicrobial Resistance Monitoring System-Enteric Bacteria program, and appendix II discusses the emergence of multidrug-resistant strains of foodborne diseases. 6. HHS notes that the draft report did not mention that the lack of detailed animal drug use information is a barrier to advancing scientific discussion on the adverse human health consequences of antibiotic use in agriculture. HHS states that the implementation of FDA’s framework would obtain these data. We have revised the report to acknowledge that data are not available on the quantities of specific antibiotics used in agriculture and the purposes for which they are used. 
Our recommendation directs HHS and USDA to identify data gaps as part of a plan for evaluating the risks and benefits of existing and future uses of antibiotics in agriculture. As stated previously, however, we do not agree that the implementation of FDA’s framework would obtain these data in a timely fashion for new antibiotic uses or, necessarily, at any time for existing uses. 7. As our report states, only a few studies, primarily conducted in Europe, have examined agriculture’s contribution to the development of resistance in nonfoodborne human pathogens. We believe our report presents a balanced perspective with respect to the positions of industry, researchers, and federal agencies. However, in recognition of the different perspectives on the issue, we modified the recommendation to focus on the debate over the need to further regulate or restrict the agricultural use of antibiotics. 8. While our report does not discuss in detail the transfer of resistance from nonpathogenic organisms to human pathogens, which, as HHS points out, is a difficult and unresolved issue, it does discuss the development of resistance from other than direct pathogen transfer and the fact that laboratory studies have demonstrated that organisms can exchange genes, including the gene that allows resistance. 9. We revised the report to include the data on the extent to which Salmonella and Campylobacter pose a threat to humans. 10. HHS also pointed out that the public health community is concerned not only with growth promotion uses of antibiotics in agriculture but also with disease treatment and prevention uses, which “can be significant contributors to the pool of resistant microorganisms that enter the food chain” and often involve “critical drugs of last resort in treating a variety of human infections.” It was not our intent to suggest otherwise. 
Our report discusses several antibiotics of importance to human medicine that have been approved for use on animals, including fluoroquinolones, which FDA has recently approved for disease treatment on poultry and cattle. We included this comment in the Agency Comments section of the report. 11. Finally, with regard to our recommendation, HHS pointed out that under FDA’s proposed framework, applicants would have to conduct tests to determine new animal drugs’ potential for inducing resistance. HHS also stated that the framework would allow FDA to withdraw already marketed antibiotics. As we noted earlier, the FDA framework is an important step, especially for developing data on antibiotic use; however, the proposal states that currently approved antibiotics and their uses will be assessed only to the extent resources allow. Moreover, without a specific plan, goals, time frames, and the identification of needed resources for such assessments, human health concerns that were raised more than two decades ago may remain unanswered. Major Contributors to This Report Robert E. Robertson, Associate Director Erin Lansburgh, Assistant Director Stuart Ryba, Evaluator-in-Charge Natalie Herzog Jerry Seigler Shannon Bondi The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on antibiotic resistance issues that may stem from the use of antibiotics in agriculture, focusing on the: (1) use of antibiotics in agriculture and the implications of that use for human health; (2) federal roles and responsibilities for overseeing the use of antibiotics in agriculture; and (3) issues surrounding the debate over whether to further regulate or restrict the use of antibiotics in agriculture. GAO noted that: (1) antibiotics are used in agriculture to treat and prevent diseases in animals and in food plants and as a feed additive to improve the growth rate in animals; (2) data are not available on the quantities of specific antibiotics used in agriculture and the purposes for which they are used; (3) research has linked the use of antibiotics in agriculture to the emergence of antibiotic-resistant strains of disease-causing bacteria; (4) although the ill effects of these foodborne pathogens are generally mild to moderate, each year several thousand persons have severe illnesses resulting in hundreds of deaths; (5) in addition to the direct transfer of antibiotic-resistant organisms through animal products, some research suggests that the use of antibiotics in food animals may reduce the effectiveness of related antibiotics when used to treat humans; (6) approving antibiotics and setting allowable levels for antibiotic residues in food products is determined by the Food and Drug Administration (FDA) for animals and the Environmental Protection Agency for food plants; (7) testing for antibiotic levels in foods is performed by the Food Safety and Inspection Service for meat and poultry and by FDA for eggs, milk, and food plants; (8) monitoring the development of resistance to antibiotics in humans is conducted under a program run jointly by the Department of Agriculture (USDA), FDA and the Centers for Disease Control and Prevention; (9) the debate over whether to further regulate or restrict 
the use of antibiotics in animals and plants centers around the risk their use may pose to human health relative to their benefits to agriculture; (10) this concern has prompted several European countries to ban the use in animal feed of four antibiotics that are considered very important in treating humans; (11) beef, pork, and poultry producers and pharmaceutical manufacturers believe agricultural use is only one potential contributor to antibiotic resistance in humans; they claim that research does not warrant restricting antibiotic use in agriculture; (12) USDA believes that more research is needed before decisions are made regarding the further regulation or restriction of antibiotic use in food animals; (13) the Department of Health and Human Services believes that based on the scientific evidence, steps are needed now, not at some time in the future, to decrease such use; and (14) FDA's recently proposed framework for evaluating the safety of antibiotics for use in food-producing animals does not include specific timeframes for reevaluating approved antibiotics.
Background In recognition of the lack of manufacturing knowledge at key decision points and the need to develop more affordable weapon systems, DOD made recent changes to its policy. In 2008, the department made constructive changes to its policy instruction on operation of the defense acquisition system. It also developed MRLs as a measure that could strengthen the way the department manages and develops manufacturing-intensive systems. In 2004, the Joint Defense Manufacturing Technology Panel sponsored a joint defense and industry working group to design and develop MRLs for programs across DOD. In May 2005, MRLs were first introduced to the defense community in DOD’s Technology Readiness Assessment Deskbook for science and technology and acquisition managers to consider. As new manufacturing readiness criteria, MRLs are a measurement scale designed to provide a common metric and vocabulary for assessing manufacturing maturity and risk. MRL assessments identify the risks and manufacturing readiness of a particular technology, manufacturing process, weapon system, subsystem, or element of a legacy program at key milestones throughout the acquisition life cycle. There are 10 basic MRLs designed to be roughly congruent with comparable technology readiness levels for ease of use and understanding. Table 1 shows the MRLs and basic definitions (see appendix II for the detailed MRL definitions). The working group also developed a set of elements called “threads” to provide acquisition managers and those conducting assessments an understanding of the manufacturing risk areas (see table 2). For these threads, desired progress is defined for each MRL, to provide an understanding of risks as readiness levels increase from one MRL to the next. Conceptually, these threads are manufacturing elements that are essential to programs as they plan, prepare for, and manage the activities necessary to develop a product. 
For example, the materials thread requires an assessment of potential supplier capability by MRL 3 and an assessment of critical first-tier suppliers by MRL 7. Likewise, the manufacturing personnel thread calls for identifying new manufacturing skills by MRL 3 and identifying manufacturing workforce requirements for the pilot line by MRL 7. As shown, each basic thread (risk area) has a description and general requirements for assessing risks for that thread. The working group further decomposed these MRL threads into subthreads to provide users a detailed understanding of the various kinds of manufacturing risks. See appendix III for a detailed breakdown of these threads (risk areas) for each MRL. DOD’s Long-standing History of Manufacturing Problems GAO has conducted an extensive body of work that highlights many of the manufacturing-related problems that both DOD and its prime contractors have faced. In many respects, DOD has recognized the nature of these problems throughout the years and has taken a number of proactive steps to address them. GAO’s work has drawn on lessons learned and best practices to recommend ways for DOD to improve the way it develops and manufactures its weapon systems. Examples from our reports include the following: In 1996, GAO reported on the practices that world-class commercial organizations had adopted to more efficiently produce quality products, as a way to improve DOD’s quality assurance program. DOD was spending $1.5 billion extra per year on military-unique quality assurance requirements for major acquisitions and billions more on cost and schedule overruns to correct problems. GAO concluded that repeated unstable designs, poor process controls, and poor transition to production caused the manufacturing quality problems. While DOD had taken some actions, its culture was cited as the biggest reason for slow adoption and unimplemented recommendations. 
In 1998, GAO reported on best commercial practices to offer ways to improve the process DOD uses to manage suppliers engaged in developing and producing major weapon systems. In assessing defense contractors and two case studies of munitions programs, the report concluded that suppliers were critical in the amount of technological innovation they contribute to the final product. In 2002, GAO reported on how best practices could offer improvements to the way DOD develops new weapon systems, primarily the design and manufacturing aspects of the acquisition process. DOD’s record showed a history of taking longer and spending more than planned to develop and acquire weapon systems, which reduced its buying power. The report identified and recommended best practices for capturing and using design and manufacturing knowledge early, and new development processes that included high-level decision points and knowledge-based exit criteria before key decisions on production are made. Essentially, one of the high-level decision points has become what GAO commonly refers to as Knowledge Point 3—the point when a program has demonstrated that its manufacturing processes are mature. The report also recommended a best practice that includes a standard called the Process Capability Index (Cpk), a process performance measurement that quantifies how closely a product is running to its specification limits. The index indicates how well the process’s statistical performance meets its control limit requirements. In 2008, GAO reported on how DOD and its defense contractors can improve the quality of major weapon systems. We reported that if DOD continued to employ the same acquisition practices as it has in the past, the cost of designing and developing its systems could continue to exceed estimates by billions of dollars. Quality problems were identified as the cause for cost overruns, schedule delays, and reduced weapon-system availability. 
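The Cpk standard mentioned above has a well-known form: it compares the distance from the process mean to the nearer specification limit against three standard deviations of process output, so a higher index means the process fits more comfortably within its limits. A minimal sketch, using hypothetical measurement data:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process Capability Index: the distance from the process mean to
    the nearer specification limit, in units of three standard deviations."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Hypothetical machined-part diameters (mm) against spec limits of 9.9-10.1
parts = [10.01, 9.98, 10.02, 9.99, 10.00, 10.03, 9.97, 10.01]
print(round(cpk(parts, lsl=9.9, usl=10.1), 2))
```

A Cpk of 1.33 or higher is a common shorthand for a capable process; values near or below 1.0 indicate a process likely to produce out-of-specification parts.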
Like DOD prime contractors, leading commercial firms rely on many practices related to systems engineering, manufacturing, and supplier quality, but they were more disciplined and had institutionalized processes to ensure quality. Since 2003, GAO has issued a series of annual assessment reports on selected weapons programs, increasing from 77 to 96 programs reviewed. At $296 billion, the cumulative cost growth for DOD programs reported in 2009 was found to be higher than it had been five years earlier, and the overall performance of weapon system programs was still poor. Although the cost growth and the 22-month average delay in delivering initial capabilities were not attributed to manufacturing alone, the lack of production maturity was cited as one of three key knowledge areas contributing to the department’s cost growth, schedule delay, and performance problems. Revised Policy Incorporates Manufacturing Best Practices DOD’s December 2008 revision to its policy instruction on operation of the defense acquisition system incorporates a number of the best practices we identified in our previous work. The instruction covers the entire life cycle and considers manufacturing risks earlier in the acquisition life-cycle framework. In a November 2003 report on DOD’s May 2003 revision to its policy, we reported that much of the revised policy agrees with GAO’s extensive body of work and that of successful commercial firms. While we assessed DOD’s revised policy as providing a good framework for capturing knowledge about critical technologies, product design, and manufacturing processes, we reported in 2006 that acquisition officials were not effectively implementing the acquisition policy’s knowledge-based process. 
We reported that the effective implementation of policy was limited by the absence of effective controls that require compliance and specific criteria for clearly demonstrating that acceptable levels of knowledge about technology, design, and manufacturing have been attained at critical junctures before making further investments in a program. We concluded that without specific criteria—or standards against which a judgment or decision is quantifiably based—decision makers are permitted to make decisions on the basis of subjective judgment. The December 2008 revised policy instruction establishes target maturity criteria for measuring risks associated with manufacturing processes at milestone decision points. During the material solutions phase, prior to milestone A, the 2008 policy instruction requires the analysis of alternatives to assess “manufacturing feasibility.” During the technology development phase, prior to milestone B, the instruction states the following: Prototype systems or appropriate component-level prototyping shall be employed to “evaluate manufacturing processes.” A successful preliminary design review will “identify remaining design, integration, and manufacturing risks.” A program may exit the technology development phase when “the technology and manufacturing processes for that program or increment have been assessed and demonstrated in a relevant environment” and “manufacturing risks have been identified.” After milestone B, one of the purposes of the engineering and manufacturing development phase is to “develop an affordable and executable manufacturing process.” The instruction says that “the maturity of critical manufacturing processes” is to be described in a post-critical design review assessment; system capability and manufacturing process demonstration shall show “that system production can be supported by demonstrated manufacturing processes;” and the system capability and manufacturing process demonstration effort shall end, 
among other things, when “manufacturing processes have been effectively demonstrated in a pilot line environment, prior to milestone C.” Finally, at milestone C, the instruction establishes two entrance criteria for the production and deployment phase, which include “no significant manufacturing risks” and “manufacturing processes under control (if Milestone C is full-rate production).” Low-rate initial production follows in order to ensure an “adequate and efficient manufacturing capability.” In order to receive full-rate production approval, the following must be shown: 1. “demonstrated control of the manufacturing process,” 2. “the collection of statistical process control data,” and 3. “demonstrated control and capability of other critical processes.” Even with the updated policy instruction in place that includes guidance for most knowledge-based practices, inconsistent implementation has hindered DOD’s past efforts to reform its acquisition practices. For example, we reported in 2006 that DOD was not effectively implementing the knowledge-based process and evolutionary approach emphasized in its policy. While the policy outlined a specific knowledge-based process of concept refinement and technology development to help ensure a sound business case is developed before committing to a new development program, we found that almost 80 percent of the programs we reviewed were permitted to bypass this process. Manufacturing Problems Are Attributed to Several Factors during the Planning and Design Phases of Selected DOD Weapons Programs Defense acquisition programs continue to have problems manufacturing weapon systems. As a result, systems cost far more and take far longer to produce than estimated. Many programs authorized to enter production experienced billions of dollars in cost growth after the authorization—nearly two-thirds of those programs reported increases in average procurement unit costs. 
Several factors contribute to these issues during the planning and design phases. These include inattention to manufacturing during planning and design, poor supplier management, and the lack of a knowledgeable manufacturing workforce. Essentially, some of these programs moved into production without considering manufacturing risks earlier in development, which meant managers often did not address those risks until they had become problems, and also led to subsequent problems with supplier management, such as prime contractors conducting little oversight of suppliers. Some programs also had an inadequate workforce—in terms of insufficient knowledge and numbers—to effectively manage and oversee defense manufacturing efforts. Manufacturing Contributed to Growth in Cost and Delays in Schedule Defense acquisition programs continue to be troubled by unstable requirements, immature technology, and a lack of manufacturing knowledge early in design, resulting in more costly products that take longer to produce. Our 2009 annual assessment shows that total research and development costs were 42 percent higher than originally estimated. These higher costs reflect in part the learning that takes place as manufacturing processes are established and used to produce the first prototypes. Even programs that have been authorized to begin production have experienced substantial cost growth after the production decision. Production performance can be measured by examining cost growth as expressed in changes to average procurement unit cost. This represents the value DOD gets for the procurement dollars invested in a certain program and shows the net effect of procurement cost growth and quantity changes. Figure 1 shows the levels of average procurement unit-cost growth for selected major defense acquisition programs. 
As indicated in figure 1, nearly two-thirds of programs that entered production after 2000 reported more than a 5 percent increase in average unit cost growth, while 32 percent of programs reported average unit cost growth that ranged from 11 percent to more than 15 percent. One program reported a 25 percent increase in average procurement unit cost. Further, 42 percent of those programs experienced production cost increases when procured quantities decreased or remained the same. For example, the Black Hawk helicopter’s 2007 production estimate had no increase in quantities since 2005, yet its production cost increased $2.3 billion, and average procurement unit cost rose by 13 percent. The Joint Air-to-Surface Standoff Missile had an 8 percent quantity decrease since the 2004 production decision, but the production costs increased by $561 million and average procurement unit cost increased by 25 percent. As for schedule growth, DOD has continued to experience delays in delivering new or modified weapon systems to the warfighter. Over 50 percent of current programs in production have encountered some form of delay after the production decision, when manufacturing processes should be in control. Consequently, warfighters often must operate costly legacy systems longer than expected, find alternatives to fill capability gaps, or go without the capability altogether. The four DOD weapon systems we selected for in-depth review with known cost, schedule, and performance problems reported several key factors that contributed to manufacturing problems. These include inattention to manufacturing during planning and design, poor planning for supplier management, and lack of a knowledgeable manufacturing workforce. Capturing critical manufacturing knowledge during the planning and design phases before entering production helps to ensure that a weapon system will work as intended and can be manufactured efficiently to meet cost, schedule, and quality targets. 
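Average procurement unit cost is simply total procurement cost divided by quantity, so unit cost can rise even when quantities are flat, as in the Black Hawk case above, or when cost is flat but quantities are cut. A small illustration with hypothetical cost and quantity figures (not the actual program baselines):

```python
def apuc_growth(base_cost, base_qty, cur_cost, cur_qty):
    """Percent change in average procurement unit cost (total
    procurement cost / quantity) relative to the baseline estimate."""
    base_apuc = base_cost / base_qty
    cur_apuc = cur_cost / cur_qty
    return 100 * (cur_apuc - base_apuc) / base_apuc

# Cost grows $2.3 billion on an unchanged quantity: unit cost rises ~13.5%
print(round(apuc_growth(17.0, 1200, 19.3, 1200), 1))

# Unit cost can also rise when cost is flat but quantity is cut by 20%
print(round(apuc_growth(10.0, 100, 10.0, 80), 1))
```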
The programs in our review often lacked manufacturing knowledge at key decision points, which led to cost growth and schedule delays. For example, the Joint Air-to-Surface Standoff Missile program—an autonomous, air-to-ground missile designed to destroy high-value targets—experienced a critical unit-cost breach due to missile reliability problems not being addressed early in the design phase. Also, the Electromagnetic Aircraft Launch System—a new catapult technology being developed for the Navy’s newest class of aircraft carriers—had experienced problems manufacturing compatible materials, which resulted in cost growth and schedule delays and was the focus of recent congressional interest. Figure 2 summarizes contributing factors for manufacturing problems experienced by the four DOD weapon systems. As indicated, most of the programs had more than one major problem related to manufacturing. These issues illustrate the major problems we discussed with defense and contractor officials, but do not encompass all the manufacturing problems experienced by the programs. For example, a recent Air Force study reports that manufacturing and quality assurance requirements are not included in the contracts to develop weapon systems, which could affect the contractor’s approach to manufacturing. Officials from the Defense Contract Management Agency—a DOD component that works directly with defense suppliers to ensure that supplies and services are delivered on time, at projected cost, and meet performance requirements—also reported similar contract issues that could affect contractor performance on manufacturing. Manufacturing Was Overlooked during Development Each of the four programs we examined did not give manufacturing strong consideration during the early planning and design phases. 
Programs we reviewed moved into production largely without considering manufacturing risks earlier in the acquisition process, as demonstrated by the experiences of the Exoatmospheric Kill Vehicle and the H-1 helicopter upgrade program. The Exoatmospheric Kill Vehicle was designed to intercept and destroy high-speed ballistic missile warheads in mid-flight, while the H-1 upgrade program converts the attack helicopter and the utility helicopter to the AH-1Z and UH-1Y configurations, respectively. The Exoatmospheric Kill Vehicle program was put on an accelerated development schedule in response to a directive to develop and deploy, at the earliest possible date, ballistic missile defense drawing on the best technologies available. According to the contractor, it bypassed some of its normal development-review processes to accelerate delivery of the vehicle, which also resulted in a high acceptance of manufacturing risks without sufficient identification and management of risk-mitigation plans. For example, the program went into production without completing qualification testing. In addition, the contractor continued to incorporate design changes while supplier production was ongoing, resulting in rework and disruption to the production line. Early lots of kill vehicles were built manually by engineers in the absence of automated production processes, which caused dissimilarities among vehicles in the fleet and will make refurbishments difficult. For several reasons, the H-1 helicopter upgrade program did not include manufacturing in the early phases of planning and also proceeded to production before its design was mature, according to the contractor. First, the program underestimated the complexity of updating and remanufacturing the aircraft without historical drawings. The emphasis was placed on minimizing development costs, and resources were not available to assess manufacturing challenges early in the redesign process. 
Furthermore, the program started low-rate production before completing operational evaluation testing. As a result, the problems uncovered during testing had to be corrected on aircraft that were on the assembly line. Also, constant change orders and factory bottlenecks, among other problems, affected program costs and schedules. The schedule pressure allowed little opportunity to remedy the manufacturing problems, resulting in more complicated and expensive fixes. Ultimately the schedule slowed and the costs increased to the point that the program abandoned the remanufacturing upgrade and, instead, opted to purchase newly manufactured aircraft cabins for the UH-1Y configuration. Poor Planning Led to Supplier Problems Inattention to manufacturing during planning and design led to subsequent problems with supplier management in two major defense acquisition programs we reviewed. Specifically, the prime contractors did not give adequate attention to managing their suppliers. For example, program officials for the Joint Air-to-Surface Standoff Missile told us that the responsibility for manufacturing processes and discipline shifted in the 1990s from the government to the defense contractors. The government started to rely on the prime contractor to ensure quality and reliability, particularly with subtier suppliers. In this case, the program office told us that the prime contractor for the missile program relied on the subtier suppliers to self-report their capabilities and did not engage in effective oversight of their work, which led to defective parts. The program office recently recruited experts in manufacturing to help the prime contractor address their supplier problems more effectively. 
Lack of Manufacturing Knowledge Contributed to Problems Some DOD programs and prime contractors had an inadequate defense manufacturing workforce—both in terms of numbers and experience—to effectively manage and oversee manufacturing efforts, which resulted in schedule delays or cost inefficiencies. The manufacturing workforce includes occupations such as specialists in quality assurance, business, manufacturing engineering, industrial engineering, and production control. In many cases, the programs lacked manufacturing expertise early in development, which hindered the program’s ability to later manage manufacturing risks. For example, the contractor for the Electromagnetic Aircraft Launch System did not have sufficient systems-engineering personnel involved in the design to help it transition from development to production. As a result, the program encountered schedule delays and cost increases. DOD conducted a program assessment review, which led the program office and contractor to increase systems engineering staff. For the Exoatmospheric Kill Vehicle program, the contractor’s workforce and manufacturing processes could not readily undertake the rigors of production for a space-based capability, part of which must be manufactured in a clean-room environment, and all of which demands rigorous processes and procedures due to highly technical designs. The contractor’s hourly assembly personnel were trained to build tactical missiles on a high-rate production line and were not sufficiently trained in the quality-control standards required by clean-room manufacturing, such as carefully controlling foreign-object debris, specially maintaining the clean room, and using a partner in certain high-level tasks to ensure all steps are properly followed. These standards were not institutionalized, and the contractor eventually had to modify its facilities and production standards to correct the manufacturing problems. 
The facility had to be retooled and reconfigured late in development. The contractor also experienced high turnover in its workforce due to the increasing demands associated with working in a clean-room environment and working long hours. MRLs Have Been Proposed to Improve the Way DOD Identifies and Manages Manufacturing Risk and Readiness The Joint Defense Manufacturing Technology Panel working group has proposed MRLs as new manufacturing readiness criteria that could improve weapon system outcomes by standardizing the way programs identify and manage manufacturing risks associated with developing and fielding advanced weapon systems. MRLs were first introduced to the defense community in DOD’s 2005 Technology Readiness Assessment Deskbook as an important activity for science and technology and acquisition managers to consider. An analysis by the working group shows that MRLs address many of the manufacturing issues not covered by DOD’s technical reviews, particularly reviews conducted in the early phases of acquisition. In their development, comprehensive efforts were undertaken to design and develop MRLs from DOD as well as industry resources. For example, the working group formulated MRLs from a manufacturing knowledge base of defense, industry, and academia to address two key areas of risk—immature product technologies and immature manufacturing capability. The working group also designed MRLs as a structured and disciplined approach for the way manufacturing risk and readiness is expected to be identified and assessed. The working group also developed a set of tools that include a deskbook, checklist, and a website to help managers and users apply MRLs and conduct assessments. In addition, the Army and Air Force report that their use of MRLs on pilot programs contributed to substantial cost benefits on a variety of programs, including major acquisition programs. 
MRLs Were Developed from Knowledge-Based Resources on Manufacturing To develop MRLs, the working group conducted comprehensive sessions with industry participants to ensure the metrics and vocabulary for assessing manufacturing readiness would be an all-inclusive body of knowledge. Officials stated that a mature set of manufacturing knowledge resources already existed, but it was scattered and not consistently applied in a disciplined way that aligned with the DOD acquisition life-cycle framework. In their formulation, MRLs were developed from an extensive body of manufacturing knowledge that included, but was not limited to, the following defense, industry, and academic sources: DOD Instruction 5000.02, Operation of the Defense Acquisition System (Dec. 8, 2008); the Navy best-practices manual for using templates on design; the Air Force manufacturing development guide; military standards and specifications; and the Malcolm Baldrige quality award criteria. Other standards and technical sources were obtained from the Institute of Electrical and Electronics Engineers, the International Standards Organization on quality management systems, automotive industry quality standards, and the supplier model from the Massachusetts Institute of Technology. Analysis Shows MRLs Address Manufacturing Gaps in DOD’s Technical Reviews An analysis conducted by the working group shows that MRLs address many of the manufacturing gaps identified in several of DOD’s technical reviews that provide program oversight and determine how well programs are meeting expected goals, particularly the reviews conducted in the early acquisition phases. According to the working group, addressing these manufacturing gaps is fundamental to improving the way programs plan, design, and prepare for manufacturing. 
For example, the working group’s analysis shows that DOD’s current systems-engineering technical review checklist used for preliminary design reviews has only 27 of 759 total questions that address core manufacturing concerns, whereas the MRL 6 assessment checklist for this juncture has 169 core manufacturing questions. More importantly, the technical review checklist did not address key manufacturing disciplines in the areas of program management, systems engineering, requirements management, risk management, and program schedule. Similarly, the technical review checklist used for critical design reviews has only 22 of 824 total questions that deal with core manufacturing, whereas the MRL 7 assessment checklist for this juncture has 162 core questions. Core manufacturing disciplines were not addressed in the specific areas of management metrics, manufacturing planning, requirements management, system verification, and other areas. Finally, DOD’s technical review checklist used for production readiness reviews has 194 of 613 total questions that deal with core manufacturing. While the MRL 8 assessment checklist has 14 fewer core questions on manufacturing at this juncture, the working group stated these core manufacturing questions are addressed earlier in the acquisition framework, reflecting commercial best practices, in contrast to DOD’s current practice. Draft Deskbook Explains MRL Application and Assessments The draft MRL deskbook is a detailed instructional resource on how to apply MRLs and conduct assessments of manufacturing risk and readiness, such as how to structure and apply evaluations to a technology, component, manufacturing process, weapon system, or subsystem using the MRL definitions. 
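The question counts cited by the working group translate into small manufacturing shares for the early design reviews; the percentages can be computed directly from the figures above:

```python
# (core manufacturing questions, total questions) per technical review
# checklist, per the working group's analysis
reviews = {
    "Preliminary design review": (27, 759),
    "Critical design review": (22, 824),
    "Production readiness review": (194, 613),
}

for name, (mfg, total) in reviews.items():
    share = 100 * mfg / total
    print(f"{name}: {share:.1f}% of questions are manufacturing-related")
```

Roughly 4 percent of preliminary design review questions and under 3 percent of critical design review questions are manufacturing-related, compared with the far larger manufacturing content of the MRL 6 and MRL 7 assessment checklists.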
It also demonstrates how assessments should be carried out at various phases by the managers of science and technology projects and technology demonstration projects intending to transition directly to the acquisition community, as well as acquisition program managers and the people involved in conducting assessments. According to the working group, MRLs can not only be used to improve how DOD manages and communicates manufacturing risk and readiness, but can also give decision makers and managers better visibility into program risks. For example, a variety of manufacturing status and risk evaluations have been performed for years as part of defense acquisition programs in a variety of forms—for example, production readiness reviews and manufacturing management/production capability reviews. However, these structured and managed reviews do not use a uniform metric to measure and communicate manufacturing risk and readiness. MRLs, when used in combination with technology readiness levels, are expected to address two key risk areas—immature product technologies and immature manufacturing capability. The draft deskbook says that it is common for manufacturing readiness to be paced by technology readiness or design stability, and that it is not until the product technology and product design are stable that manufacturing processes will be able to mature. MRLs can also be used to define manufacturing readiness and risk at the system or subsystem level. For these reasons, the MRL definitions were designed to include a target level of technology readiness as a prerequisite for each level of manufacturing readiness. Figure 3 shows the relationship of MRLs to system milestones and technology readiness levels in the defense acquisition life-cycle framework. 
MRL Assessments Provide Basis for Identifying, Planning, and Managing Program Risks MRL assessments are intended to leverage better manufacturing knowledge, enabling managers to be aware of problems or risks early in development, when they are easier to resolve and before significant investments are made. In turn, these risks can be addressed earlier in the life cycle when costs are lower. For example, the ability to transition technology smoothly and efficiently from the laboratories, onto the factory floor, and into the field is a critical enabler for evolutionary acquisition. Assessments can be applied to a technology, manufacturing process, weapon system, or subsystem using the definitions as a standard. As part of the assessment, a comparison is made between the actual MRLs and the target MRL levels. The difference between the two identifies the risks and forms the basis for assisting managers to develop a plan—called a manufacturing maturation plan—to remove or reduce them. Risks should be identified throughout the life cycle and, when targets are not met, the plan updated to ensure the appropriate MRL will be achieved at the next decision point. The manufacturing maturation plan identifies manufacturing risks and provides a plan for mitigating each risk area throughout the duration of the technology or product-development program. The draft MRL deskbook says every assessment of manufacturing readiness should have an associated plan for areas where the MRL has not achieved its target level. The deskbook requires a manufacturing maturation plan to include the most essential items in planning for the maturity of an element of assessment that is below its target MRL. These include a statement of the problem that describes areas where manufacturing readiness falls short of the target MRLs, including key factors and driving issues, solution options and consequences of each option, and a maturation plan with a schedule and funding breakout. 
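The core of an assessment, comparing assessed maturity against the target MRL and flagging shortfalls for the maturation plan, can be sketched as follows. The thread names are drawn from the working group's framework, but the target and assessed levels are illustrative, not from any actual assessment:

```python
TARGET_MRL = 8  # illustrative target for an upcoming decision point

# Thread (risk area) -> assessed MRL (illustrative values)
assessed = {
    "Design": 8,
    "Materials": 6,
    "Process capability and control": 7,
    "Manufacturing personnel": 8,
}

# Threads below the target identify risks and drive entries in the
# manufacturing maturation plan
gaps = {t: TARGET_MRL - lvl for t, lvl in assessed.items() if lvl < TARGET_MRL}
for thread, shortfall in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{thread}: {shortfall} level(s) below target; maturation plan entry required")
```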
Other information should include the status of funding to execute the manufacturing plan, specific actions to be taken and by whom, and the MRL to be achieved and when it will be achieved. MRL Pilot Programs Show Positive Benefits Army and Air Force programs have pilot-tested MRLs on science and technology and some major acquisition programs in an effort to increase manufacturing readiness and maturity to the higher levels appropriate to the phase of development. Both services performed MRL assessments on selected pilot programs to address manufacturing risks and assess technology transition. The Army reports numerous benefits from the use of MRLs, such as manufacturing efficiencies, improved labor utilization, and cost benefits. Similarly, the Air Force has used MRLs to manage its manufacturing risks associated with new technologies, yielding tangible benefits. While MRLs cannot take full credit for all benefits derived in the pilot programs, officials noted they are a good way to manage, mitigate, and communicate readiness and risks—between science and technology, acquisition, the user, and the system developer—early and throughout the acquisition process to avoid major consequences from manufacturing-related problems. These programs provide insight on how the acquisition community can utilize MRLs within weapon system programs. Army In 2004, the Army’s Aviation and Missile Research, Development and Engineering Center began applying MRLs to various technologies in concept development, including those technologies transitioning to engineering and manufacturing development. Officials stated that without cost and manufacturing readiness planning, science and technology programs face certain barriers to transition, resulting in: (1) high unit production cost caused by a focus on technology without regard to affordability; and (2) manufacturing problems caused by design complexity that makes a technology infeasible to manufacture. 
For example, the Army has applied MRLs to many programs, including warfighter-protection materials, Micro-Electro-Mechanical Systems, embedded sensors, and helicopter cabin structures. The warfighter-protection program—the next generation of helmets and body gear—reported that it was able to reduce scrap by 60 percent and touch labor by 20 to 40 percent. On programs where cost benefits could be roughly calculated, the Army believes that MRLs, among other improvement initiatives, contributed to the $426 million in benefits on seven programs. MRLs were also used as a metric in the Technology Transition Agreement to communicate manufacturing maturity and facilitate a smooth transition to the acquisition community. Air Force Air Force officials we met with discussed using MRLs to assess and identify gaps and understand risks in manufacturing maturity that would delay technology transition into an advanced systems development program or a fielded system upgrade. The Air Force has conducted several MRL assessments on advanced technology demonstrations and major defense acquisition programs, including the MQ-9 Reaper Unmanned Aircraft, Joint Strike Fighter, Advanced Medium-Range Air-to-Air Missile, X-band thin radar array, and Sensor Hardening for Tactical Systems. Officials reported that the use of MRLs has contributed to millions of dollars in cost avoidance, increased production rates, and accelerated technology transition. For example, the Air Force reported realizing $65 million in savings by addressing problems with a costly manual drilling process. MRLs were used to raise new drilling technology from MRL 4 to MRL 9, achieving a unit-cost savings of $17,000 per aircraft from reduced tooling, manpower, floor space usage, and time.
Because of the success of MRL assessments on advanced technology programs, the Assistant Secretary of the Air Force for Acquisition directed the program office to perform MRL assessments on key MQ-9 Reaper manufacturing processes and technologies. The MQ-9 Reaper is an unmanned aerial vehicle designed to provide a ground attack capability during reconnaissance and surveillance missions. Officials stated that the MRL assessment results have (1) identified five areas that needed review prior to a milestone C production decision; (2) identified two risks to full-rate production, for which mitigations are in progress; and (3) provided evidence to support the contractor's ability to meet the production goal of two aircraft per month. To ensure that manufacturing requirements are enforced, officials have developed policy for program managers to assess manufacturing readiness at key decision points. To support that policy, the Air Force has developed training for integrated product teams to execute the manufacturing readiness assessments. Also, in August 2009, the Air Force Institute of Technology established a Manufacturing Readiness Assessment course to provide training for the assessments within the Air Force; the course is currently open to all services and industry. DOD's Proposed MRLs Embody Many Best Practices of Leading Commercial Firms To successfully develop and manufacture their products, the commercial firms we visited used a disciplined, gated process that emphasized manufacturing criteria early and throughout the product's development. To measure manufacturing maturity, these firms developed processes that give manufacturing readiness and producibility primary importance throughout the product-development process, focusing on producing a product, not developing a technology. The goal is business profitability, and manufacturing maturity is important to this process from the earliest stages.
The best practices they employed were focused on gathering a sufficient amount of knowledge about their products' producibility in order to lower manufacturing risks and included stringent manufacturing readiness criteria to measure whether the product was mature enough to move forward in its development. In most respects, these criteria are similar to DOD's proposed MRLs. For example, as with MRLs, commercial firms assess producibility at each gate using clearly defined manufacturing readiness criteria, gain knowledge about manufacturing early, demonstrate manufacturing processes in a production-relevant environment, and emphasize the importance of effective supply-chain management. Essentially, commercial firms emphasize these criteria in order to maximize their understanding of manufacturing issues and to mitigate manufacturing risks that could affect business profitability or schedule goals for getting the product to market. DOD's MRLs were designed to mitigate similar manufacturing risks. However, the difference is that the commercial firms we visited required that their manufacturing processes be in control prior to low-rate production, whereas DOD's proposed MRL criteria do not require as early control of the manufacturing process. DOD's MRLs Are Similar to Manufacturing Criteria Used by Leading Firms Leading commercial firms use manufacturing readiness criteria, similar to DOD's MRLs, to assess the producibility of a system, gathering knowledge about the producibility of a product and the maturity of the manufacturing process. These criteria are applied early, even before a product formally enters into development, to identify and manage manufacturing risks and gaps. Additional manufacturing readiness criteria are applied through all the stages of a product's development and production until the product is ready for commercial release.
The firms we visited used manufacturing readiness criteria to measure the readiness of a product or material both to enter development and to proceed through the necessary gates. Table 3 below shows examples of manufacturing readiness criteria that are common to both the MRLs and the commercial criteria, to illustrate their similarities. Both emphasized identifying risks and developing plans to mitigate these risks, setting realistic cost goals, and proving out manufacturing processes, material, and products. Best Practice: Commercial Companies Emphasize Manufacturing Criteria Early and at Every Stage of the Product-Development Life Cycle Each commercial firm we visited developed a disciplined framework for product development that assessed producibility at each gate using clearly defined manufacturing-maturity criteria that are similar in many respects to DOD's MRLs. These include assessments of all aspects of manufacturing technology and risk, supply-chain issues, production facilities and tooling, and materials. Throughout the product-development life cycle, these criteria were applied to determine entry or exit into the next phase and led to informed decisions about whether the product was ready to move forward in its development. Manufacturing risks—such as those found in new manufacturing technologies or production facilities, new or revolutionary materials, or supply-chain issues—were assessed at each step. Deliverables, including risk-identification and mitigation plans, manufacturing plans, and funding and resource needs, were required at each gate in order to progress to the next product-development gate. Targets were developed for each gate, including cost, schedule, and yield goals, and the product team was responsible for either meeting these targets or having risk-mitigation plans in place if the targets had not been met.
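The gate-review logic just described can be sketched as a simple check: a product advances only when every deliverable required at the gate is complete and each target is either met or covered by a risk-mitigation plan. The gate names and deliverables below are hypothetical, for illustration only.

```python
# Illustrative sketch of a gated product-development check. Gate names,
# deliverables, and target handling are hypothetical, not any firm's
# actual checklist.

GATE_DELIVERABLES = {
    "preliminary_design": {"risk_mitigation_plan", "manufacturing_plan"},
    "production": {"risk_mitigation_plan", "manufacturing_plan",
                   "validated_processes", "qualified_suppliers"},
}

def may_advance(gate, completed, targets_met, mitigations_in_place):
    """A product may pass the gate only if all required deliverables are
    complete and targets are met or mitigation plans are in place."""
    deliverables_ok = GATE_DELIVERABLES[gate] <= completed
    targets_ok = targets_met or mitigations_in_place
    return deliverables_ok and targets_ok

print(may_advance("preliminary_design",
                  completed={"risk_mitigation_plan", "manufacturing_plan"},
                  targets_met=False,
                  mitigations_in_place=True))  # True
```

The point of the structure is that a missed target alone does not stop the product, but a missed target with no mitigation plan does, which mirrors the accountability the firms placed on the product team.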
GE Aviation exemplifies this disciplined process, using a highly structured gated process with detailed checklists for entry and exit into each phase. Like DOD's MRLs, these checklists contain increasingly detailed criteria—as they move from product start to production—for evaluating manufacturing technologies, cost drivers, materials, and supply-chain issues. Structured teams are brought together, tools are identified for execution and control of the process, and scheduled reviews are conducted with defined deliverables and checklists for each milestone. At each milestone, a rigorous review of the plans for the product's development, manufacturing, and risk-reduction efforts highlights issues before they become problems. The firm's goal is to have mature processes by production. To achieve this, it considers manufacturing readiness throughout. Each project's team is cross-functional and includes senior management, middle management, and the project team. This robust review process leverages expertise across GE Aviation and reduces risk. As with all the commercial firms we visited, GE Aviation requires strong management involvement at each gate, along with decision reviews to determine if enough knowledge is available and risk-mitigation plans are in place to proceed, or if actions to address and mitigate manufacturing risks can show a viable way forward. This allows management to resolve problems rather than pass them on to the next phase. At project start, which corresponds to MRL 4, the senior leadership team and product leadership team generate the product idea and assess the need for the project. They provide linkage between the business strategy and the project and develop the high-level project strategy. They identify any new product material or manufacturing processes and begin to develop a risk-reduction strategy for these issues.
By the time the product enters the preliminary design phase, senior leadership and project teams agree on the approach to the project. At this time, product directors must have a manufacturing plan in place in order to identify how they are going to achieve manufacturing readiness. Technical risks are identified in the manufacturing plan, as well as risk-abatement strategies for materials, manufacturing processes, and supply-chain risks. The plan has to show how issues will be successfully addressed by the detailed design phase, when leadership, the project team, and customers agree on the product to be delivered. If agreement is reached, they freeze the project plan and a decision is made to fund or terminate the project. Multidisciplinary Team / Manufacturing Experts In the commercial firms we visited, product-development teams were multidisciplinary, generally including management, manufacturing, quality, finance, suppliers, and engineering, with the necessary skills available to assess manufacturing readiness. Leading firms recognize the value of having a knowledgeable, well-trained, and skilled manufacturing engineering workforce involved in these multidisciplinary teams from the beginning and throughout the process. When Honeywell reorganized its aerospace business in 2005, it created an advanced manufacturing engineering organization to focus on manufacturing concerns in the earliest phases of new product-development programs. This organization consists of engineers who support various manufacturing disciplines across Honeywell. An important part of this advanced engineering organization is its technology group, which consists of a select number of technology fellows with extensive expertise in key manufacturing disciplines that touch nearly all the products Honeywell produces.
Honeywell retains highly skilled manufacturing expertise through this program and uses these experienced and knowledgeable manufacturing engineers to oversee each project's manufacturing assessments. Maturing Technology and Manufacturing Processes Commercial firms focus on maturing and validating technology and manufacturing processes before these are associated with a product and before entry into the gated process. They keep invention and unproven technologies in the technology base until their producibility at the scale needed can be proven. As an example, GE Healthcare's Gemstone scintillator underwent years of laboratory development on a small scale until GE Healthcare was satisfied that this material was ready to be used on its computed tomography (CT) scanners. Scintillators work by converting the X-rays in the CT scanner into visible light. GE Healthcare had been manufacturing its own scintillators since the late 1980s, but it needed an improved one that worked faster, for better image clarity and reduced radiation exposure. In 2001, the firm began basic composition development at the laboratory scale and narrowed down the alternatives to find the material with the best properties for this use. Even at this early stage, several years before the material would enter into GE Healthcare's gated process, the chemists engaged early with the manufacturing side. Before deciding on a solution, the firm determined that it could produce the material with sufficient yield and quality: even if a material had the best optical qualities, those qualities had to be balanced against its producibility. GE Healthcare tested thousands of alternatives to determine what could meet its technical requirements and be producible in the quantities needed. The firm narrowed the field to a garnet-based, rare-earth mineral composite and began producing it in small but increasing quantities.
After narrowing the field to this garnet-based compound, GE Healthcare began to determine its suppliers and what equipment was needed. The firm then began building its first pilot plant to produce the material and the scintillators, 2 years before the scintillator entered the firm's gated process. Figure 4 shows a photo of a CT scanner that uses the scintillator technology. Best Practice: Commercial Firms Have Adopted DOD's MRLs or Are Employing Similar Criteria in Their Product-Development Process Because leading commercial firms focus on producibility as a key element to successfully develop products, they use rigorous analysis methods to assess producibility and to identify and manage manufacturing risks and gaps. They apply these methods and tools early and throughout product development and use them to manage their product development on a daily basis. This commercial approach is a process in which quality is designed into a product and manufacturing processes are brought into statistical control to reduce defects, in contrast to practices employed by many defense contractors, where problems are identified and corrected after a product is produced. Some firms were familiar with the DOD MRL proposal and had taken steps to use the concepts at their own companies. Honeywell, for example, determined that early decisions were responsible for many production issues, so it developed analytical tools and models that support evaluations of manufacturing and risk throughout the product-development life cycle. In 2005, Honeywell engineers began looking for a way to measure manufacturing readiness and producibility, since they realized that early program decisions were driving many production issues and that by the time a product entered engineering and manufacturing development, it was too late to efficiently affect these issues.
Some of these issues include cost overruns, quality problems, low-yield issues, service and maintainability inefficiencies, and supply-chain problems. The output of the resulting tool, Honeywell's Manufacturing Maturity Model, is an MRL assessment score that can identify gaps or risks. For example, spreadsheets show the MRL scoring at a glance for each of the elements evaluated, pinpointing the gaps; risk worksheets quantify the risks; and action plans close the gaps and mitigate these risks. The tool links to the firm's gated process, providing entry and exit criteria and feedback on how to meet these criteria. The important information obtained is not necessarily what MRL level the item is at currently, but rather the robustness of the gap-closure plan to get to the desired level for the next gate. The application of the MRL tool helps identify what these key gaps are and what steps are required to close them. The three enabling producibility tools that provide support for this assessment and early input on the producibility risks are a Manufacturing Complexity Model, a Yield Prediction Model, and a Design for Manufacturing Scorecard: Manufacturing Complexity Model: This model identifies the design features that are driving manufacturing complexity into the design and enables scenarios to be evaluated to see what actions can be taken to simplify the design. Higher-complexity designs generally cost more and are higher risk, so the goal is to identify alternative design solutions that minimize complexity but still meet all the performance requirements. Yield Prediction Model: Honeywell has also developed yield prediction models based on statistical principles that correlate opportunities for defects in a design to established process capability benchmarks. This approach is used to predict yield during early design activities based on knowledge of the manufacturing processes used and the complexity of the design.
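A yield prediction of this kind can be sketched with a standard first-pass-yield calculation: count the defect opportunities in a design, apply a benchmark defect rate per opportunity, and estimate the probability that a unit has zero defects. The Poisson approximation and the numbers below are illustrative assumptions, not Honeywell's actual model.

```python
import math

# Minimal sketch of a first-pass-yield prediction: defect opportunities in
# a design, combined with a benchmark defect rate (defects per million
# opportunities, DPMO), give defects per unit (DPU); the Poisson
# approximation e^-DPU estimates the fraction of defect-free units.

def predicted_yield(opportunities, dpmo):
    """Predicted first-pass yield for a design with the given number of
    defect opportunities at a benchmark DPMO."""
    dpu = opportunities * dpmo / 1_000_000  # defects per unit
    return math.exp(-dpu)  # probability a unit has zero defects

# e.g., a design with 2,000 defect opportunities built on processes
# benchmarked at 50 DPMO
y = predicted_yield(2_000, 50)
print(f"{y:.1%}")  # about 90.5% first-pass yield
```

This is why complexity reduction matters: fewer defect opportunities in the design directly raise predicted yield before any hardware is built.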
Design for Manufacturing Scorecard analysis: The third Honeywell-developed tool is a design for manufacturing scorecard, which quantifies how well the design adheres to recommended best practices. The goal of using the tool is to provide feedback to the designers so that they see how their design decisions directly affect producibility and to help pinpoint improvement areas early in the process. Honeywell then conducts an MRL workshop, with a team led by an engineer from its Advanced Manufacturing Engineering group that includes the program manager and various subject-matter experts. This team reviews the tools and the MRL criteria to gain consensus on ratings for each category. Honeywell's Manufacturing Maturity Model, with input from these enabling tools, is used to develop an MRL score for the product. These assessments provide early producibility evaluations essential to mitigating design-driven risks. Since many producibility issues are driven by early design architecture decisions, these tools provide a way to analyze these decisions early and make the necessary performance and producibility trades through "virtual prototyping" long before actual hardware is built. The MRL score provides the necessary framework to ask the questions that such an analysis needs to answer. After the MRL assessment is complete and the MRL scores and risk-mitigation plans are approved, the MRL analysis and risk mitigations are incorporated into the daily schedule of the program office. The office continually monitors the MRL levels, updating them and working toward its risk-mitigation goals. Best Practice: Leading Firms Prove Out Manufacturing Tooling, Equipment, and Processes before Entry into Production Companies we visited spent years prior to production developing and proving out their manufacturing processes, including building test articles on pilot production facilities to perfect these processes.
This allowed them to perfect and validate these processes, eliminate waste, and scale up gradually to the required manufacturing level. They reduce errors and inefficiencies with the purpose of retiring manufacturing risks. GE Aviation officials told us that certain advanced manufacturing technologies achieve significant cost savings by getting the costs lower earlier in the process and decreasing cycle time for faster implementation. Examples of manufacturing techniques or processes that have made a big difference in cost, accuracy, and reliability include processes for drilling small shaped holes for turbine airfoils. GE Aviation's Turbine Airfoils Lean Lab provides a mock-up of a production facility or process, where such technologies and production processes can be tested to eliminate waste, scrap, and excess steps. The lab focuses on one part family or process, such as turbine airfoil shaped-hole manufacturing. The turbine airfoil is a part of the jet engine that generates power—it extracts horsepower from the high-temperature, high-speed combusted gases. Turbine airfoil blades require hundreds of cooling holes that help maintain part integrity at elevated operating temperatures. Traditionally, round holes were used, but the technology has evolved to compound-angle-shaped holes, which improve cooling effectiveness and reduce engine stress. These types of holes cannot be economically produced by traditional methods and require improved manufacturing techniques. Advanced laser drilling was determined to be feasible, and GE Aviation decided to initiate the program through the Lean Lab to ensure manufacturing readiness of the process. GE Aviation officials compared their processes in this case to DOD's MRLs. Prior to entering their gated process, they began making investments in potential technologies, including tooling (MRL 1-3). As the gated process began, risks were identified and risk-abatement plans were put in place (MRL 4).
GE Aviation then set up the Lean Lab to test the way the airfoil would actually be built. New processes were introduced that included new laser methods for hole drilling, improved robotic technology, machining, and grinding (MRL 5-6). The managers then ran the pilot production line for some time to manufacture these airfoils using actual production operators, to be confident that the process would translate to the production line. Adjustments were made to improve efficiency and were retested on the line until the managers were satisfied that they had worked out the best procedures. GE had tooling-design experts on the team at the Lean Lab to provide rapid part and tool manufacturing. Processes were brought into statistical control in order to take the complexity out of manufacturing, simplify the process, and reduce waste (MRL 7-8). The team then dismantled the production line at the Lean Lab, took it to the manufacturing facility, and set it up exactly the same, with no variations allowed (MRL 9). This seamless introduction of the new manufacturing technology, together with the lean principles developed in the lab, is expected to save many millions of dollars across GE Aviation on production of this part family alone. Figure 6 shows a photo of GE Aviation's Lean Lab setup. GE Healthcare provides another example of proving out manufacturing processes prior to production in its development of the Gemstone scintillator for use on its CT scanners. In 2003, the technology transitioned into the firm's formal gated process at product start-up, and the firm began a detailed and extensive development of the manufacturing process. The firm built a pilot plant for this purpose and began manufacturing the composite in increasing amounts. In this first pilot plant, it was able to process the materials in larger quantities than it had produced in the lab. GE Healthcare verified that it had the right technologies to minimize manufacturing risks.
In the laboratory environment, the firm had already answered the question “Can this composite be made with the desired properties?” and now asked “Can it be made with sufficient yield and quality to be manufactured in the desired amounts?” This early engagement with manufacturing enabled the firm to develop the process and reduce errors and inefficiencies with the purpose of reducing manufacturing risks. GE Healthcare then built a second pilot production plant that further increased the amount produced above that of the first pilot plant. The firm continued its focus on gaining knowledge early, but on a larger scale: building the pilot plants was important to perfecting the process and gaining knowledge about the material’s producibility. At this stage, which coincides with MRL 8, it eliminated most of the technical risks involved in manufacturing the material. The firm then began to build its full-scale facility, which was ready 18 months before product launch. When the full-scale production facility was completed, further scale-up of the material’s manufacturing became the focus. Changes to the design were made as needed to facilitate this. Any remaining manufacturing risks were eliminated prior to entry into the next stage, the product-validation stage. The Food and Drug Administration requires validation of finished medical devices. GE Healthcare told us that this means that all the equipment, processes, procedures, and factory workers are the same as will be used in actual production. Through use of the pilot plants to perfect the manufacturing of the scintillator material, GE Healthcare was able to produce production-representative material to satisfy this requirement. Best Practice: Commercial Firms Work Closely with Suppliers, Who Must Meet High Quality Standards for Parts and Supplies Commercial firms focus on developing strong relationships with their suppliers to ensure quality parts are provided in a timely manner. 
This begins with rigorous supplier-selection criteria to create a strong supplier base that provides quality parts. Similarly, DOD's MRL supply-chain thread focuses on supplier capability throughout the acquisition life cycle, from as early as pre–milestone A (MRL 3), where initial assessment of the supply chain begins, through MRL 5, where supply-chain sources have been identified, and continuing to MRL 8, where the supply chain should be stable and adequate to support low-rate production. Commercial firms generally have long-term relationships with these suppliers and can identify the supplier that is the best source of material or parts early, well before production begins. Leading commercial firms apply the same standards to these suppliers as they apply to their own manufacturing processes, such as ISO 9000 or other quality standards. Throughout product development and production, they establish effective communications with their suppliers so they can continually assess supplier performance. These firms work closely with their suppliers to retain these beneficial relationships, providing training where necessary and assistance if manufacturing problems arise. GE Healthcare suppliers have to be validated before production begins, but qualifying them starts in the design phase. Suppliers are expected to meet the ISO 9000 standards and the Food and Drug Administration's medical device standards, but GE Healthcare's own standards are more stringent than those. The supplier-qualification process ensures that suppliers meet GE Healthcare's requirements, have a quality system that provides the appropriate controls for the parts provided, and meet the regulations and requirements of multiple agencies, such as the Food and Drug Administration. Once a supplier is qualified, it becomes an approved supplier. GE Healthcare also audits most of its suppliers and looks for issues such as lapsed ISO 9000 certification or a failed review.
If it finds such problems, GE Healthcare will ask the supplier for a plan to correct the deficiency and will reaudit the supplier. GE Healthcare does annual risk assessments on its suppliers, based on data gathered during these audits, with sole-source or single-source suppliers considered high risk. If a supplier falls out of qualified status, GE Healthcare will do more frequent assessments. The firm constantly monitors its suppliers for quality and helps them reach the quality needed, but quality goals must be met. Siemens is a global company that employs about 70,000 people in the United States. We visited the Siemens Mobility Division, which builds light rail cars for public transit. Siemens places special emphasis on its supplier relationships, since it knows its suppliers can contract with other rail-car builders, as there is competition for suppliers in this market. By maintaining good relationships with its suppliers, it can continue to benefit from high-quality suppliers. Once it qualifies a supplier, it takes responsibility for keeping the supplier qualified, providing technical assistance if necessary to keep the supplier in its pipeline. Even as early as the bid phase of the contract, Siemens knows who it will need as suppliers and whether any particular supplier is new or challenged in some respect. Siemens applies a three-step supplier-qualification process to its suppliers. This starts with a supplier self-assessment. The firm's supplier-qualification personnel then visit the supplier's plant and evaluate the supplier on the same self-assessment form, to determine if the supplier will make it to the vendor-qualification list. Once a supplier is on the approved vendor-qualification list, Siemens does risk ratings for these vendors to be sure it can keep them on the qualified-vendor list.
The firm updates these assessments if the vendor situation changes, rating the vendor at low risk if it is fully qualified and working with it if some aspects are not qualified. Siemens takes responsibility for keeping approved suppliers qualified, since finding and qualifying new vendors can be time-consuming and risky. It tries not to overload any one supplier, because some of its suppliers are small or specialty operations, so it keeps a pool of qualified suppliers for as many parts or materials as it can. Commercial Firms Require That Manufacturing Processes Be in Control Earlier Than DOD's MRLs Although the firms we visited used manufacturing readiness criteria similar to DOD's proposed MRLs, one important difference we observed is that the commercial best practice is to have manufacturing processes in control prior to the production decision, while DOD's MRLs require manufacturing processes and procedures to be established and controlled at MRL 9, which occurs after milestone C, the decision that authorizes a program to enter low-rate initial production or its equivalent. Although DOD's MRLs incorporate many of the commercial manufacturing best practices into their manufacturing design and implementation criteria, the process-control criteria would be met too late in the process to achieve their full effect. DOD's MRL matrix states that low-rate production yield and rate targets should be achieved at MRL 9, after the production decision has been made. The commercial firms we talked to emphasized that production processes must be in control before this decision is made. They realize that they are unable to make predictions about production performance until the process is stable and defects are predictable. Not achieving process control could result in low quality, extensive rework and waste, and missed cost and schedule targets.
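One common statistical check of whether a process is in control is a process sigma (Z) level: the distance from the process mean to the nearest specification limit, measured in standard deviations. The function and measurement data below are an illustrative sketch, not any firm's actual procedure or data.

```python
import statistics

# Hypothetical sketch of a process-capability check of the kind firms use
# before a production decision: estimate a process sigma (Z) level from
# sample measurements and specification limits. Data and limits are
# illustrative.

def z_sigma(samples, lsl, usl):
    """Short-term Z level: distance from the sample mean to the nearest
    spec limit (lower lsl, upper usl), in standard deviations."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / sigma

# Hole-diameter measurements (mm) against spec limits of 0.48-0.52 mm
samples = [0.499, 0.501, 0.502, 0.498, 0.500, 0.497, 0.503, 0.500]
z = z_sigma(samples, lsl=0.48, usl=0.52)
print(round(z, 1))  # a high Z means defects would be rare at current variation
```

A low Z would signal that normal process variation can produce out-of-spec parts, which is exactly the condition the commercial firms insisted on resolving before committing to production.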
Firms established pilot lines to prove out production material, processes, and tooling, and worked to get processes under control before the system could move from the pilot line to the production line. Figure 7 shows a depiction of the commercial manufacturing process approach. The companies we visited used various approaches to build process capability and provide timely information on whether manufactured components, subsystems, or systems meet design specifications. For example, GE Aviation uses a statistical measurement, called a Z sigma level, to determine whether its processes have been brought under control or whether variations in its manufacturing process could affect the quality of the product. The product is not moved into production until the firm is satisfied that these processes are in control. Similarly, GE Healthcare's milestone process requires that a set of quality targets be part of the program and that those quality targets are met. Measures of process control vary from company to company, such as yield, scrap and rework rates, or sigma levels, but each firm looks carefully at those measures to ensure they carry no product-quality risk and uses this information to determine if the product is ready to be manufactured. Two Successful DOD Programs Used Criteria Similar to Commercial Firms Two DOD programs that had successful manufacturing outcomes, the Army's Lakota aircraft and the Missile Defense Agency's Standard Missile 3 Block IA, employed some of the same practices as leading commercial firms. Both used a type of manufacturing readiness criteria to evaluate whether the programs were ready to enter into production, and both programs focused on manufacturability as a key indicator of program success, using well-developed technology and a conservative approach in design and development.
The Lakota aircraft, a light utility helicopter that conducts noncombat missions, was a mature aircraft design when the Army entered into the contract with the European Aeronautic Defence and Space Company to purchase this commercially available helicopter. The program shows how careful attention to manufacturing readiness can reduce program risks. According to program office officials, the contractor was chosen in part because of its manufacturing track record, and it completed extensive planning, both internally and with its supplier base, to ensure on-time and reliable deliveries. Production planning and preparation were accomplished, including assessments of the manufacturing processes, capabilities, and facilities. These assessments determined that the program was low risk and ready for full-rate production. The Lakota is currently in full-rate production and has met its cost and schedule targets. The Standard Missile 3 is a ship-based, antiballistic missile used by the Aegis ballistic missile defense system. Similar to the Lakota, the system met its cost and schedule goals by using an incremental, low-risk approach. Like the commercial firms we visited, the program built knowledge through the use of a type of manufacturing readiness criteria, which allowed the early identification of risk and implementation of mitigation strategies. The Standard Missile 3 Block 1A was also on target for manufacturing cost and schedule and reported a lower cost per unit than was originally estimated on its production buys. As in the successful commercial firms we visited, manufacturing issues were considered very early in the design phase, leading to minimal changes in the program from flight test to production. MRLs Are Hampered by Lack of an Agencywide Policy and Manufacturing Workforce Concerns While acceptance of MRLs is growing within DOD and the defense industry, the services’ leadership appears to be resistant, and adoption efforts have been slow.
For example, obtaining agreement on a policy that would institutionalize MRLs defensewide has proven difficult. Concerns raised by the military-service policymakers have centered on when and how the MRL assessments would be used. Officials responsible for the draft policy have promoted MRLs as an initiative that can address the manufacturing element in the design and production of weapon systems, citing commercial best practices that employ similar methods and the benefits derived from pilot programs. While extensive efforts have been made to promote the benefits of MRLs in support of a revised draft policy, it has taken nearly 2 years to allay concerns, and the policy has not yet been approved. DOD is likely to face serious challenges even if agreement is reached to approve the policy, however, because the number of DOD’s production and manufacturing career-field employees has diminished, particularly within the Air Force. Although the services are at the beginning stages of revitalizing their production and manufacturing workforce, DOD currently does not have adequate in-house expertise with the requisite knowledge to assess manufacturing throughout DOD. Essentially, the military services and the Defense Contract Management Agency have identified knowledge and manpower gaps in their manufacturing workforce and believe that any initiative deploying MRLs defensewide could be hampered as a result. Draft Policy to Institutionalize MRLs Has Proven Difficult, but the DOD Community Is Starting to See Its Value While acceptance of MRLs is growing within DOD and the defense industry, the Army’s, Navy’s, and Air Force’s leadership appears to be resistant and adoption efforts have been slow. For example, a July 2008 draft MRL policy memorandum garnered disagreement among the military-service policymakers. The military services’ leadership agreed that MRLs provide value in the early acquisition phases but disagreed with the policy’s intent to formalize the process.
For example, the MRL policy memorandum stated that, on the basis of analyses by GAO and the Defense Science Board—as well as positive results on two Air Force pilot programs—acquisition category I programs should be assessed using the MRL scale. In particular, the draft policy included provisions that would require
programs at milestone B to be assessed at MRL 6 or beyond for all critical technologies;
programs at milestone C to be assessed at MRL 8 for all critical technologies;
procedures to be coordinated for including assessments of manufacturing readiness, in addition to technology readiness assessments, at milestones B and C; and
incorporation of guidance into training materials and guidebooks on best practices for addressing manufacturing from the earliest stages of development through production and sustainment.
In response to the draft policy, each of the military services issued memorandums in July 2008 to the Under Secretary of Defense (Acquisition, Technology and Logistics) or the Director, Defense Research and Engineering, stating that they supported MRLs and their use earlier in the acquisition process but saw limited value in doing formal assessments prior to milestone C. In general, the services had concerns about when and how MRL assessments would be used. More specifically, their concerns included the following:
evaluation results that could be used as the basis for go/no-go decisions;
a growing number of assessments being levied on acquisition programs; and
the resources required to prove out multiple production lines in a competitive prototyping environment during the technology-development phase.
Since 2008, officials responsible for the draft policy memorandum have been working to address concerns raised by the services. According to the working group, most concerns pointed to a need to clarify how the information is intended to be used by decision makers at key milestones, particularly at the earlier milestones.
According to the working group officials we interviewed, the intent is to inform decision makers with critical information—such as manufacturing risk and readiness measures, as appropriate to the phase of acquisition—so that knowledge-based decisions can be made earlier in the process to influence better outcomes in terms of cost and schedule in the later acquisition phases. Moreover, they note that leading commercial firms employ similar methods as a best practice and that MRL pilot programs have already demonstrated significant benefits. The revised MRL draft policy has not yet been approved. Officials familiar with the status of the draft policy stated that the leadership at one of the military services is still opposed to the idea of standardizing MRLs across DOD, and efforts to get approval have not yet occurred within the Office of the Director, Defense Research and Engineering. DOD experienced similar problems introducing technology readiness levels. There was opposition to the use of technology readiness levels, but they became a standard for programs to follow, and the standard that technologies should be demonstrated in a relevant environment became a statutory requirement for all major acquisition programs seeking to enter system development. Programs report benefits from using technology readiness levels, and some officials believe that MRLs could significantly reduce cost growth. For example, the Army and Air Force have reported that MRLs were a factor that contributed to benefits of hundreds of millions of dollars in reduced program costs, improved schedule, and better performance of products. MRL Acceptance Is Growing within DOD and Defense Industry A number of Army, Air Force, and Missile Defense Agency programs—as well as defense contractors—have embraced MRLs as the method for assessing manufacturing maturity, risk, and readiness.
For example, some Army commands have opted to use them on their science and technology efforts that have manufacturing elements and have developed a formal process for identifying them. Similarly, two of the three Air Force product centers under the materiel command—the Aeronautical Systems Center and the Air Armament Center—have recently issued local policies that mandate the use of MRLs. For example, in a policy memorandum by the Aeronautical Systems Center, dated October 13, 2009, all programs are now required to have manufacturing readiness assessments using MRLs prior to each major milestone review. The memorandum acknowledged that the transition to production has historically been challenging for many programs and that manufacturing assessments are a key tool to ensure that programs are ready to begin production. The Missile Defense Agency has included MRLs as part of its assessment criteria. In addition, senior missile defense manufacturing personnel have developed and conducted training on how to conduct these assessments. Similarly, a number of defense contractors have implemented MRLs as a discipline for identifying, managing, and communicating manufacturing risk and readiness. These contractors report a number of benefits from using MRLs, including reductions in program costs and improved production schedules. For example, in 2006, Raytheon participated in pilot MRL program assessments involving the Advanced Medium-Range Air-to-Air Missile and a portfolio of other programs and concluded that the approach makes good business sense as a way to lower risk. Raytheon claimed cost reductions of 30 percent or more could be achieved by using MRLs.
Raytheon officials state that the combination of technology and manufacturing assessment processes changes the culture by driving a collaborative partnership between programs, design, and manufacturing engineering earlier in the product-development life cycle, where maturity efforts can have the greatest effect on improving program affordability and predictability. As a result, Raytheon is deploying MRLs as a standard across the organization. Lockheed Martin is exploring ways to integrate MRLs within its existing review processes. As previously discussed, Honeywell adopted MRLs for use on both its defense and commercial products and developed several models as an analysis-based approach to quantify producibility risks. Manufacturing Workforce Knowledge and Manpower Gaps May Impede Implementation of MRLs The services are in the beginning stages of revitalizing their manufacturing workforce, largely in response to a February 2006 Defense Science Board task force report, “The Manufacturing Technology Program: A Key to Affordably Equipping the Future Force.” The report acknowledged that both the manufacturing expertise in the workforce and program funding have declined, thus eliminating much of the engineering and manufacturing talent across DOD and the industrial base. The report concluded that what was once a promising career field in the military services—with promotion paths, training, and professional development—has been systematically eliminated over the past few decades. Table 4 shows the decrease in the manufacturing career field across DOD from 2001 to 2007. As indicated, DOD’s manufacturing career workforce trends show an overall decline, with the Army and Air Force having had the biggest declines at 14 percent and 37 percent, respectively. According to a DCMA official, the agency experienced about a 30 percent decrease during the same time frame.
An Army official responsible for workforce planning activities noted, however, that there are no positions designated specifically for manufacturing, which makes it difficult to determine the true career workforce numbers in this category. Fewer experts mean that fewer people at both the working level and in leadership positions understand the processes involved in developing and manufacturing defense systems and their importance in producing high-quality and reliable systems. Further, fewer people are capable of conducting production-readiness reviews, evaluating industry’s work on programs, and staying abreast of industry research and development. According to a recent study, of major concern is that recent estimates show that 30 percent of the civilian manufacturing workforce—classified as production, quality, and manufacturing—is eligible for full retirement, and approximately 26 percent will become eligible for full retirement over the next 4 years. This means DOD will soon face an exodus of its manufacturing workforce and, accordingly, must plan for this eventuality. Although the services are at the beginning stages of revitalizing their production and manufacturing workforce, program officials believe they currently do not have the in-house expertise with the requisite knowledge to assess manufacturing if MRLs were to be mandated and deployed across DOD. For example, in interviews with career planning officials at the military services, most reported workforce challenges: manufacturing knowledge gaps, an insufficient number of personnel to conduct the work, or both. The Defense Contract Management Agency reported similar manufacturing knowledge gaps due to a lack of focus in this area, but it now has new leadership in place and is establishing plans to address these deficiencies.
Essentially, these knowledge deficiencies affect many areas, such as policy support for programs, the ability to develop an effective strategic plan and investment strategy for manufacturing technology, the ability to implement MRLs and conduct assessments, and the ability to effectively and affordably acquire high-quality weapon systems. Conclusions MRLs, resourced and used effectively, offer the potential for DOD to achieve substantial savings and efficiencies in developing and acquiring weapon systems. MRLs have been shown to work in reducing the cost and time for developing technologies and producing systems. Moreover, they have been shown to work on individual programs, and some Army commands and Air Force centers have adopted them. They are consistent with commercial best practices and have even been adopted by some defense firms. Yet, they have not been adopted DOD-wide. MRLs are being met with resistance similar to that experienced by technology readiness levels when they were first introduced. However, technology readiness levels are now widely accepted and used across DOD. While MRLs represent a common body of knowledge and reflect many of the practices used by leading commercial companies, there is room for improvement. Criteria used for getting manufacturing processes under control are still not specific enough, allowing demonstration of controls to occur too late in the process—after the milestone C decision authorizing low-rate initial production—whereas commercial firms require that critical processes be in control earlier. While MRLs represent positive change, unless these criteria are strengthened at the time a production decision is made, DOD will have missed an opportunity to reduce the risk of continued cost growth on acquisition programs. Moreover, use of MRLs would be enhanced by the development of analytical tools, such as those used by Honeywell, to support MRL assessments.
A serious concern is that DOD’s in-house manufacturing workforce has been diminishing for decades, which could hamper successful implementation of MRLs. Unless DOD develops long-range plans to build its in-house manufacturing workforce, it may not be able to realize the full potential of integrating manufacturing readiness levels into its processes. Recommendations for Executive Action To ensure that DOD is taking steps to strengthen and improve the producibility and manufacturing readiness of technologies, weapon systems, subsystems, or manufacturing processes, we recommend that the Secretary of Defense do the following:
Require the assessment of manufacturing readiness across DOD programs using consistent MRL criteria as the basis for measuring, assessing, reporting, and communicating manufacturing readiness and risk on science and technology transition projects and acquisition programs.
Direct the Office of the Director, Defense Research and Engineering to examine strengthening the MRL criteria related to the process capability and control of critical components and/or interfaces prior to milestone C, or equivalent, for the low-rate initial production decision.
Direct the Office of the Director, Defense Research and Engineering to assess the need for analytical models and tools to support MRL assessments.
Assess the adequacy of the manufacturing workforce knowledge and skills base across the military services and defense agencies and develop a plan to address current and future workforce gaps.
Agency Comments and Our Evaluation DOD provided us written comments on a draft of this report. DOD partially concurred with our recommendation to require the assessment of manufacturing readiness across DOD programs using MRL criteria and concurred with our other recommendations. Its comments can be found in appendix IV of this report.
In its comments, DOD partially concurred with the recommendation that DOD programs be required to assess manufacturing readiness using consistent MRL criteria as the basis for measuring, assessing, reporting, and communicating manufacturing readiness and risk on science and technology transition projects and acquisition programs. DOD cites Department of Defense Instruction 5000.02 as addressing manufacturing throughout the acquisition life cycle and, specifically, as establishing a framework to continually assess and mitigate manufacturing risks. In its remarks, DOD states that manufacturing readiness criteria will be tailored to programs and embedded into reviews and assessment templates, including systems engineering reviews, preliminary design reviews, and critical design reviews, as well as acquisition phase exit criteria. While we are encouraged by DOD’s plans to incorporate manufacturing readiness criteria into various assessments, we are concerned about the absence of any reference to MRLs, which identify specific benchmarks for each acquisition phase. It is unclear from DOD’s comments whether it intends to use a common definition of manufacturing readiness as acquisition phase exit criteria or whether the exit criteria will be decided on a case-by-case basis. While tailoring to individual programs is appropriate, tailoring must take place in the context of well-understood criteria for moving from phase to phase. A hallmark of the commercial programs we have looked at in this and other reviews is the reliance on disciplined processes for assessing readiness to proceed into more costly development and production phases. Firm criteria are needed to identify and address producibility and manufacturing risks on a timely basis, before they result in expensive production problems. We also received technical comments from DOD, which have been addressed in the report, as appropriate.
We are sending copies of this report to the Secretary of Defense; the Secretary of the Army; the Secretary of the Navy; the Secretary of the Air Force; the Director, Missile Defense Agency; the Director, Defense Contract Management Agency; and the Office of Management and Budget. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix V. Appendix I: Scope and Methodology This report compares the Department of Defense (DOD) and its large prime contractors’ manufacturing practices with those of leading commercial companies—with a focus on improving the manufacturing of defense weapon systems. Specifically, we assessed (1) the manufacturing problems experienced by DOD, (2) how manufacturing readiness levels (MRLs) can address DOD’s manufacturing problems, (3) how proposed MRLs compare to manufacturing best practices of leading commercial companies, and (4) the challenges and barriers to implementing MRLs at DOD. To identify the manufacturing problems experienced by DOD, we performed an aggregate analysis of DOD programs from our annual assessment database. We also conducted case studies of four programs with known cost and schedule problems to make observations on the types of problems DOD weapon systems may experience.
The programs we reviewed, along with the prime contractors responsible for developing the systems, are the following:
Joint Air-to-Surface Standoff Missile, an air-to-surface missile funded by the Air Force and developed by Lockheed Martin;
Exoatmospheric Kill Vehicle, a ballistic-missile interceptor funded by the Missile Defense Agency and developed by Raytheon;
Electromagnetic Aircraft Launch System, a launch system for aircraft carriers funded by the Navy and developed by General Atomics; and
H-1 helicopter upgrade, tactical utility and attack helicopters funded by the Navy and developed by Bell Helicopter.
To evaluate the four DOD weapon programs, we examined program documentation, such as acquisition decision memos and production readiness reviews, and held discussions with manufacturing and systems engineering officials from DOD program offices, the prime contractors, and the Defense Contract Management Agency. Based on the information gathered through interviews conducted and documentation synthesized, we identified commonalities among the case studies. To determine how MRLs can address the manufacturing problems experienced by defense programs, we conducted interviews with officials from the Office of the Secretary of Defense; the Office of the Director, Defense Research and Engineering; the Joint Defense Manufacturing Technology Panel working group; the National Center for Advanced Technologies; the National Defense Industrial Association; and the Defense Acquisition University on their observations on MRLs. We also reviewed the MRL deskbook, matrix (risk areas), analyses, and training materials. We also conducted interviews with Army, Navy, and Air Force officials who were involved with or familiar with the pilot tests of MRLs on various programs.
The pilot programs we examined at the military services include the following:
Army—micro electro-mechanical systems inertial measurement unit, micro electro-mechanical systems safety arm, ferroelectric and micro electro-mechanical systems phase shifter, low-cost materials for improved protection, rotorcraft cabin floor structure, embedded sensors, and armor manufacturing;
Air Force—MQ-9 Reaper, F-35 Joint Strike Fighter induct inlet, high-durability hot exhaust structures, F-135 Pratt & Whitney propulsion system, sensor hardening for tactical systems, and X-Band thin array radar; and
Navy—P-8A aircraft.
To identify practices and criteria used by leading commercial companies that can be used to improve DOD’s manufacturing process, we selected and visited five companies based on several criteria: companies that (1) make products that are comparable to DOD’s in terms of complexity, (2) are recognized as leaders in developing manufacturing readiness criteria, or (3) have won awards for their manufacturing best practices, or a mix of the above. We met with these companies to discuss their product-development life cycle; the methods and metrics they use to measure manufacturing maturity and producibility; manufacturing risk management; supplier management; the steps they take to mitigate manufacturing risks, ensure manufacturing readiness, and improve supplier quality; and the key factors in each company’s successful manufacturing outcomes. We generalized much of the information due to the proprietary nature of the data relating to their manufacturing processes. Several companies provided data on specific processes or products that they agreed to allow us to include in this report. We reported on four of the five companies we visited.
The five companies we visited include the following:
GE Aviation, a leading aerospace company, whose portfolio includes commercial engines and services, military engines and services, business and general aviation, engine components, and aviation systems. We met with manufacturing and quality officials in Cincinnati, Ohio, and discussed their manufacturing practices and manufacturing maturity metrics. We also toured their Lean Lab production facility and saw how these practices were applied.
GE Healthcare, which manufactures a range of products and services that includes medical imaging and information technologies and medical diagnostics. We met with manufacturing officials at their Milwaukee, Wisconsin, plant and discussed their manufacturing practices, including the development and manufacturing of their Gemstone scintillator for use on advanced CT scanners.
Honeywell Aerospace, a global provider of integrated avionics, engines, systems, and services for aircraft manufacturers, airlines, business and general aviation, and military and space operations. We met with manufacturing officials at their Phoenix, Arizona, facility and discussed their manufacturing maturity processes and the models and tools they used to assess them.
Siemens Mobility, a division of Siemens that develops and builds light rail cars for the North American market. We met with manufacturing and procurement officials at their Sacramento, California, manufacturing and assembly plant to discuss the manufacturing processes used in building their rail cars and their supplier management practices.
Toyota Motor Engineering and Manufacturing, which is responsible for Toyota’s engineering design, development, and manufacturing activities in North America. We met with officials in their production engineering division in Erlanger, Kentucky, and in their Toyota Technical Center located in Ann Arbor, Michigan, and discussed their vehicle development process and their methods for assuring supplier quality.
At each of the companies, we interviewed senior management officials knowledgeable about the manufacturing methods, techniques, and practices used throughout manufacturing and product development to ensure manufacturing maturity and producibility of their products. In particular, we discussed their (1) product-development life cycle and the methods, metrics, and tools used to determine manufacturing maturity and producibility, (2) methods for identifying and mitigating risks in manufacturing a product, and (3) methods for supplier management to provide a steady supply of quality parts. In addition, we compared the practices of commercial firms to two major defense weapon systems known to be producing systems within cost and schedule goals and with successful manufacturing outcomes. To evaluate these two programs, we examined program documentation and held discussions with program and contracting officials. The two systems we reviewed, along with the prime contractors responsible for developing the systems, are the Lakota aircraft, a light utility helicopter that conducts noncombat missions, funded by the Army and developed by the European Aeronautic Defence and Space Company; and the Standard Missile 3 Block 1A, a ship-based antiballistic missile, funded by the Missile Defense Agency and developed by Raytheon. To determine the challenges and barriers to MRL implementation efforts, we interviewed officials who were involved with the draft policy to standardize MRLs, as well as the military-service policy organizations that commented on the proposal. We also synthesized the information gathered at the various levels throughout the defense community to determine the issues surrounding MRLs as well as their merits.
These DOD organizations include the Office of the Secretary of Defense; each of the military-service policy groups and program offices; the Office of the Director, Defense Research and Engineering, Systems and Software Engineering; and the Joint Defense Manufacturing Technology Panel. To obtain an understanding of the workforce challenges in manufacturing, we reviewed selected documentation—such as Defense Science Board studies—and interviewed officials at each of the military services and the Defense Contract Management Agency who were responsible for workforce planning activities and revitalization initiatives. We conducted this performance audit from January 2009 to February 2010 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Manufacturing Readiness Level (MRL) Definitions MRL 1—Basic Manufacturing Implications Identified This is the lowest level of manufacturing readiness. The focus is to address manufacturing shortfalls and opportunities needed to achieve program objectives. Basic research (i.e., budget activity 6.1 funds) begins in the form of studies. MRL 2—Manufacturing Concepts Identified This level is characterized by describing the application of new manufacturing concepts. Applied research (i.e., budget activity 6.2 funds) translates basic research into solutions for broadly defined military needs. Typically this level of readiness in the science and technology environment includes identification, paper studies, and analysis of material and process approaches. An understanding of manufacturing feasibility and risk is emerging. MRL 3—Manufacturing Proof of Concept Developed This level begins the validation of the manufacturing concepts through analytical or laboratory experiments.
This level of readiness is typical of technologies in the science and technology funding categories of Applied Research and Advanced Development (i.e., budget activity 6.3 funds). Materials or processes, or both, have been characterized for manufacturability and availability, but further evaluation and demonstration are required. Experimental hardware models have been developed in a laboratory environment that may possess limited functionality. MRL 4—Capability to Produce the Technology in a Laboratory Environment This level of readiness is typical for science and technology programs in the budget activity 6.2 and 6.3 categories and acts as exit criteria for the materiel solution analysis phase approaching a milestone A decision. Technologies should have matured to at least technology readiness level 4. This level indicates that the technologies are ready for the technology-development phase of acquisition. At this point, required investments, such as manufacturing technology development, have been identified. Processes to ensure manufacturability, producibility, and quality are in place and are sufficient to produce technology demonstrators. Manufacturing risks have been identified for prototype build, and mitigation plans are in place. Target cost objectives have been established and manufacturing cost drivers have been identified. Producibility assessments of design concepts have been completed. Key design performance parameters have been identified, as well as any special tooling, facilities, material handling, and skills required. MRL 5—Capability to Produce Prototype Components in a Production-Relevant Environment This level of maturity is typical of the midpoint in the technology-development phase of acquisition or, in the case of key technologies, near the midpoint of an advanced technology-demonstration project. Technologies should have matured to at least technology readiness level 5.
The industrial base has been assessed to identify potential manufacturing sources. A manufacturing strategy has been refined and integrated with the risk-management plan. Identification of enabling/critical technologies and components is complete. Prototype materials, tooling and test equipment, as well as personnel skills, have been demonstrated on components in a production-relevant environment, but many manufacturing processes and procedures are still in development. Manufacturing technology development efforts have been initiated or are ongoing. Producibility assessments of key technologies and components are ongoing. A cost model has been constructed to assess projected manufacturing cost. MRL 6—Capability to Produce a Prototype System or Subsystem in a Production-Relevant Environment This MRL is associated with readiness for a milestone B decision to initiate an acquisition program by entering into the engineering and manufacturing development phase of acquisition. Technologies should have matured to at least technology readiness level 6. It is normally seen as the level of manufacturing readiness that denotes completion of science and technology development and acceptance into a preliminary system design. An initial manufacturing approach has been developed. The majority of manufacturing processes have been defined and characterized, but there are still significant engineering or design changes, or both, in the system itself. However, preliminary design of critical components has been completed and producibility assessments of key technologies are complete. Prototype materials, tooling and test equipment, as well as personnel skills have been demonstrated on systems or subsystems, or both, in a production-relevant environment. A cost analysis has been performed to assess projected manufacturing cost versus target cost objectives and the program has in place appropriate risk reduction to achieve cost requirements or establish a new baseline. 
This analysis should include design trades. Producibility considerations have shaped system-development plans. The industrial capabilities assessment for milestone B has been completed. Long-lead and key supply-chain elements have been identified. All subcontractors have been identified.

MRL 7—Capability to Produce Systems, Subsystems, or Components in a Production-Representative Environment

This level of manufacturing readiness is typical for the midpoint of the engineering and manufacturing development phase leading to the post-critical design review assessment. Technologies should be maturing to at least technology readiness level 7. System detailed design activity is underway. Material specifications have been approved and materials are available to meet the planned pilot-line build schedule. Manufacturing processes and procedures have been demonstrated in a production-representative environment. Detailed producibility trade studies and risk assessments are underway. The cost model has been updated with detailed designs, rolled up to system level, and tracked against allocated targets. Unit-cost reduction efforts have been prioritized and are underway. The supply chain and supplier quality assurance have been assessed, and long-lead procurement plans are in place. Production tooling and test equipment design and development have been initiated.

MRL 8—Pilot-Line Capability Demonstrated; Ready to Begin Low-Rate Initial Production

This level is associated with readiness for a milestone C decision and entry into low-rate initial production. Technologies should have matured to at least technology readiness level 7. Detailed system design is essentially complete and sufficiently stable to enter low-rate production. All materials are available to meet the planned low-rate production schedule. Manufacturing and quality processes and procedures have been proven in a pilot-line environment and are under control and ready for low-rate production.
Known producibility risks pose no significant challenges for low-rate production. The engineering cost model is driven by detailed design and has been validated with actual data. The Industrial Capability Assessment for milestone C has been completed and shows that the supply chain is established and stable.

MRL 9—Low-Rate Production Demonstrated; Capability in Place to Begin Full-Rate Production

At this level, the system, component, or item has been previously produced, is in production, or has successfully achieved low-rate initial production. Technologies should have matured to at least technology readiness level 9. This level of readiness is normally associated with readiness for entry into full-rate production. All systems-engineering/design requirements should have been met such that there are minimal system changes. Major system design features are stable and have been proven in test and evaluation. Materials are available to meet planned rate production schedules. Manufacturing process capability in a low-rate production environment is at an appropriate quality level to meet design key-characteristic tolerances. Production risk monitoring is ongoing. Low-rate initial production cost targets have been met, with learning curves validated. The cost model has been developed for the full-rate production environment and reflects the effect of continuous improvement.

MRL 10—Full-Rate Production Demonstrated and Lean Production Practices in Place

This is the highest level of production readiness. Technologies should have matured to at least technology readiness level 9. This level of manufacturing is normally associated with the production or sustainment phases of the acquisition life cycle. Engineering/design changes are few and generally limited to quality and cost improvements. Systems, components, or items are in full-rate production and meet all engineering, performance, quality, and reliability requirements.
Manufacturing process capability is at the appropriate quality level. All materials, tooling, inspection and test equipment, facilities, and manpower are in place and have met full-rate production requirements. Rate production unit costs meet goals, and funding is sufficient for production at required rates. Lean practices are well established and continuous process improvements are ongoing.

Appendix III: Manufacturing Readiness Level (MRL) Threads and Subthreads (Risk Areas)

[This appendix is a matrix that states, for each MRL from 1 through 10, the criteria to be assessed under each thread and subthread (risk area). The matrix did not survive conversion to text; only the thread structure and the general progression it describes, from identifying potential sources, processes, and risks at the lowest levels to proven, controlled capability in full-rate production (FRP) at MRL 10, are recoverable:

Industrial Base: industrial capability surveys; Industrial Capabilities Assessments (ICA) for Milestones B and C; management of sole, single, and foreign sources and development of alternative sources.

Manufacturing Technology Development: identification of required manufacturing technology solutions; their demonstration in production-relevant and production-representative environments; validation on a pilot line; continuous process improvement in FRP.

Design: producibility assessments and enhancement efforts (e.g., Design for Manufacturing); design maturity, including key performance parameters, key characteristics (KCs), and release of product data for manufacturing.

Cost and Funding: cost modeling from initial targets through validation against actual FRP costs; cost analysis and reduction initiatives; budget sufficiency for reaching the target MRL at each milestone and for production at required rates.

Materials: material maturity and specifications; availability (lead times, long-lead procurement, obsolescence); supply chain management; special handling procedures.

Process Capability and Control: production modeling and simulation; manufacturing process maturity demonstrated in production-relevant, production-representative, pilot-line, low-rate initial production (LRIP), and FRP environments; process yields and rates against targets.

Quality Management: quality strategy, targets, and supplier quality through continuous quality improvement in FRP.

Manufacturing Workforce: identification of required skill sets, certification, and training; plans to meet pilot-line, LRIP, and FRP personnel requirements.

Facilities: tooling and special test and inspection equipment (STE/SIE); manufacturing facilities and capacity planning through maximum FRP requirements.

Manufacturing Management: manufacturing strategy and planning (integrated master plan/schedule, work instructions, production control systems); materials planning (make/buy decisions, bill of materials, material planning systems validated on the FRP build).]

Appendix IV: Comments from the Department of Defense

DoD Response to GAO-10-439 Recommendations

GAO DRAFT REPORT DATED MARCH 12, 2010, GAO-10-439 (GAO CODE 120793), "BEST PRACTICE: DOD CAN ACHIEVE BETTER OUTCOMES BY STANDARDIZING THE WAY MANUFACTURING RISKS ARE MANAGED"

DEPARTMENT OF DEFENSE COMMENTS TO THE GAO RECOMMENDATIONS

RECOMMENDATION 1: The GAO recommends that the Secretary of Defense require the assessment of manufacturing readiness across DoD programs using consistent MRL criteria as a basis for measuring, assessing, reporting, and communicating manufacturing readiness and risk on science and technology transition projects and acquisition programs.

DOD RESPONSE: Partially concur. The Department of Defense recognizes that mature manufacturing processes and readiness are critical to achieving predictable and successful program outcomes. It also recognizes the value in assessing manufacturing risks during science and technology research on technologies planned to be incorporated into acquisition programs. Department of Defense Instruction (DoDI) 5000.02, Operation of the Defense Acquisition System, dated 8 December 2008, reflects an increased focus on manufacturing throughout the acquisition lifecycle for programs of all acquisition categories. Specifically, it establishes a framework to continually assess and mitigate manufacturing risks during the Analysis of Alternatives, 2366b certifications to Congress, Preliminary and Critical Design Reviews, and acquisition milestones. The Department's new manufacturing readiness criteria will form the basis for assessing pertinent science and technology efforts and acquisition programs throughout the acquisition lifecycle on programs of all acquisition categories. These criteria will be a tool to identify relevant manufacturing risks which require mitigation.
These manufacturing readiness criteria are expected to be tailored for programs and will be included in the Department's criteria for systems engineering technical reviews; the Department's templates for Preliminary Design Review/Critical Design Review reports; and acquisition phase exit criteria. These manufacturing readiness criteria will also be assessed as part of the Program Support Reviews which the Department conducts on Major Defense Acquisition Programs. These reviews evaluate manufacturing as part of an overall integrated program assessment. These manufacturing readiness criteria and products will be made available to government and industry. Their use by the Services on lower ACAT programs will also be encouraged. The Navy's Gate Review process currently assesses manufacturing risks but is being updated with the new manufacturing readiness criteria.

RECOMMENDATION 2: The GAO recommends that the Secretary of Defense direct the Office of the Director, Defense Research and Engineering to examine strengthening the MRL criteria related to the process capability and control of critical components and/or interfaces prior to the Milestone C low rate initial production decision.

DOD RESPONSE: Concur. Department of Defense Instruction 5000.02 directs that programs at Milestone C have no significant manufacturing risks; that manufacturing processes have been effectively demonstrated in a pilot line environment; and that manufacturing processes are under control (if Milestone C is full-rate production). While the Department notes that all manufacturing processes do not warrant the same level of process capability and control, appropriate levels of control are certainly warranted on a case by case basis. The Department will examine strengthening the manufacturing readiness criteria related to process capability and control of critical components and/or interfaces prior to the Milestone C low rate initial production decision.
However, program offices and contractors should continue to have the latitude to jointly agree on the targets and specific process control demonstrations required on the pilot production line during Engineering and Manufacturing Development to ensure success.

RECOMMENDATION 3: The GAO recommends that the Secretary of Defense direct the Office of the Director, Defense Research and Engineering to assess the need for analytical models and tools to support MRL assessments.

DOD RESPONSE: Concur. The Department will collaborate with government services, contractors, and academia to capture knowledge and provide improved tools for government and contractor usage in conducting assessments of manufacturing readiness as part of systems engineering technical reviews and milestone reviews.

RECOMMENDATION 4: The GAO recommends that the Secretary of Defense assess the adequacy of the manufacturing workforce knowledge and skills base across the military services and defense agencies and develop a plan to address current and future workforce gaps.

DOD RESPONSE: Concur. We agree that the production, quality and manufacturing (PQM) career field has suffered erosion, as have other DoD career fields. The USD (AT&L) Director of Human Capital has launched a review of the PQM career field design to identify the skills, knowledge, and training required at each level of career progression in order to develop training courses and evaluate progression of anticipated DoD planned new hires. The Department has started to implement hiring and retention strategies to mitigate the potential loss in experienced, senior-level PQM talent and increase the size of the manufacturing workforce. As part of the Secretary's growth strategy and other initiatives, the PQM career field is projected to grow by approximately 1,300 (13%) by FY2015.
Each of the military services and other DOD components has been actively planning and deploying initiatives that support the DOD acquisition workforce growth strategy. Components have submitted planning inputs to OSD and to the Defense Acquisition Workforce Senior Steering Board, and growth is underway.

Appendix V: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

Key contributors to this report were Karen Zuckerstein, Assistant Director; John M. Ortiz, Jr.; Beverly Breen; Leigh Ann Nally; Dr. W. Kendal Roberts; Andrea Bivens; Kristine Hassinger; Kenneth Patton; Bob Swierczek; and Dr. Timothy Persons, Chief Scientist.

Related GAO Products

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009.

Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD's Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008.

Best Practices: Stronger Practices Needed to Improve DOD Technology Transition Processes. GAO-06-883. Washington, D.C.: September 14, 2006.

Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.

Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.

Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.

Best Practices: DOD Can Help Suppliers Contribute More to Weapon System Programs. GAO/NSIAD-98-87. Washington, D.C.: March 17, 1998.

Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.
Why Some Weapon Systems Encounter Production Problems While Others Do Not: Six Case Studies. GAO/NSIAD-85-34. Washington, D.C.: May 24, 1985.
Cost growth and schedule delays are prevalent problems in acquiring defense weapon systems. Manufacturing these systems has proven difficult, particularly as programs transition to production. In December 2008, the Department of Defense (DOD) issued an updated version of its acquisition policy that reflects earlier consideration of manufacturing risks. A joint defense and industry group developed manufacturing readiness levels (MRL) to support assessments of manufacturing risks. Use of MRLs on all weapon acquisition programs has been proposed. In response to a congressional request, this report assesses the manufacturing problems faced by DOD, how MRLs can address manufacturing problems, how MRLs compare to manufacturing best practices of leading commercial firms, and challenges and barriers to implementing MRLs at DOD. In conducting our work, we contacted DOD, military services, and contractors; held interviews with leading commercial firms; reviewed program documents and policy proposals; and spoke with manufacturing experts. DOD faces problems in manufacturing weapon systems--systems cost far more and take much longer to build than estimated. Billions of dollars in cost growth occur as programs transition from development to production, and unit-cost increases are common after production begins. Several factors contribute to these problems, including inattention to manufacturing during planning and design, poor supplier management, and a deficit in manufacturing knowledge among the acquisition workforce. Essentially, programs did not identify and resolve manufacturing risks early in development, but carried risks into production where they emerged as significant problems. MRLs have been proposed as new criteria for improving the way DOD identifies and manages manufacturing risks and readiness. Introduced to the defense community in 2005, MRLs were developed from an extensive body of manufacturing knowledge that includes defense, industry, and academic sources. 
An analysis of DOD's technical reviews, which assess how programs are progressing, shows that MRLs address many gaps in core manufacturing-related areas, particularly during the early acquisition phases. Several Army and Air Force centers that piloted MRLs report that these metrics contributed to substantial cost benefits on a variety of technologies and major defense acquisition programs. To develop and manufacture products, the commercial firms we visited use a disciplined, gated process that emphasizes manufacturing criteria early in development. The practices they employ focus on gathering sufficient knowledge about the producibility of their products to lower risks, and include stringent manufacturing readiness criteria to measure whether the product is sufficiently mature to move forward in development. These criteria are similar to DOD's proposed MRLs in that commercial firms (1) assess producibility at each gate using clearly defined manufacturing criteria to gain knowledge about manufacturing early, (2) demonstrate manufacturing processes in a production-relevant environment, and (3) emphasize relationships with critical suppliers. However, a key difference is that commercial firms, prior to starting production, require their manufacturing processes to be in control--that is, critical processes are repeatable, sustainable, and consistently producing parts within quality standards. DOD's proposed MRL criteria do not require that processes be in control until later. Acceptance of MRLs has grown among some industry and DOD components. Yet, DOD has been slow to adopt a policy that would require MRLs across DOD. Concerns raised by the military services have centered on when and how the MRL assessments would be used. While a joint DOD and industry group has sought to address concerns and disseminate information on benefits, a consensus has not been reached. 
If MRLs are adopted, DOD will need to address gaps in workforce knowledge, given the decrease in the number of staff in the production and manufacturing career fields.
Background On January 14, 2004, the President articulated a new vision for space exploration for NASA. Part of the Vision includes the goal of retiring the space shuttle following completion of the International Space Station (ISS), planned for the end of the decade. In addition, NASA plans to begin developing a new manned exploration vehicle, the Crew Exploration Vehicle (CEV), to replace the space shuttle and return humans to the moon as early as 2015, but no later than 2020, in preparation for more ambitious future missions. As this Subcommittee is aware, NASA’s Administrator has recently expressed his desire to accelerate the CEV development to eliminate the gap between the end of the Space Shuttle Program, currently scheduled for 2010, and the first manned operational flight of the CEV, currently scheduled for 2014. If the CEV development cannot be accelerated, NASA will not be able to launch astronauts into space for several years and will likely have to rely on Russia for transportation to and from the ISS. A 1996 “Balance Agreement” between NASA and the Russian space agency obligated Russia to provide 11 Soyuz spacecraft for crew rotation of U.S. and Russian crews. After April 2006, this agreement will be fulfilled and Russia will no longer be obligated to allocate any of the seats on its Soyuzes for U.S. astronauts. Russian officials have indicated that they will no longer provide crew return services to NASA at no cost at that time. However, NASA may face challenges in compensating Russia for seats on its Soyuzes after the agreement is fulfilled due to restrictions in the Iran Nonproliferation Act. The space shuttle, NASA’s largest individual program, is an essential element of NASA’s ability to implement the Vision because it is the only launch system presently capable of transporting the remaining components necessary to complete assembly of the ISS. 
NASA projects that it will need to conduct an estimated 28 flights over the next 5 to 6 years to complete assembly of and provide logistical support to the ISS. However, NASA is currently examining alternative ISS configurations to meet the goals of the Vision and satisfy NASA’s international partners, while requiring as few space shuttle flights as possible to complete assembly. Prior to retiring the space shuttle, NASA will first need to return the space shuttle safely to flight and execute whatever number of remaining missions is needed to complete assembly of and provide support for the ISS. At the same time, NASA will begin the process of closing out or transitioning its space shuttle assets that are no longer needed to support the program—such as its workforce, hardware, and facilities—to other NASA programs. The process of closing out or transitioning the program’s assets will extend well beyond the space shuttle’s final flight (see fig. 1). The planning window for the first flight is July 13 through July 31, 2005. Retiring the space shuttle and, in the larger context, implementing the Vision, will require that the Space Shuttle Program rely on its most important asset—its workforce. The space shuttle workforce consists of about 2,000 civil service and 15,600 contractor personnel, including a large number of engineers and scientists. While each of the NASA centers supports the Space Shuttle Program to some degree, the vast majority of this workforce is located at three of NASA’s Space Operations Centers: Johnson Space Center, Kennedy Space Center, and Marshall Space Flight Center. Data provided by NASA show that approximately one quarter of the workforce at its Space Operations centers is 51 years or older and about 33 percent will be eligible for retirement by fiscal year 2012. 
The space shuttle workforce and NASA’s human capital management have been the subject of many GAO and other reviews in the past that have highlighted various challenges to maintaining NASA’s science and engineering workforce. In addition, over the past few years, GAO and others in the federal government have underscored the importance of human capital management and strategic workforce planning. In response to an increased governmentwide focus on strategic human capital management, NASA has taken several steps to improve its human capital management. These include steps such as devising an agencywide strategic human capital plan, developing workforce analysis tools to assist in identifying critical skills needs, and requesting and receiving additional human capital flexibilities. Progress toward Developing a Strategy to Sustain the Space Shuttle Workforce Is Limited NASA has made only limited progress toward developing a detailed long-term strategy for sustaining its workforce through the space shuttle’s retirement. While NASA recognizes the importance of having in place a strategy for sustaining a critically skilled workforce to support space shuttle operations, it has only taken preliminary steps to do so. For example, the program identified lessons learned from the retirement of programs comparable to the space shuttle, such as the Air Force Titan IV Rocket Program. Among other things, the lessons-learned reports highlight the practices used by other programs when making personnel decisions, such as the importance of developing transition strategies and early retention planning. 
Other efforts have been initiated or are planned. For example, the program has contracted with the National Academy of Public Administration to assist it in planning for the space shuttle’s retirement and transition to future programs, and it has begun devising an acquisition strategy for updating propulsion system prime contracts at Marshall Space Flight Center to take into account the Vision’s goal of retiring the space shuttle following completion of the ISS. NASA’s prime contractor for space shuttle operations, USA, has also taken some preliminary steps, but its progress with these efforts depends on NASA making decisions that impact contractor requirements through the remainder of the program. For example, USA has begun to define its critical skills needs to continue supporting the Space Shuttle Program; devised a communication plan; contracted with a human capital consulting firm to conduct a comprehensive study of its workforce; and continued to monitor indicators of employee morale and workforce stability. Contractor officials said that further efforts to prepare for the space shuttle’s retirement and its impact on their workforce are on hold until NASA first makes decisions that impact the space shuttle’s remaining number of flights and thus the time frames for retiring the program and transitioning its assets. The Potential Impact of Workforce Problems and Other Challenges the Space Shuttle Program Faces Highlight the Need for Workforce Planning Making progress toward developing a detailed strategy for sustaining a critically skilled space shuttle workforce through the program’s retirement is important given the impact that workforce problems could have on NASA-wide goals. According to NASA officials, if the Space Shuttle Program faces difficulties in sustaining the necessary workforce, NASA-wide goals, such as implementing the Vision and proceeding with space exploration activities, could be impacted. 
For example, workforce problems could lead to a delay in flight certification for the space shuttle, which could result in a delay to the program’s overall flight schedule, thus compromising the goal of completing assembly of the ISS by 2010. In addition, officials said that space exploration activities could slip as much as 1 year for each year that the space shuttle’s operations are extended because NASA’s progress with these activities relies on funding and assets that are expected to be transferred from the Space Shuttle Program to other NASA programs. NASA officials told us they expect to face various challenges in sustaining the critically skilled space shuttle workforce. These challenges include the following: Retaining the current workforce. Because many in the current workforce will want to participate in or will be needed to support future phases of implementing the Vision, it may be difficult to retain them in the Space Shuttle Program. In addition, it may be difficult to provide certain employees with a transition path from the Space Shuttle Program to future programs following retirement. Impact on the prime contractor for space shuttle operations. Because USA was established specifically to perform ground and flight operations for the Space Shuttle Program, its future following the space shuttle’s retirement is uncertain. Contractor officials stated that a lack of long-term job security would cause difficulties in recruiting and retaining employees to continue supporting the space shuttle as it nears retirement. In addition, steps that the contractor may have to take to retain its workforce, such as paying retention bonuses, are likely to require funding above normal levels. Governmentwide budgetary constraints. 
Throughout the process of retiring the space shuttle, NASA, like other federal agencies, will have to contend with urgent challenges facing the federal budget that will put pressure on discretionary spending—such as investments in space programs—and require NASA to do more with fewer resources. Several Factors Have Impeded Workforce Planning Efforts While the Space Shuttle Program is still in the early stages of planning for the program’s retirement, its development of a detailed long-term strategy to sustain its future workforce is being hampered by several factors: Near-term focus on returning the space shuttle to flight. Since the Space Shuttle Columbia accident, the program has been focused on its near-term goal of returning the space shuttle safely to flight. While this focus is understandable given the importance of the space shuttle’s role in completing assembly of the ISS, it has led to the delay of efforts to determine future workforce needs. Uncertainties with respect to implementing the Vision. While the Vision has provided the Space Shuttle Program with the goal of retiring the program by 2010 upon completion of the ISS, the program lacks well-defined objectives or goals on which to base its workforce planning efforts. For example, NASA has not yet determined the final configuration of the ISS, the final number of flights for the space shuttle, how ISS operations will be supported after the space shuttle is retired, or the type of vehicle that will be used for space exploration. These determinations are important because they impact decisions about the transition of space shuttle assets. Lacking this information, NASA officials have said that their ability to progress with detailed long-term workforce planning is limited. 
Despite Uncertainties, NASA Could Follow a Strategic Human Capital Management Approach Despite these uncertainties, the Space Shuttle Program could follow a strategic human capital management approach to plan for sustaining its critically skilled workforce. Studies by several organizations, including GAO, have shown that successful organizations in both the public and private sectors follow a strategic human capital management approach, even when faced with an uncertain future environment. In our March 2005 report, we made recommendations aimed at better positioning NASA to sustain a critically skilled space shuttle workforce through retirement. In particular, we recommended that the agency begin identifying the Space Shuttle Program’s future workforce needs based upon various future scenarios the program could face. Scenario planning can allow the agency to progress with workforce planning, even when faced with uncertainties such as those surrounding the final number of space shuttle flights, the final configuration of the ISS and the vehicle that will be developed for exploration. The program can use the information provided by scenario planning to develop strategies for meeting the needs of its potential future scenarios. NASA concurred with our recommendation, and NASA’s Assistant Associate Administrator for the Space Shuttle program is leading an effort to address the recommendation. Since we issued our report and made our recommendation, NASA has taken action and publicly recognized that human capital management and critical skills retention will be a major challenge for the agency as it moves toward retiring the space shuttle. This recognition was most apparent at NASA’s Integrated Space Operations Summit held in March 2005. 
As part of the Summit process, NASA instituted panel teams to examine the Space Shuttle Program’s mission execution and transition needs from various perspectives and make recommendations aimed at ensuring that the program will execute its remaining missions safely as it transitions to supporting emerging exploration mission needs. The reports that resulted from these examinations are closely linked by a common theme—the importance of human capital management and critical skills retention to ensure success. In their reports, the panel teams highlighted similar challenges to those that we highlighted in our report. The panels made various recommendations to the Space Flight Leadership Council on steps that the program should take now to address human capital concerns. These recommendations included developing and implementing a critical skills retention plan, developing a communication plan to ensure the workforce is informed, and developing a detailed budget that includes funding for human capital retention and reductions, as well as establishing an agencywide team to integrate human capital planning efforts. Conclusions There is no question that NASA faces a challenging time ahead. Key decisions have to be made regarding final configuration and support of the ISS, the number of shuttle flights needed for those tasks, and the timing for development of future programs, such as the CEV—all in a constrained funding environment. In addition, any schedule slip in the completion of the construction of the ISS or in the CEV falling short of its accelerated initial availability (as soon as possible after space shuttle retirement) may extend the time the space shuttle is needed. But whatever decisions are made and courses of action taken, the need for sustaining a critically skilled workforce is paramount to the success of these programs. 
Despite a limited focus on human capital management in the past, NASA now acknowledges that it faces significant challenges in sustaining a critically skilled workforce and has taken steps to address these issues. We are encouraged by these actions and the fact that human capital management and critical skills retention were given such prominent attention throughout the recent Integrated Space Operations Summit process. The fact that our findings and conclusions were echoed by the panel teams established to support the Integrated Space Operations Summit is a persuasive reason for NASA leadership to begin addressing these human capital issues early and aggressively. Madam Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. Contacts and Acknowledgments For further information regarding this testimony, please contact Allen Li at (202) 512-4841 or lia@gao.gov. Individuals making key contributions to this testimony included Alison Heafitz, Jim Morrison, Shelby S. Oakley, Karen Sloan, and T.J. Thomson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Aeronautics and Space Administration's (NASA) space shuttle program is key to implementing the President's vision for space exploration, which calls for completing the assembly of the International Space Station (ISS) by the end of the decade. Currently, the space shuttle, which is to be retired after ISS assembly is completed, is the only launch system capable of transporting ISS components. To meet the goals of the President's vision and satisfy ISS's international partners, NASA is examining alternative launch vehicles and ISS configurations. Retiring the space shuttle and, in the larger context, implementing the President's vision, will require NASA to rely on its most important asset--its workforce. Because maintaining a skilled workforce through retirement will be challenging, GAO was asked to discuss the actions NASA has taken to sustain a skilled space shuttle workforce and the challenges it faces in doing so--findings reported on in March 2005 (see GAO, Space Shuttle: Actions Needed to Better Position NASA to Sustain Its Workforce through Retirement, GAO-05-230). While NASA recognizes the importance of sustaining a critically skilled workforce to support space shuttle operations, it has made limited progress toward developing a detailed long-term strategy to do so. At the time of our March 2005 review, the Space Shuttle Program had identified lessons learned from the retirement of comparable programs, and United Space Alliance--NASA's prime contractor for space shuttle operations--had begun to prepare for the impact of the space shuttle's retirement on its workforce. However, timely action to address workforce issues is critical given their potential impact on NASA-wide goals. Significant delays in implementing a strategy to sustain the space shuttle workforce would likely lead to larger problems, such as overstretched funding and failure to meet NASA program schedules. 
NASA and United Space Alliance acknowledge that sustaining their workforces will be difficult, particularly if a career path beyond the space shuttle's retirement is not apparent. Fiscal challenges facing the federal government also make it unclear whether funding for retention tools, such as bonuses, will be available. Our March 2005 report identified several factors that have hampered the Space Shuttle Program's workforce planning efforts. For example, the program's near-term focus on returning the space shuttle safely to flight has delayed other efforts that will help the program determine its workforce requirements, such as assessing hardware and facility needs. Program officials also noted that due to uncertainties in implementing the President's vision for space exploration, requirements on which to base workforce planning efforts have yet to be defined. Despite these factors, our work on strategic workforce planning has shown that even when faced with uncertainty, successful organizations take steps, such as scenario planning, to better position themselves to meet future workforce requirements. Since we issued our report and made our recommendation, NASA has publicly recognized, at its Integrated Space Operations Summit, that human capital management and critical skills retention will be a major challenge for the agency as it progresses toward retirement of the space shuttle.
Background Employers provide retirement benefits using two basic types of plans— defined benefit plans and defined contribution plans. In a defined benefit plan, the employer determines the employee’s retirement benefit amount using specific formulas that consider factors such as age at retirement, years of service, and salary levels. Employers are responsible for ensuring that sufficient funds are available to pay promised benefits. The amount an employer must contribute to a defined benefit plan varies from year to year depending on changes in factors such as workforce demographics or investment earnings. Employees covered by a defined benefit plan are also protected by a federal plan termination insurance program administered by the Pension Benefit Guaranty Corporation (PBGC). In a defined contribution plan (also known as an individual account plan), the employer establishes an individual account for each eligible employee and generally promises to make a specified contribution to that account each year. Employee contributions are sometimes allowed or required. Each defined contribution plan specifies whether the plan participants, the employer, or both will make decisions about how the funds in the accounts are invested. Regardless of who makes the investment decisions in a defined contribution plan, the employer is not responsible for ensuring that a specified amount is available upon an employee’s retirement. An employee’s retirement benefit from such a plan depends on the total employer and employee contributions to the account as well as the investment returns that have accumulated in the account by the time the employee retires. In a defined contribution plan, the employee assumes the risk for the investments. Defined contribution plans include thrift savings plans, profit-sharing plans, and ESOPs. 
Such plans that allow employees to choose to contribute a portion of their pre-tax compensation to the plan under section 401(k) of the Internal Revenue Code are generally referred to as 401(k) plans. Investment income earned on a 401(k) plan accumulates tax free until an individual withdraws the funds. Table 1 shows the number of plans, plan assets, and plan participants for 1993 for single-employer defined benefit and defined contribution plans. ERISA imposes certain requirements and restrictions on those who manage and administer private pension plans. These fiduciary rules apply to both defined benefit and defined contribution plans and require, among other things, that plans diversify their investments and, more specifically, invest no more than 10 percent of total plan assets in employer securities and real property. Currently, ERISA exempts 401(k) plans from the diversification requirement and the 10-percent limitation rule. Accordingly, 401(k) plans can invest in employer securities and real property generally without restriction. (See app. II for more details on federal fiduciary rules on investment of plan assets.) In August 1997, the Congress amended title I of ERISA to protect plan participants in 401(k) plans that require that employee contributions be invested in employer securities and real property. Section 1524 of the Taxpayer Relief Act of 1997, which takes effect in 1999, will extend the ERISA 10-percent limitation rule on investments in employer securities and real property to that portion of these 401(k) plans consisting of employee contributions and the earnings thereon unless they meet one of three exemptions. A 401(k) plan is exempt if the fair market value of the assets of all the defined contribution plans the employer maintains is no more than 10 percent of the fair market value of the assets of all the employer’s pension plans. 
A 401(k) plan is also exempt if it requires that not more than 1 percent of an employee’s compensation be invested in employer securities and real property. Finally, ESOPs are exempt. Relatively Few 401(k) Plans Invested in Employer Securities and Real Property Less than 2 percent of 401(k) plans invested in employer securities and real property in 1993. Because many of the 401(k) plans that owned employer securities and real property were large plans, however, the number of participants covered and the value of employer securities and real property were substantial. Participant-directed plans had about 3.9 million participants and about $40.7 billion in employer securities and real property. Employer-directed plans covered 1.4 million participants and had $12.3 billion invested in employer securities and real property. Large 401(k) Plans Owned Most of the Employer Securities and Real Property Only 2,449 of the 159,196 401(k) plans that filed a Form 5500 for 1993 reported that they had invested in employer securities or real property. As shown in table 2, a relatively few large 401(k) plans owned most of the employer securities and real property. In this regard, 109 plans with 10,000 or more participants owned over $34 billion (nearly 65 percent) of the $53 billion of employer securities and real property owned by all 401(k) plans. These large plans also covered most of the participants in 401(k) plans that owned any employer securities or real property. Plans with more than 10,000 participants covered 57 percent of participants. Plans with 1,000 or more participants covered 92 percent of the participants and owned 95 percent of employer securities and real property. Because of the influence of large plans, the $53 billion of employer securities and real property owned represented about 11 percent of all 401(k) plan assets, and the 5.3 million participants represented almost 26 percent of the participants in all 401(k) plans. (See fig. 1.) 
Employer real property investments represented only $381 million (less than 1 percent) of the $53 billion in employer securities and real property owned by 401(k) plans. Eleven large 401(k) plans owned employer real property in 1993. Individual plan holdings ranged from a low of about $4,000 to approximately $340 million; however, two plans owned 96 percent of the employer real property. These two plans collectively owned about $365 million of employer real property, with separate holdings of $340 million and $25 million. For these two plans, employer real property represented 56 and 87 percent of their plan assets, respectively. For the other nine plans, employer real property generally represented 15 percent or less of total plan assets. One possible reason for the relatively low number of 401(k) plans that owned employer securities and real property is that many plans may not allow such investments. Periodically, the Bureau of Labor Statistics (BLS) conducts a survey of private nonfarm establishments with 100 or more workers and develops information on the types of investments that pension plans may make. BLS estimates that 49 percent of employees in thrift savings plans (which BLS officials said are a proxy for 401(k) plans) were in plans in 1993 that permitted ownership of employer securities. Another reason may be that many employers sponsoring 401(k) plans are too small to issue their own company securities. Most 401(k) Plan Participants Directed Investment of Their Own Contributions Important to the issue of the need for protections for 401(k) plan investments are the number of 401(k) plans that are employer directed and the number of participants in those plans. In an employer-directed plan, the employer—rather than the plan participant—decides how to invest participant contributions as well as the company’s own matching contributions, if any. 
In a participant-directed plan, the participant determines how to invest his or her contributions and may also determine how to invest the employer’s matching contributions. Information on employer-directed plans is important because it indicates the maximum number of individuals with no control over the investment decisions affecting their 401(k) plan assets. These individuals may be vulnerable to their employers’ investing significant amounts of their 401(k) plan assets in employer securities or real property. Form 5500 filings for 1993 indicate that about 35 percent of the 159,196 401(k) plans were employer directed. These plans, which totaled 55,411, accounted for about 27 percent of the participants and 27 percent of the assets of all 401(k) plans. The remaining 103,785 plans were participant directed and accounted for 73 percent of the participants and 73 percent of the assets of all 401(k) plans. (See table 3.) Although each of the employer-directed plans could theoretically have invested in employer securities and real property in 1993, only 756 plans (1.4 percent) actually did so. Because some of these 756 plans were large plans, they covered a disproportionately high percentage (25 percent) of all participants in employer-directed plans. In total, these plans had 1.4 million participants and $12.3 billion invested in employer securities and real property. About the same proportion of participant-directed plans invested in employer securities and real property. In total, 1,693 of 103,785 participant-directed plans (1.6 percent) owned this type of asset. Again, because some of these plans were large, they represented a much larger percentage (26 percent) of participants in participant-directed plans. These plans had 3.9 million participants and $40.7 billion invested in employer securities and real property. (See table 4.) 
Most Employer-Related Investments Associated With Supplemental 401(k) Plans Also important to the issue of the need for additional protections for 401(k) investments is whether a plan is the primary retirement plan or a supplemental one offered by the employer to eligible employees. A supplemental plan provides income in addition to that provided by a primary plan but may nonetheless represent a significant portion of an individual’s total retirement income. Form 5500 filings for 1993 indicate that the number of primary and supplemental 401(k) plans that actually owned employer securities and real property was roughly the same. Of the total of 2,449 401(k) plans, 1,302 (or 53 percent) were primary plans and the remaining 1,147 (or 47 percent) were supplemental plans. The 1,147 supplemental plans, however, owned 90 percent of all employer securities and real property and covered 81 percent of the participants in 401(k) plans that owned this type of asset. (See table 5.) Investment in Employer Securities and Real Property Generally 10 Percent or More of Plan Assets Of the 2,449 plans that invested in employer securities and real property in 1993, 1,679 (69 percent) had 10 percent or more of their assets invested in this type of asset. These 1,679 plans included 1,211 that had between 10 and 50 percent of assets invested in employer securities and real property and 468 that had 50 percent or more invested this way. (See table 6.) As table 6 illustrates, plans that had smaller percentages of assets invested in employer securities and real property had the most participants. Fifty-nine percent of the participants were in 401(k) plans that had less than 30 percent of their assets invested in employer securities and real property; almost 81 percent were in plans that had less than 50 percent invested this way. Plan size appeared to relate somewhat to the percentage of plan assets invested in employer securities and real property. 
Plans with fewer than 100 participants and plans with over 5,000 participants tended to invest a higher percentage of their total assets in employer securities and real property. The percentage of the plans’ assets invested in employer-related assets, however, generally did not exceed 30 percent of total plan assets. (See app. III for more information on investment in employer securities and real property by different sized plans.) Many Participants Have No Control Over Investments in Employer-Directed Plans Despite the concentration of participants and employer securities and real property in participant-directed supplemental plans, 756 employer-directed plans in 1993 had almost 1.4 million participants and about $12.3 billion invested in employer securities and real property. Participants in these plans reportedly had no choice in how the assets of their 401(k) plans, including their own contributions, were invested. Over 932,000 of these individuals had 10 percent or more of their 401(k) plan assets invested in employer securities or real property. (See fig. 2.) New Legislation Will Provide Additional Protection, but Administrative Problems Exist Enacting section 1524 of the Taxpayer Relief Act of 1997 was one of several actions that the Congress could have chosen to help safeguard the assets of participants in 401(k) plans requiring employee contributions to be invested in employer securities and real property. With enactment of this legislation, beginning in 1999, the provisions of the ERISA 10-percent limitation rule (which, before the Congress passed section 1524, applied only to defined benefit plans) will be applied to that portion of employer-directed 401(k) plans consisting of employee contributions and the earnings thereon unless the plans meet one of three exemptions. 
The new legislation will prevent employer-directed plans that have more than 10 percent of employee contributions invested in employer securities and real property from investing more employee contributions in assets of this type. The 10-percent limitation rule alone, however, cannot prevent a plan from investing employee contributions in employer securities and real property whose value is declining. In addition, certain information needed to implement and enforce the section 1524 provisions is not readily available. Changes have been proposed to the Form 5500, which, if implemented, may remedy some of the data deficiencies we identified before section 1524 goes into effect in 1999. ERISA 10-Percent Limitation Rule Cannot Always Protect Participants As has always been the case with defined benefit plans, the 10-percent limitation rule alone cannot always protect plan participants. All defined benefit plans that rely on the 10-percent limitation rule for protection also have to use other federal ERISA fiduciary rules, such as the diversification, prudent man, and exclusive benefit rules to protect plan participants. (See app. II.) Participants in employer-directed 401(k) plans with employer securities and real property investments at or near 10 percent of employee contributions and the earnings thereon are theoretically vulnerable to employers’ further investment in such assets. For illustrative purposes, assume that a plan in which the employer directs the investment of both the employee and employer contributions has 10 percent of employee contributions invested in employer securities. If the value of those employer securities declines to less than 10 percent of employee contributions, the employer may use employee contributions to buy additional employer securities until the 10-percent limit is once again reached. This situation occurs when employer securities underperform compared with other assets in which employee contributions are invested. 
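To make this dynamic concrete, the purchase "headroom" that a decline in employer-stock value creates under the 10-percent limit can be sketched in a few lines of Python. The function and all dollar figures are hypothetical illustrations, not the model used in the report:

```python
# Illustrative sketch (not from the report): a decline in the value of
# employer stock opens "headroom" under the section 1524 ten-percent limit,
# so new employee contributions may again be invested in employer securities.

def purchase_headroom(employee_contrib_assets, employer_securities_value):
    """Return how much more the plan could invest in employer securities
    before reaching the 10-percent limit on employee contributions."""
    limit = employee_contrib_assets / 10  # the ten-percent ceiling
    return max(0.0, limit - employer_securities_value)

# A plan exactly at the limit: $1,000,000 of employee-contribution assets,
# $100,000 (10 percent) of it in employer stock.
print(purchase_headroom(1_000_000.0, 100_000.0))  # 0.0 -> no further purchases

# Employer stock falls to $60,000 while other holdings stay flat, so total
# employee-contribution assets drop to $960,000.
print(purchase_headroom(960_000.0, 60_000.0))  # 36000.0 may buy more stock
```

In the second call, the drop in the stock's value means $36,000 of new employee contributions could again be directed into the declining employer securities, which is exactly the vulnerability described above.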
More specifically, additional employee contributions can be invested in employer securities when (1) the value of employer securities declines and other plan investments increase or remain constant, (2) the value of other plan investments appreciates and the value of employer securities appreciates less or does not change, or (3) other investments decline and the value of employer securities and real property declines more. In each of these cases, if the value of employer securities falls significantly below 10 percent of employee contributions, all new employee contributions could be invested in declining, nonperforming, or underperforming employer securities or real property. When the value of employer securities declines significantly compared with other nonemployer securities and real property plan investments, other securities in which employee contributions are invested could be sold to generate funds to buy employer securities. In such instances, however, these actions may be subject to review under the exclusive benefit or prudent man fiduciary rules. Employer-Directed Plans Cannot Be Identified With Certainty Section 1524 provisions apply to plans in which the employer requires employee contributions to be invested in employer securities or real property. The information currently provided on the Form 5500, however, is not reliable enough to identify such plans with certainty. The current Form 5500 has a section in which the filer enters a code to indicate that the plan is participant directed. If a filer does not enter the code, the plan is considered to be an employer-directed plan. 
According to the Form 5500 instructions, a participant-directed plan is “a pension plan that provides for individual accounts and permits a participant or beneficiary to exercise independent control over the assets in his or her account (see ERISA section 404(c)).” PWBA officials told us that they believe some filers were not completing this section because the filers were misinterpreting the Form 5500 instructions. That is, filers of plans other than 404(c) plans were not sure if they should complete this section. Therefore, some participant-directed plans were incorrectly classified as employer directed (the default if the section is not completed). Even when the filer completes the section, the form provides no way of indicating whether the participants direct the investment of employee contributions, employer contributions, or both. Revisions proposed to Form 5500 by officials of the Departments of Labor and the Treasury and of PBGC have three “feature codes” to better identify participant- and employer-directed 401(k) plans. These three codes will allow PWBA to more accurately determine which plans are employer directed or participant directed and what portion of the account the participants control. Exemptions Cannot Be Verified An employer-directed 401(k) plan is exempt from the section 1524 amendments if the fair market value of all the assets of the individual account plans the employer maintains is not more than 10 percent of the fair market value of the assets of all the employer’s pension plans. Under ERISA, the term “employer” means the employer sponsoring the plan and all the members of any controlled group of corporations to which the employer belongs. ERISA defines a controlled group as a group of corporations under common control (for example, a parent corporation and subsidiaries) in which the parent owns at least 50 percent of the voting stock. 
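As a rough sketch, the exemption test described above turns on whether the individual account plans of the employer and its controlled group hold more than 10 percent of the fair market value of all the employer's pension plan assets. The plan records and field names below are invented for illustration:

```python
# Hypothetical sketch of the section 1524 exemption test: the employer's
# individual account plans (aggregated across the controlled group) must
# hold no more than 10 percent of the fair market value of all the
# employer's pension plan assets. Plan data are invented.

def exempt_under_1524(plans):
    """plans: list of (plan_kind, fair_market_value) pairs, where plan_kind
    is 'individual_account' or 'defined_benefit'."""
    total = sum(value for _, value in plans)
    individual = sum(value for kind, value in plans if kind == "individual_account")
    return individual <= total / 10  # within the 10-percent threshold?

controlled_group = [
    ("defined_benefit", 950_000_000.0),    # parent corporation's pension plan
    ("individual_account", 40_000_000.0),  # subsidiary's 401(k) plan
]
print(exempt_under_1524(controlled_group))  # True: about 4 percent of all assets
```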
A plan filing a Form 5500 is not required to indicate its membership in a controlled group or identify the members of any controlled group to which it belongs. Likewise, information on controlled group composition is not currently available from the Department of Labor or the Internal Revenue Service (IRS). Therefore, it is not possible to assign individual plans to their respective controlled groups and determine if the individual account plans of the controlled group represent more than 10 percent of the assets of all the pension plans of that controlled group. Moreover, none of the proposed changes to the Form 5500 will identify members of a controlled group of corporations. Extent of Employer Securities and Real Property Investments Cannot Be Readily Identified Although the section 1524 provisions apply to 401(k) plans that require that employee contributions be invested in employer securities and real property, it may be difficult to determine the amount of employer securities and real property owned by these plans. This is because a 401(k) plan that pools its assets with those of other plans for investment purposes reports only one asset amount on the Form 5500. This amount represents its interest in the pooled arrangement but provides no information about separate investments, such as employer securities and real property. Although PWBA contracts with a private firm to spread these single amounts into separate investments, including employer securities and real property, some cannot be spread. Some of the plans whose assets are not spread may be 401(k) plans and may own employer securities or real property. The new Form 5500 will provide more information on the type and amount of assets a plan owns through a pooled investment arrangement. Under the proposal, a plan will continue to file its own Form 5500. In addition, reports for each pooled arrangement to which it belongs will be made in separate Form 5500 reports to PWBA. 
The Form 5500 for each pooled arrangement will show investments in stocks, bonds, employer securities, and the like. Attachments to the Form 5500 will provide the name, employer identification number, plan number, and monetary interest of each plan or other entity belonging to the pooled arrangement. Other Mechanisms Available to Safeguard Participants in Employer-Directed 401(k) Plans We identified other mechanisms that would be available to policymakers if additional safeguards are needed in the future. Two mechanisms—enhanced reporting and disclosure and prescribed education programs—identified during our review could be administratively implemented under existing authority provided in title I of ERISA. Two other mechanisms—adoption of the diversification requirement used for ESOPs and use of independent fiduciaries to examine investment decisions—would require the Congress to add or amend statutory requirements. Enhanced Reporting and Disclosure One such mechanism would use authority already granted to the Secretary of Labor to require that 401(k) plans provide information on the plan’s investment in employer securities and real property to plan participants. Although this mechanism could be applied only to employer-directed 401(k) plans requiring that employee contributions be invested in employer securities and real property, the same mechanism could also benefit participants in all 401(k) plans. ERISA requires that the plan administrator provide each plan participant with a summary of the pension plan information reported to the Department of Labor on the Form 5500. The summary annual report (SAR) must contain the information and be in the format prescribed by the Secretary of Labor. Currently, the Secretary does not require that the SAR contain information about employer securities and employer real property that the plan owns. 
The Secretary could require a revised and expanded SAR that would disclose the amount of employer securities owned by the plan, its current or fair market value, the percentage of the plan assets it represents, and whether the employer securities are publicly traded on a national exchange or privately held. Finally, plan participants could be provided some statement about the employer’s financial condition and other information to be more fully informed about their holdings and any potential risk associated with them. If needed, plans with a specified threshold of employer securities and real property could provide additional reports to PWBA officials. In this regard, additional reporting might focus on the specific nature, scope, and percentage of investment in those securities or real property; whether the securities are publicly traded or privately held; and whether they are valued at share or par value. Plans exceeding the threshold could also be required to report to PWBA if the shares of employer securities or the value of employer real property held by the 401(k) plan would decline precipitously within some specified time period. Reporting of this information to PWBA could enable the agency to more strictly scrutinize the plans and seek earlier enforcement efforts when appropriate to preserve a plan’s assets and secure the retirement income of plan participants. The enhanced reporting to PWBA would be similar to that currently provided to the PBGC under certain situations. Education Programs Another mechanism would be to require an educational program for plan participants in 401(k) plans that hold any or a prescribed threshold of qualifying employer securities and real property. 
The educational requirements for 401(k) plans with employer securities and real property might require that participants in the affected plans be told about or given materials informing them about (1) investment concepts, such as risk and return, diversification, dollar cost averaging, and compounded return; (2) historic differences in rates of return among different asset classes (for example, stocks, bonds, employer securities, or cash); (3) estimating future retirement income needs; (4) determining investment time horizons, including models involving hypothetical individuals with different time horizons and risk profiles; and (5) assessing risk tolerance. Participants could also be provided questionnaires, worksheets, software, and similar materials to give them a way to estimate future retirement income needs and assess the impact of different asset allocations on retirement income. Even with the benefits of increased participant knowledge, enrolling plan participants in educational programs would have little or no impact on the level of employer securities and real property investments if such investments are typically controlled by the employer. Nevertheless, such a program might broaden plan participants’ perspective and enable them to make better judgments about their retirement income security. Use of ESOP Diversification Requirements A third approach would extend the ESOP diversification requirements to employer-directed 401(k) plans requiring employer securities and real property investments. The Internal Revenue Code requires ESOPs to provide the means for “qualified participants” nearing retirement to diversify part of their ESOP account balance for stocks acquired after 1986. In general, beginning with the plan year following the participant’s reaching both age 55 and completing 10 years of plan participation, the plan must allow the participant to diversify at least 25 percent of the total account. 
Five years later, the participant must be allowed to diversify at least 50 percent of the account. Alternatively, the ESOP may distribute to the participant the amount that could be diversified. In all likelihood, this provision’s effectiveness in protecting participants whose employer’s stock or real property is declining in value would be minimal because the provision would not protect plan participants who had not reached age 55 and completed 10 years of plan participation. This latter impediment could be overcome by requiring periodic “open seasons” in which plan participants could diversify investments. Examination of Investment Decisions by Independent Fiduciaries A final option would be to require an independent fiduciary to examine and make decisions about whether and to what extent amounts of employer securities and real property could be contributed by the plan sponsor. Such a mechanism has been used frequently by PWBA to better protect plans in possible conflict-of-interest situations. An independent fiduciary could more discreetly and expertly examine whether or not the plan would be inordinately subjected to financial loss by certain levels of investment in employer securities and real property. The independent fiduciary would have broad authority to limit, remove, or conceivably add employer securities and real property to the mix of investments for that plan to better protect plan participants’ retirement income. In essence, the independent fiduciary would be an unaligned third party who could review the plan’s holdings and transactions to better ensure the proper diversification of plan assets and the value of holdings to improve the financial security of plan participants in such arrangements. The independent fiduciary would act as an honest broker seeking to minimize plan losses, maximize its profits, and eliminate instances of self-dealing which might otherwise threaten the plan’s financial security. 
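The ESOP diversification schedule described above reduces to a simple rule. The function below is an illustrative sketch that treats the thresholds exactly as summarized in the report (age 55 and 10 years of participation to qualify, at least 25 percent diversifiable at first, and at least 50 percent five years later):

```python
# Illustrative sketch of the ESOP diversification schedule for post-1986
# stock, as summarized in the report; details are simplified.

def diversifiable_fraction(age, years_in_plan, years_since_qualified):
    """Minimum fraction of the account a participant may diversify.
    A participant qualifies at age 55 with 10 years of plan participation;
    years_since_qualified counts plan years after qualification."""
    if age < 55 or years_in_plan < 10:
        return 0.0   # not yet a "qualified participant"
    if years_since_qualified < 5:
        return 0.25  # at least 25 percent in the first election period
    return 0.50      # at least 50 percent five years later

print(diversifiable_fraction(50, 12, 0))  # 0.0
print(diversifiable_fraction(56, 11, 1))  # 0.25
print(diversifiable_fraction(61, 16, 6))  # 0.5
```

The first case shows the impediment noted above: a participant who has not reached age 55 with 10 years of participation may diversify nothing, no matter how the employer's stock performs.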
Although using this approach would minimize the possibility of self-dealing by the employer or the employer’s representative, it might add administrative cost otherwise not incurred by employer-directed plans. Agency Comments and Our Evaluation In commenting on a draft of this report, the Assistant Secretary for Pension and Welfare Benefits expressed overall agreement with our report findings. Our findings on the scope of employer securities and real property investments by 401(k) plans were considered to be “generally consistent with PWBA’s own, albeit less comprehensive, analysis of the Form 5500 Series data in this area.” The Assistant Secretary also indicated that the mechanisms we identified for policymakers to consider, should future alternatives to the section 1524 provisions be needed to protect participants in employer-directed 401(k) plans, merit further consideration. In this regard, the mechanisms discussed in our report will be studied when the Department of Labor’s Advisory Council on Employee Welfare and Pension Benefit Plans, which is also conducting a review of 401(k) plan investments in employer securities and real property, completes its study and makes recommendations to the Secretary of Labor. Our report findings and the Advisory Council’s recommendations will be used to determine what further action may be appropriate regarding employer securities and real property investments by 401(k) plans. The Assistant Secretary also had technical comments about our report. These focused on our discussion of (1) the PWBA contractor’s ability to fully identify the extent of employer securities and real property owned by 401(k) plans in 1993 and (2) the improvements proposed in the ongoing revision of the Form 5500 to provide more information on the type and amount of assets owned by plans through pooled investment arrangements. 
We made changes to the final report, where appropriate, after further discussions with PWBA officials about these issues. (See app. IV.) Observations Although it is always possible for some employers sponsoring 401(k) plans to go bankrupt in the future, the potential for a large number of workers to lose benefits because their 401(k) plan is invested in a bankrupt employer’s securities or real property is not widespread. For the latest year for which data are available, fewer than 2,500 of the nearly 160,000 401(k) plans that filed Form 5500s reported that they owned employer securities and real property. In addition, most participants in 401(k) plans that owned employer securities and real property had participant-directed 401(k) plans that PWBA identified as supplemental to other pension plans the employer offered. Nonetheless, in 1993, 756 401(k) plans in which the employer directed the investment of all plan assets had 1.4 million participants. Participants in employer-directed 401(k) plans will always be somewhat vulnerable to investment decisions over which they have no control. However, beginning in 1999, the recently enacted section 1524 provisions in title I will prevent employer-directed plans that have more than 10 percent of employee contributions invested in employer securities and real property from investing more employee contributions in assets of this type. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Labor, the Secretary of the Treasury, the Commissioner of the Internal Revenue Service, the Director of the Office of Management and Budget, and other interested parties. Please contact me on (202) 512-7215 or Fred E. Yohey, Jr., Assistant Director, on (202) 512-7218 if you have any questions about this report. 
Major contributors to this report are listed in appendix V. Scope and Methodology The Chairman of the House Ways and Means Committee asked us for information on the Employee Retirement Income Security Act of 1974 (ERISA) rules governing investments by 401(k) plans. After subsequent discussions with the chairman’s office, we agreed to address the following questions: (1) To what extent are 401(k) pension plan assets invested in employer securities and real property? (2) What potential problems may be associated with implementing and enforcing the title I amendments? and (3) What mechanisms could safeguard the retirement benefits of participants in employer-directed 401(k) plans that invest in employer securities and real property? To determine the extent to which the assets of single-employer defined benefit and defined contribution plans were invested in employer securities and real property, we analyzed the Internal Revenue Service (IRS) Form 5500 computerized database maintained by the Department of Labor’s Pension and Welfare Benefits Administration (PWBA). Under ERISA, all private employers are required to annually report certain financial, participant, and actuarial data for each of their defined benefit and defined contribution plans. Our data analysis was limited to the most recent plan year (plan year 1993) for which final plan-specific data were available. The data we developed differ from those produced by PWBA and published in its Abstract of 1993 Form 5500 Annual Reports. Although PWBA’s data are based on analyses of all large plans with 100 participants or more combined with a representative sample of 5 percent of small plan (fewer than 100 participants) filers, our report data are based on the analyses of data submitted by each large and small pension plan contained in the databases that PWBA provided for our review. In addition, we worked extensively with IRS to identify all known 401(k) plans in the Form 5500 database. 
We did not independently verify the accuracy of the research databases because IRS and PWBA check the data for accuracy and consistency. Importantly, the data we analyzed were accurate only to the extent that employers exercised appropriate care in completing their annual Form 5500 reports. To examine the protections provided by and any potential problems associated with the recent amendments to ERISA, we discussed the amendments with officials of the Departments of Labor and Treasury. We also developed a model to show the amount of employee contributions that could be invested in employer securities and real property assuming a 10-percent limitation on such investments and various changes in the value of employer securities and other investments. Finally, we determined whether available Form 5500 data were adequate to implement and enforce the new amendments. To determine what rules and other mechanisms could be considered to better protect plan participants in employer-directed 401(k) plans, we identified alternative strategies already authorized by federal law. In addition, we identified or learned about mechanisms suggested by experts that could be considered to protect pension plan assets—and ultimately future retirement benefits—of plan participants and their families. The organizations with which we discussed these strategies are the American Association of Retired Persons; 401(k) Association; Profit Sharing/401(k) Council; ERISA Industry Committee; Association of Private Pension and Welfare Plans; American Institute of Certified Public Accountants; Investment Company Institute; Employee Benefit Research Institute; and Financial Executives Institute, as well as officials from the Department of the Treasury, IRS, and PWBA. We also spoke with a former Minority Counsel for the Senate Committee on Labor and Public Welfare (1970 to 1975), who is a nationally recognized expert on private pension plan issues. 
We performed our review in Washington, D.C., from November 1996 through September 1997 in accordance with generally accepted government auditing standards. Federal Fiduciary Rules on Investing Pension Plan Assets The Employee Retirement Income Security Act of 1974 (ERISA) imposes fiduciary rules on the conduct of those charged with managing or administering a pension plan, including investing plan assets. All pension plans are subject to these rules; however, significant exemptions exist for defined contribution plans, including 401(k) plans. ERISA Fiduciary Investment Rules Apply to Defined Benefit and Defined Contribution Plans According to title I of ERISA, pension plan fiduciaries—people exercising discretionary authority or control for managing pension plans or disposing of their assets—have a duty to act solely in the interest of plan participants and beneficiaries regarding the pension plans they manage or administer. Under this broad general requirement, ERISA requires a fiduciary to observe the following rules: The “exclusive purpose” rule. The fiduciary must act with undivided loyalty to participants and beneficiaries, providing benefits and defraying reasonable expenses in administering the plan. The “prudent man” rule. The fiduciary must act with the care, skill, prudence, and diligence under the prevailing circumstances that a prudent person, acting in a like capacity and familiar with such matters, would use in the same sort of situation. The “diversification” rule. The fiduciary must diversify the plan’s investments by type, geographic area, dates of maturity, and industrial classification to minimize the risk of large losses. The diversification rule attempts to minimize the risk of large losses that might occur from an overconcentration of plan assets in any of these four areas. The fiduciary must observe the requirements of, and act in accordance with, the documents and instructions governing the plan. 
In addition to the fiduciary rules, ERISA also specifies a number of prohibited transactions. A fiduciary is barred from engaging in such transactions if he or she knows or should know that ERISA prohibits them. The types of transactions prohibited by ERISA are those involving an inherent conflict of interest between the plan and people associated with the plan (“parties-in-interest”). For example, ERISA prohibits fiduciaries from allowing a plan to engage in a transaction for selling or leasing any property; lending money or extending credit; furnishing goods, services, or facilities between the plan and a party-in-interest; or transferring any plan assets to a party-in-interest. ERISA also prohibits fiduciaries from dealing with a plan’s assets for their own personal interest and involving the plan in transactions with parties whose interests are adverse to the plan’s participants or beneficiaries. Finally, ERISA has a rule that places a 10-percent limitation (10-percent limitation rule) on acquiring and holding qualified employer securities and qualified employer real property. The 10-percent limitation rule states that a plan may not acquire any qualified employer securities or real property if immediately after the acquisition the aggregate fair market value of such assets exceeds 10 percent of the fair market value of the plan’s total assets. Employer securities and real property that appreciate in value after acquisition to 10 percent or more of total plan assets do not have to be sold. Eligible Defined Contribution Plans Are Exempt Although the fiduciary rules discussed above apply to both defined benefit and defined contribution plans, specific exemptions do exist for eligible defined contribution plans. Such plans may include profit-sharing plans, thrift savings plans, money purchase plans, and employee stock ownership plans as well as 401(k) plans with some restrictions after the passage of the Taxpayer Relief Act of 1997. 
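The 10-percent limitation rule described above amounts to a test applied at the moment of acquisition. The sketch below (with hypothetical figures) assumes the purchase merely converts other plan assets into employer securities, so total plan assets are unchanged:

```python
# Hedged sketch of the ERISA 10-percent limitation rule: an acquisition of
# employer securities or real property is barred if, immediately afterward,
# such holdings would exceed 10 percent of the fair market value of total
# plan assets. Figures are hypothetical.

def acquisition_allowed(total_plan_assets, employer_holdings, purchase):
    """Would buying `purchase` more employer securities/real property keep
    the plan within the 10-percent limitation rule? Assumes the purchase
    converts other plan assets, leaving total plan assets unchanged."""
    return employer_holdings + purchase <= total_plan_assets / 10

# A $50 million plan holding $3 million of employer stock:
print(acquisition_allowed(50_000_000, 3_000_000, 1_500_000))  # True  (9 percent after)
print(acquisition_allowed(50_000_000, 3_000_000, 2_500_000))  # False (11 percent after)
```

Consistent with the rule as stated, holdings that appreciate past 10 percent after acquisition would fail no test here, because the check applies only at the time of purchase and appreciated assets need not be sold.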
The Congress’ decision to exempt these defined contribution plans may reflect recognition that certain defined contribution plans have always invested in employer securities or real property. Concerning acquiring and holding employer securities and real property, defined contribution plans are specifically exempt from the general diversification requirement and from the prudence requirement to the extent that the prudence requirement would require diversification. Accordingly, fiduciary rules do not require defined contribution plans to diversify plan assets. Nor are defined contribution plans required to follow ERISA’s 10-percent limitation rule on investments in employer securities and real property as long as the transactions are for adequate consideration and no commissions are charged. So, qualifying defined contribution plans may currently acquire or hold as much employer securities and real property as they want. Additional Data on Pension Plans’ Investments in Employer Securities and Real Property This appendix contains additional information on pension plans’ investments in employer securities and real property. Table III.1 shows plan year 1993 data on investments in employer securities and real property among defined benefit, employee stock ownership plans (ESOP), other defined contribution plans, and 401(k) plans. Table III.2 shows the percentage of 401(k) plan assets invested in employer securities and real property by plan size. Overall Investment in Employer Securities and Real Property Only 10,191 of the 634,280 single-employer defined benefit and defined contribution pension plans that filed a Form 5500 for plan year 1993 owned employer securities or real property. Total value of these employer-related investments, including both securities and real property, was approximately $162 billion. ESOPs, which are required by law to purchase and hold employer stock, were the main owners of this type of asset. 
In fact, ESOPs owned about $91 billion of employer stock, which amounted to more than all other types of plans combined. The second largest owners were 401(k) plans, with 2,449 plans owning about $53 billion. Defined benefit plans and defined contribution plans (other than ESOPs and 401(k) plans) owned considerably less. (See table III.1.) Employer-related investments consisted almost exclusively of employer securities. Of the $162 billion of employer securities and real property owned, only $598 million was identified as employer real property. Overall, we identified only 86 large plans that owned employer real property, including 9 ESOPs, 11 401(k) plans, 44 defined benefit plans, and 22 other types of defined contribution plans. The 10,191 plans that invested in employer securities and real property covered about 16.6 million workers. This included approximately 5.4 million in ESOPs, 5.3 million in 401(k) plans, 4.9 million in defined benefit plans, and about 1 million in other defined contribution plans. [Table III.1 reports, by plan type, employer securities owned, employer real property owned, and total employer securities and real property owned, in billions of dollars. Note to table: Includes 613 ESOPs with a 401(k) feature; in total, these 613 ESOPs had approximately 2.1 million participants and $54 billion of their $100 billion of plan assets invested in employer securities and real property in 1993.] Percentage of 401(k) Plan Assets Invested in Employer Securities and Real Property by Plan Size The size of 401(k) plans appeared to relate somewhat to the percentage of plan assets invested in employer securities and real property. In this regard, plans with less than 100 participants and plans with over 5,000 participants tended in 1993 to invest a higher percentage of their total plan assets in employer securities and real property. 
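The size-by-share relationship described here is a simple cross-tabulation of plan records. The sketch below shows how such a distribution could be computed; the bucket boundaries beyond the "up to 9 percent" column, and all sample data, are our own invention, not the report's:

```python
from collections import defaultdict

def size_bucket(participants):
    """Hypothetical plan-size classes."""
    if participants < 100:
        return "<100 participants"
    if participants <= 5000:
        return "100-5,000 participants"
    return ">5,000 participants"

def share_bucket(pct):
    """Hypothetical investment-share columns; only the first mirrors the report."""
    if pct <= 9:
        return "0-9%"
    if pct <= 49:
        return "10-49%"
    if pct <= 89:
        return "50-89%"
    return "90-100%"

def distribution(plans):
    """plans: iterable of (participant_count, percent of assets in employer
    securities and real property). Returns, for each size class, the
    percentage of that class's plans falling in each share bucket."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for participants, pct in plans:
        size = size_bucket(participants)
        counts[size][share_bucket(pct)] += 1
        totals[size] += 1
    return {size: {bucket: 100.0 * n / totals[size] for bucket, n in row.items()}
            for size, row in counts.items()}
```

Each resulting cell is read the same way as table III.2: the share of plans of a given size whose employer-related holdings fall in a given range.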
The entries in each cell in table III.2 represent the percentage of plans of the size shown in column one that have the percentage of plan assets shown at the top of the column invested in employer securities and real property. For example, the first cell shows that 28 percent of the plans with less than 100 participants have up to 9 percent of plan assets invested in employer securities or real property. Comments From the Department of Labor Major Contributors to This Report Fred E. Yohey, Jr., Assistant Director, (202) 512-7218 Harry A. Johnson, Evaluator-in-Charge Dennis M. Gehley, Senior Evaluator Paula J. Bonin, Program Analyst
Pursuant to a congressional request, GAO provided information on: (1) the extent to which 401(k) plan assets are invested in employer securities and real property; (2) the protection and any possible problems associated with the recent amendments to title I of the Employee Retirement Income Security Act of 1974 (ERISA); and (3) alternate mechanisms that might safeguard the retirement benefits of participants in 401(k) plans in which the employer decides how to invest assets. GAO noted that: (1) only 2,449 of about 160,000 401(k) plans owned employer securities or real property in 1993; (2) these plans owned $53 billion of employer securities and real property and covered 5.3 million plan participants; (3) in most of these plans, plan participants directed the investment of their own contributions; (4) plans for which the employer solely decided how to invest assets totaled 756; (5) in these plans, employees exercised no control over how their 401(k) plan assets were invested and the employer made all the investment decisions; (6) these plans covered 1.4 million participants and had $12.3 billion invested in employer securities and real property; (7) in August 1997, the Congress amended title I of ERISA to provide that defined contribution 401(k) plans that require employee contributions to be invested in employer securities and real property may invest not more than 10 percent of employee contributions in this way; (8) this change increases protection for 401(k) plan participants; (9) the 10-percent limitation rule alone does not, however, prevent plans from investing employee contributions in employer securities and real property whose value is declining; (10) some of the information needed to implement and enforce the new legislation is not readily available; (11) proposed changes to the Department of Labor's Form 5500, if implemented, may remedy some of the data deficiencies; (12) other mechanisms are available to policymakers if alternate safeguards are 
needed in the future; and (13) these mechanisms include enhanced reporting and disclosure, prescribed education programs, adoption of the diversification requirement used for employee stock ownership plans, and use of independent fiduciaries to examine investment decisions.
GAO Performs a Broad Range of Work for Congress GAO has broad statutory authority under title 31 of the United States Code to audit and evaluate agency financial transactions, programs, and activities. To carry out these audit and evaluation authorities, GAO has a broad statutory right of access to agency records. Using the authority granted under title 31, we perform a range of work to support Congress that, among other things, includes the following: Evaluations of federal programs, policies, operations, and performance: For example, evaluations of transportation security programs related to passenger-screening operations at airports, our work to assess enforcement of immigration laws, and our work on the U.S. Coast Guard’s Deepwater acquisition to replace its aging fleet. Management and financial audits to determine whether public funds are being spent efficiently, effectively, and in accordance with applicable laws: For example, DHS’s appropriations acts for fiscal years 2002 through 2006 have mandated that we review expenditure plans for the U.S. Visitor and Immigrant Status Indicator Technology (U.S.VISIT) program. Investigations to assess whether illegal or improper activities may have occurred: For example, we investigated the Federal Emergency Management Agency’s (FEMA) Individuals and Households Program to determine the vulnerability of the program to fraud and abuse in the wake of Hurricanes Katrina and Rita. Constructive engagements in which we work proactively with agencies, when appropriate, to help guide their efforts toward transformation and achieving positive results: For example, we have worked to establish such an arrangement with the Transportation Security Administration (TSA) on its design and implementation of the Secure Flight Program for passenger pre-screening for domestic flights whereby we could review documents on system development as they were being formulated and provide TSA with our preliminary observations for its consideration. 
Congress mandated TSA certify that the design and implementation of the program would meet 10 specific criteria. Congress also mandated that we review and comment on TSA’s certification. TSA’s certification has not yet occurred. Auditing Standards and Our Protocols Address Accessing Information We carry out most of our work in accordance with generally accepted government auditing standards. Our analysts and financial auditors are responsible for planning, conducting, and reporting their work in a timely manner without internal or external impairments. These standards require that analysts and financial auditors promptly obtain sufficient, competent, and relevant evidence to provide a reasonable basis for any related findings and conclusions. Therefore, prompt access to all records and other information associated with these activities is needed for the effective and efficient performance of our work. Our work involves different collection approaches to meet the evidence requirements of generally accepted government auditing standards. Such evidence falls into four categories: physical (the results of direct inspection or observation); documentary (information created by and for an agency, such as letters, memorandums, contracts, management and accounting records, and other documents in various formats, including electronic databases); testimonial (the results of face-to-face, telephone, or written inquiries, interviews, and questionnaires); and analytical (developed by or for GAO through computations, data comparisons, and other analyses). We have promulgated protocols describing how we will interact with the agencies we audit. We expect that agencies will promptly comply with our requests for all categories of needed information. 
We also expect that we will receive full and timely access to agency officials who have stewardship over the requested records; to agency employees responsible for the programs, issues, events, operations, and other factors covered by such records; and to contractor personnel supporting such programs, issues, events, and operations. In addition, we expect that we will have timely access to an agency’s facilities and other relevant locations while trying to minimize interruptions to an agency’s operations when conducting work related to requests for information. We provide an appropriate level of security to information obtained during the course of our work. We are statutorily required to maintain the same level of confidentiality of information as is required of the agency from which it is received, and we take very seriously our obligation to safeguard the wide range of sensitive information we routinely receive. For example, we ensure that GAO employees have appropriate security clearances to access information. We also have well-established security policies and procedures. Timely access to information, facilities, and other relevant locations is in the best interests of both GAO and the agencies. We need to efficiently use the time available to complete our work to minimize the impact on the agency being reviewed and to meet the time frames of our congressional clients. Therefore, we expect that an agency’s leadership and internal procedures will recognize the importance of and support prompt responses to our requests for information. When we believe that delays in obtaining requested access significantly impede our work, we contact the agency’s leadership for resolution and notify our congressional clients, as appropriate. 
DHS Has Implemented Burdensome Processes for Working with GAO Unlike those of many other executive agencies, DHS’s processes for working with us include extensive coordination among program officials, liaisons, and attorneys at the departmental and component levels and centralized control for all incoming GAO requests for information and outgoing documents. In an April 2004 directive on GAO relations, DHS established a department liaison to manage its relationship with us. In addition, DHS has a GAO coordinator within all of its components and, within the DHS General Counsel Office, an Assistant General Counsel for General Law who provides advice on GAO relations. According to the directive, the department liaison (1) receives and coordinates all GAO notifications of new work, (2) participates in all entrance conferences, and (3) notifies the Assistant General Counsel of new work to obtain participation of counsel. The directive requires the Assistant General Counsel to participate in all entrance meetings to ensure that the scope of any request is clear and finite, and that mutual obligations between DHS and GAO are met. The component coordinator handles all matters involving GAO for the component, generally participates in GAO entrance meetings, and seeks the advice of the component’s counsel, as appropriate. The following figure illustrates the coordination of information among DHS officials described above when we make a request for information. Typically, when we begin an engagement, we send a letter to the department liaison to notify DHS that we are starting a new engagement and we request an entrance meeting to discuss the work. During the course of our review, we provide written requests for meetings and documents to component coordinators using a DHS-prescribed form. The component coordinators then forward our requests to program officials and consult with component counsel, who may consult with the Assistant General Counsel. (See fig. 1.) 
Figure 1: DHS Process for Working with GAO. In a memo that transmitted the above directive to senior managers in DHS components, the then-Under Secretary for Management emphasized the importance of a positive working relationship between the two agencies. The memo stated that failure to meet or brief GAO staff in a timely manner, as well as being viewed as nonresponsive to GAO document requests, could result in tense and acrimonious interactions. The Under Secretary also reminded senior officials that prompt and professional discharge of their responsibilities to GAO requests could affect both DHS’s funding and restrictions attached to that funding. GAO Has Experienced Difficulties Accessing DHS Information In testimony before this committee and the House Committee on Appropriations, Subcommittee on Homeland Security in February 2007, we stated that DHS has not made its management or operational decisions transparent enough to allow Congress to be sure that the department is effectively, efficiently, and economically using its billions of dollars of annual funding. We also noted that our work for Congress to assess DHS’s operations has been significantly hampered by long delays in obtaining access to program documents and officials. We emphasized that for Congress, GAO, and others to independently assess the department’s efforts, DHS would need to become more transparent and minimize recurring delays in providing access to information on its programs and operations. At most federal agencies and in some cases within DHS, we obtain the information we need directly from program officials, often on the spot or very soon after making the request. For example, our work on the Secure Border Initiative (SBI) has so far met with a very welcome degree of access to both DHS officials and documents. SBI is a comprehensive multiyear program established in November 2005 to secure U.S. borders and reduce illegal immigration. One element of SBI is SBInet, the program within the U.S. 
Customs and Border Protection (CBP) responsible for developing a comprehensive border protection system of tactical infrastructure, rapid response capability, and technology. The fiscal year 2007 Department of Homeland Security Appropriations Act required that, before DHS could obligate $950 million of the $1.2 billion appropriated for SBInet, it had to prepare a plan for expending these funds, have it reviewed by GAO, and then submit it to Congress for approval. The plan was to be submitted within 60 days of the act’s passage. CBP officials provided us office space at CBP headquarters, gave us access to all levels of SBInet management, and promptly provided us with all the documentation we requested, much of which was still in draft form and predecisional. DHS met the 60-day requirement when it submitted its plan to the Appropriations Committees on December 4, 2006. We met our responsibilities by being able to review the plan as it developed over the 60-day period, and to provide the results of our review to the House and Senate Appropriations Committees on December 7 and 13, 2006, respectively. In contrast to the access we were afforded in the above example, the process used in most of our interactions with DHS is layered and time-consuming. As discussed earlier, we are asked to submit each request for documents to the component coordinator rather than directly to program officials even if we have already met with these officials. Also as mentioned earlier, the component coordinator often refers our request to component counsel. And the Assistant General Counsel for General Law in DHS’s General Counsel’s office may become involved. The result is that we often wait for months for information that in many cases could be provided immediately. In some cases, DHS does not furnish information until our review is nearly finished, greatly impeding our ability to provide a full and timely perspective on the program under review. 
Each access issue with DHS requires that we make numerous and repetitive follow-up inquiries. Sometimes, despite GAO’s right of access to information, DHS delays providing information as it vets concerns internally, such as whether the information is considered deliberative or predecisional. At other times, we experience delays without DHS expressing either a concern or a cause for the delays. On other occasions, DHS is unable to tell us when we might obtain requested information or even if we will obtain it. We have encountered access issues in numerous engagements, and the lengths of delay are both varied and significant and have affected our ability to do our work in a timely manner. We have experienced delays with DHS components that include CBP, U.S. Immigration and Customs Enforcement (ICE), FEMA, and TSA on different types of work such as information sharing, immigration, emergency preparedness in primary and secondary schools, and accounting systems. I have examples of two engagements to share with you today that illustrate the types of delays we experience and how they have affected the timing of our work. My first example is of an engagement related to detention standards for aliens in custody, where the team working on this engagement experienced delays of up to 5 months in obtaining various documents. The objective of this work, which is still under way and is being done for the House Committee on Homeland Security, is to assess ICE efforts to review facilities that house alien detainees, determine whether the facilities have complied with DHS standards, and determine the extent that complaints have been filed about conditions in the facilities. Some of the facilities are owned and operated by DHS; others are operated under contract with DHS. In order to determine the extent to which facilities are complying with DHS standards, we requested that ICE provide copies of the reports of inspections it conducted in 2006 at 23 detention facilities. 
We requested those reports in December 2006 and did not receive the final four of the inspection reports until just last week, after DHS departmental intervention. We had several meetings and discussions with DHS officials including program officials, liaisons, and attorneys, and we were never provided a satisfactory answer about the reason for this 5-month delay. We also experienced delays on this engagement obtaining a copy of the contract for detainee phone services between ICE and the phone service contractor. DHS took 1 month to provide the contract and redacted almost the entire document because a DHS attorney contended the information was “privileged.” We followed up with DHS officials to communicate that our authority provided for access to this type of information and then waited another 2 weeks before we were able to get an unredacted copy of the contract. In another engagement being done at the request of the then-Chairman of the House Committee on Government Reform, we are reviewing an emergency preparedness exercise that DHS conducted in June 2006 called Forward Challenge 06. The purpose of the exercise was to allow agencies to activate their continuity of operations plans, deploy essential personnel to an alternate site, and perform essential functions as a means of assessing their mission readiness. Our objective is to determine the extent to which participating agencies were testing the procedures, personnel, and resources necessary to perform essential functions in their continuity-of-operations plans during the exercise. We began our work a few months before the exercise and had arranged with DHS to observe the actual exercise. However, 2 days before its start, DHS officials told us we would not be permitted to observe the exercise and stated that after completion, they would instead brief us on the exercise and the lessons they had learned from it. 
They provided that briefing in August 2006, at which time we requested relevant documentation to support the claims the DHS officials made to us. Subsequently, in November 2006, DHS provided us with one-third of the agency after-action reports we requested but redacted key information, including the identity of the participating agencies. DHS, however, was reluctant to provide us with the balance of the documents requested, stating that it considered these to be “deliberative materials” and expressing concern that sharing these with us would have a significant and negative impact on participants’ level of openness in future exercises. Despite GAO’s right of access to the information, the involvement of GAO and DHS officials at the highest level, and a letter of support from the former and current chairman of the committee, we did not receive access to the requested documentation until March 2007. Our report for this engagement was to be issued in November 2006; because we did not receive the needed information until March 2007, we will not be able to issue our analysis until later this year. GAO Has Taken and Suggested Steps to Resolve Access Issues with DHS We have made good-faith efforts to resolve access issues. Specifically, we have undertaken many steps to work with DHS to resolve delays as expeditiously as possible and gain access to information needed for our work. At the audit team level, we have asked staff to set reasonable time frames for requesting that DHS provide information and arrange meetings and, when we encounter resistance, to ensure that the information we request is critical to satisfying the audit objectives. When delays occur, our approach is to involve various management levels at both GAO and DHS, beginning with lower-level managers and working up to the Comptroller General and the Secretary. 
At each level, our managers and legal staff contact their counterpart liaisons and counsel, component heads, or DHS senior managers, as appropriate, either by telephone, e-mail, or letter, to communicate our access authority and need for the information to satisfy audit objectives. Our communication efforts have generally resulted in obtaining the requested or alternative information, or making other accommodations. We have proposed to DHS that the department take several steps that would enhance the efficiency of its process. First, our staff should be able to deal directly with program officials after we have held our initial entrance conference. If these officials have concerns about providing us requested information, they can involve DHS liaison or coordinators. Second, to the extent that DHS counsel finds it necessary to screen certain sensitive documents, it should do so on an exception basis. Other documents should be provided directly to us without prior review or approval by counsel. We provide DHS several opportunities to learn how we are using the information its officials provide us—we provide routine updates on our work to program officials; we provide program officials, liaisons, and counsel a “statement of facts” that basically describes what we learned during the engagement; and we formally provide DHS a copy of our draft report that contains our evidence, conclusions, and recommendations for its comment. There is no reason to hold information back from us when it has been made available to contractors, other federal agencies, state and local governments, or the public, or when its only sensitivity is that DHS considers it confidential or classified. The Secretary of DHS and the Under Secretary for Management have stated their desire to work with us to resolve access issues. We are willing to work with DHS to resolve any access-related concerns. 
Nevertheless, we remain troubled that the design and implementation of the current DHS process is routinely causing unnecessary delays. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. Contact Information For further information about this statement, please contact Norman J. Rabkin, Managing Director, Homeland Security and Justice Issues, at (202) 512-8777 or rabkinn@gao.gov. Individuals making key contributions to this testimony include Linda Watson, John Vocino, Jan Montgomery, Geoff Hamilton, and Richard Ascarate. Appendix I: Key GAO Audit and Access Authorities GAO’s Audit and Evaluation Authority GAO has broad statutory authority under title 31 of the United States Code to audit and evaluate agency financial transactions, programs, and activities. Under 31 U.S.C. § 712, GAO has authority to investigate all matters related to the receipt, disbursement, and use of public money. Section 717 of title 31, U.S.C., authorizes GAO to evaluate the results of programs and activities of federal agencies, on GAO’s own initiative or when requested by either house of Congress or a committee of jurisdiction. Section 3523(a) of title 31 authorizes GAO to audit the financial transactions of each agency, except as specifically provided by law. GAO’s Access-to-Records Authority To carry out these audit and evaluation authorities, GAO has a broad statutory right of access to agency records. Under 31 U.S.C. § 716(a), federal agencies are required to provide GAO with information about their duties, powers, activities, organization, and financial transactions. When an agency does not make a record available to GAO within a reasonable period of time, GAO may issue a written request to the agency head specifying the record needed and the authority for accessing the record. 
Should the agency fail to release the record to GAO, GAO has the authority to enforce its requests for records by filing a civil action to compel production of records in federal district court. A limitation in section 716, while not restricting GAO’s basic statutory right of access, acts to limit GAO’s ability to compel production of particular records through a court action. For example, GAO may not bring such an action to enforce its statutory right of access to a record where the President or the Director of the Office of Management and Budget certifies to the Comptroller General and Congress (1) that a record could be withheld under one of two specified provisions of the Freedom of Information Act (FOIA) and (2) disclosure to GAO reasonably could be expected to impair substantially the operations of the government. The first prong of this certification provision requires that such record could be withheld under FOIA pursuant to either 5 U.S.C. § 552(b)(5), relating to inter-agency or intra-agency memorandums or letters that would not be available by law to a party other than an agency in litigation with the agency, or 5 U.S.C. § 552(b)(7), relating to certain records or information compiled for law enforcement purposes. The second prong of the certification provision, regarding impairment of government operations, presents a very high standard for the agency to meet. The Senate report on this section 716 limitation stated: “As the presence of this additional test makes clear, the mere fact that materials sought are subject to 5 U.S.C. 552(b)(5) or (7) and therefore exempt from public disclosure does not justify withholding them from the Comptroller General. Currently GAO is routinely granted access to highly sensitive information, including internal memoranda and law enforcement files, and has established a fine record in protecting such information from improper use or disclosure. 
Thus, in order for the certification to be valid, there must be some unique or highly special circumstances to justify a conclusion that possession by the Comptroller General of the information could reasonably be expected to substantially impair Government operations.” The committee report also points out that the Comptroller General’s statutory right of access to agency records is not diminished by the certification provisions of the legislation. The certification simply allows the President or Director of the Office of Management and Budget (OMB) to preclude the Comptroller General from seeking a judicial remedy in certain limited situations. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In testimony before this committee and the House Committee on Appropriations, Subcommittee on Homeland Security in February 2007, GAO stated that the Department of Homeland Security (DHS) has not made its management or operational decisions transparent enough to allow Congress to be sure that the Department is effectively, efficiently, and economically using its billions of dollars of annual funding. GAO also noted that its work for Congress to assess DHS's operations has, at times, been significantly hampered by long delays in obtaining access to program documents. Following the aforementioned testimonies, GAO was asked to testify about its access issues. This testimony provides information on (1) the scope of GAO's work, (2) GAO protocols for accessing agency information, (3) DHS processes for working with GAO, (4) access issues GAO has encountered, and (5) steps GAO has taken to address these issues. To carry out its audit and evaluation authorities, GAO has a broad statutory right of access to agency records. Auditing standards require that analysts and financial auditors promptly obtain sufficient, competent, and relevant evidence to provide a reasonable basis for any related findings and conclusions. Therefore, prompt access to all records and other information associated with these activities is needed for the effective and efficient performance of GAO's work. This is also necessary in order for the Congress to be able to carry out its constitutional responsibilities in a timely and effective manner. Since DHS began operations in 2003, GAO has provided major analyses of the department's plans and programs for transportation security, immigration, Coast Guard, and emergency management. GAO has also reported on DHS's management functions such as human capital, financial management, and information technology. GAO's processes for working with departments and agencies across the federal government generally work well. 
The processes DHS has adopted, however, have frequently impeded GAO's efforts to carry out its mission by delaying access to documents required to assess the department's operations. These processes involve multiple layers of review by department- and component-level liaisons and attorneys and result in frequent and sometimes lengthy delays in obtaining information. GAO recognizes that the department has legitimate interests in protecting certain types of sensitive information from public disclosure. GAO shares that interest and follows strict security guidelines in handling such information. GAO similarly recognizes that agency officials will need to make judgments with respect to the manner and the processes they use in response to GAO's information requests. However, to date, because of the processes adopted to make these judgments, GAO has often not been able to do its work in a timely manner. GAO has eventually been able to obtain information and answer audit questions, but the delays experienced at DHS impede GAO's ability to conduct audit work efficiently and to provide timely information to congressional clients.
Background

The Forest Service, BLM, and Park Service manage more than 530 million acres of federal lands across the country (see fig. 1). Each agency has a unique mission, with priorities that shape how it manages those lands. Specifically:

The Forest Service manages more than 190 million acres to sustain the health, diversity, and productivity of the nation's forests and grasslands to meet the needs of present and future generations. The agency manages and issues permits for activities such as recreation, timber harvesting, mining, livestock grazing, and rights-of-way for road construction. The Forest Service manages lands under its jurisdiction through nine regional offices, 155 national forests, 20 grasslands, and over 600 districts (each forest has several districts).

BLM manages about 256 million acres to sustain the health, diversity, and productivity of the public lands for the use and enjoyment of present and future generations. The agency manages and issues permits for activities such as recreation, timber harvesting, mining, livestock grazing, and oil and gas development. BLM manages public lands under its jurisdiction through 12 state offices, with each state office having several subsidiary district and field offices.

The Park Service manages 391 national park units covering more than 84 million acres to conserve the scenery, natural and historic objects, and wildlife of the national park system so they will remain unimpaired for the enjoyment of this and future generations. The park units have varied designations corresponding to the natural or cultural features they are supposed to conserve, including national parks, monuments, lakeshores, seashores, recreation areas, preserves, and historic sites.

While managing their respective lands, these three agencies must comply with the Government Performance and Results Act of 1993 (GPRA).
This act shifts the focus of government decision making and accountability away from the activities that are undertaken—such as the number of plans developed—to the results of those activities, which, for the land management agencies, might include gains in resource protection and quality of recreational opportunities. Under GPRA, strategic plans are the starting point and basic underpinning for results-oriented management. As such, these plans should include, among other things, (1) results-oriented short- and long-term goals, (2) strategies to achieve the goals, (3) time frames for carrying out the strategies, and (4) performance measures to monitor incremental progress. Results-oriented goals can help agencies focus on the outcomes of their programs rather than on outputs such as staffing or numbers of activities. In addition, developing strategies is important so that agencies can identify how they intend to achieve their goals. Setting time frames for the strategies and developing performance measures to monitor incremental progress help ensure that agencies make progress toward achieving their goals in a timely manner. Finally, since one purpose of GPRA is to improve the management of federal agencies, it is particularly important that agencies' plans address key management challenges.

Federal agencies' management of OHV use on federal lands is also guided by two executive orders issued in the 1970s. The first executive order establishes policies and procedures to control and direct OHV use on federal lands in a manner that protects the resources of those lands, promotes the safety of all users, and minimizes conflicts among federal land uses; it also calls for agencies to communicate with the public about available OHV opportunities, prescribe appropriate penalties for violating OHV regulations, and monitor the effects of OHV use.
The executive order also directs each federal land management agency to develop and issue regulations that designate specific areas and trails on public lands as open or closed with respect to OHV use. In making these designations, agencies are directed to minimize damage to the soil, watersheds, vegetation, or other resources of the federal lands; harassment of wildlife or significant disruption of wildlife habitats; and conflicts between the use of OHVs and other types of recreation. The second executive order directs agency heads to close areas or trails if OHVs are causing considerable adverse effects. The Forest Service, BLM, and Park Service initially implemented these executive orders by designating areas as open, which allows cross-country OHV use; limited, which allows OHV use only on routes authorized by the agency; or closed, which prohibits OHV use. In recent years, the agencies have begun to reevaluate the procedures they use to make OHV designations—or are in the process of developing additional regulations for OHV use—in light of the growing popularity of OHV use. Specifically, in 2005, the Forest Service issued a travel management regulation, in part to standardize the process that individual national forests and grasslands use to designate the roads, trails, and areas that will be open to motorized travel. This designation process applies only to motorized vehicles and does not address other forms of transportation, such as biking, horseback riding, and hiking. After roads, trails, and areas are designated, the travel management regulation requires that motorized travel be limited to them, reducing the acreage within national forests that is open to cross-country travel. The travel management regulation also requires that designated roads, trails, and areas be displayed on a motor vehicle use map.
The Forest Service developed a schedule to complete the route designations and to develop the required motor vehicle use maps by the end of calendar year 2009. As of March 2009, the Forest Service had completed travel management planning for 53 million acres, or about 28 percent of its lands. In January 2009, the Forest Service updated its travel management guidance to provide individual forests with details on how to designate roads, trails, and areas for motorized use. This guidance, among other things, describes the process that forests should go through to make travel management decisions, including the criteria for making these decisions. These criteria include effects on natural and cultural resources, effects on public safety, provision of recreation opportunities, access needs, conflicts among uses of national forest lands, the need for maintenance, and the availability of resources for such maintenance. Like the Forest Service, BLM has begun to reevaluate the procedures it uses to make OHV designations. Over the past 10 years, BLM has issued increasingly detailed guidance on how its field offices should address travel management in their resource management plans. In accordance with the executive orders, BLM regulations require that all its lands be given an area designation of open, limited, or closed with respect to motorized travel and that these designations be based on protecting resources, promoting the safety of users, and minimizing conflicts between users. As of March 2009, BLM had designated about 32 percent of its lands as open to motorized travel, 48 percent as limited, and 4 percent as closed; the remaining 16 percent had not yet been designated. BLM's most recent guidance, issued in 2007, provides additional details on how field units should conduct travel planning in the context of resource management planning.
While updating a resource management plan, BLM field unit officials are to inventory and evaluate OHV routes and area designations (such as open, limited, and closed), seek public input, and make changes as appropriate. For example, when BLM's Moab Field Office in Utah finalized its resource management plan in October 2008, the plan changed the area designations of many lands under the field office's jurisdiction. Specifically, open areas were reduced from 1.2 million acres to 2,000 acres, limited areas were increased from 600,000 acres to 1.5 million acres, and closed areas were increased from 24,000 acres to 339,000 acres. For areas designated for limited OHV use, BLM guidance states that the resource management plan must include a map identifying the OHV route system. In addition, because of recent increases in OHV use on public lands and the potential for related resource damage, BLM's latest guidance encourages field units not to designate large areas as open to motorized travel. BLM headquarters officials have estimated that it will take about 10 years to finish updating resource management plans to include travel planning. The Park Service is currently developing regulations for OHV use in particular units. By regulation, the Park Service prohibits OHV use except in certain units designated as lakeshores, seashores, national recreation areas, or preserves. To authorize OHV use in such units, a unit is required to develop special regulations describing the areas where OHV use is permitted. Of the 391 national park units, 50 (13 percent) fall within one of these four designations. While many Park Service units with OHV use have developed special regulations, some units are still in the process of developing theirs. Many different types of OHVs are operated on federal lands.
For the purposes of this report, an OHV is any motorized vehicle capable of, or designed for, cross-country travel immediately on or over land, not including personal watercraft, snowmobiles, or aircraft. OHVs used on federal lands include off-highway motorcycles, all-terrain vehicles, utility terrain vehicles, dune buggies, swamp buggies, jeeps, and rock crawlers (see fig. 2). These vehicles may be used for various purposes, ranging from trail and open-area riding to hunting and accessing lakeshores, seashores, or in-holdings (private or state-owned lands inside the boundaries of federal lands). National OHV user groups have described OHV recreation as a way to experience challenge and excitement, enjoy the outdoors, and have fun as a family. In addition, OHV use may provide economic benefits to local communities near recreation sites. The environmental impacts of OHV use, both direct and indirect, have been studied and documented over the past several decades. In fact, in 2004, the Forest Service Chief identified unmanaged motorized recreation as one of the top four threats to national forests, estimating that there were more than 14,000 miles of user-created trails, which can lead to long-lasting damage. Potential environmental impacts associated with OHV use include damage to soil, vegetation, riparian areas or wetlands, water quality, and air quality, as well as noise, wildlife habitat fragmentation, and the spread of invasive species. For example, studies on the impacts of OHV use indicate that soil damage can increase erosion and runoff, as well as decrease the soil's ability to support vegetation. Additionally, research has shown that habitat fragmentation from OHV use alters the distribution of wildlife species across the landscape and affects many behaviors such as feeding, courtship, breeding, and migration; habitat fragmentation can also negatively affect wildlife beyond the actual amount of surface area disturbed by roads. In 2007, the U.S.
Geological Survey reported that as a result of OHV use, the size and abundance of native plants may be reduced, which in turn may permit invasive or nonnative plants to spread and dominate the plant community, thus diminishing overall biodiversity. Another potential impact of OHV use is damage to cultural resources, including archaeologically significant sites such as Native American grave sites, historic battlefields, fossilized remains, and ruins of ancient civilizations.

The Use of Off-Highway Vehicles Has Increased on Federal Lands, with Varying Environmental, Social, and Safety Impacts

OHV use on federal lands generally increased from fiscal year 2004 through fiscal year 2008, according to a majority of field unit officials from the Forest Service, BLM, and Park Service. Most field unit officials reported that environmental impacts associated with OHV use occurred on less than 20 percent of the lands they manage, although a few field unit officials reported that 80 percent or more of their lands are affected. Most field unit officials also indicated that social and safety impacts occasionally occurred on their lands.

Off-Highway Vehicle Use Has Increased over the Past 5 Fiscal Years

OHV use, including authorized and unauthorized use, increased on federal lands from fiscal year 2004 through fiscal year 2008. Specifically, most Forest Service and BLM field unit officials and some Park Service field unit officials reported an increase in authorized OHV use. Similarly, most BLM field unit officials, a majority of Forest Service field unit officials, and some Park Service field unit officials reported an increase in unauthorized OHV use. These agencies' field unit officials attributed the increased use of OHVs on federal lands to, among other things, a growing population in close proximity to federal lands and the rising popularity of OHV recreation.
In addition, officials at two field units we visited said they have seen an increase in OHV use on their units because of OHV closures on nearby state and private lands. For example, Park Service officials from Big Cypress National Preserve said that both private and public lands in South Florida have been closed to OHV use, leading to increased OHV use in the preserve. Similarly, Forest Service officials from the Tonto National Forest said that OHV use has increased since the state of Arizona closed lands near Phoenix to OHV use in an effort to reduce dust pollution. Most field unit officials reported that OHV use occurred on their lands from fiscal year 2004 through fiscal year 2008. Specifically, nearly all Forest Service and BLM field unit officials and a majority of Park Service field unit officials said that OHV use, whether authorized or unauthorized, occurred on the lands they manage. According to field unit officials from all three agencies, in an average year, OHVs were used on federal lands primarily for recreational activities such as trail and open-area riding. OHVs were also used on federal lands for hunting and game retrieval; to access particular areas, such as beaches and lakeshores; and for activities requiring a permit, such as geophysical exploration and ranching (see fig. 3). In addition, the amount of OHV use relative to other types of recreational activities on federal lands, such as fishing, hunting, hiking, and camping, varies by agency. For example, most Forest Service field unit officials said that OHV use constitutes less than half the recreational activity on their lands, while a majority of BLM field unit officials indicated that OHV use constitutes more than half the recreational activity on their lands. Most Park Service field unit officials, however, indicated that OHV use constitutes less than 10 percent of the recreation taking place on their lands, in part because OHV use is authorized only in certain Park Service field units. 
Environmental Impacts of OHV Use Occur on Less Than One-Fifth of Federal Lands

Most field unit officials from all three agencies indicated that environmental impacts of OHV use occur on less than 20 percent of the lands they manage; a few field unit officials, however, reported that 80 percent or more of their lands are affected by OHV-related environmental impacts. Forest Service and BLM field unit officials were more likely to report greater percentages of land with environmental impacts than Park Service field unit officials. The OHV-related environmental impacts that field unit officials identified as most widespread were soil erosion, damage to vegetation, wildlife habitat fragmentation, and the spread of invasive species. For example, officials from the Tonto National Forest in Arizona noted that the main impact associated with OHV use in the forest has been soil erosion, particularly in areas with highly erodible soils (see fig. 4). Additionally, officials from BLM's Phoenix District in Arizona noted that OHV use has fragmented desert tortoise habitat because the tortoise can be disturbed by OHV noise. Other reported environmental impacts included damage to riparian zones and harm to threatened or endangered species. The severity of certain OHV-related environmental impacts, such as soil damage, may also depend on the ecosystem in which OHV use occurs (see fig. 5). For example, BLM officials from the El Centro Field Office in southern California explained that the Imperial Sand Dunes are dynamic and soil damage from OHV use tends to be minimal, since most tracks are quickly erased by the wind. In contrast, certain desert ecosystems, including those in Arches National Park, have sensitive soils, and recovery from OHV-related disturbance to soils and plant life can be very slow.
Additionally, Forest Service officials from the Manti-LaSal National Forest in central Utah stated that soil erosion is a major environmental impact associated with OHV use on their forest. Damage to the forest's soils often occurs from OHV use in the late fall (after the first snow), when the ground is wet but not frozen. While officials at the Manti-LaSal National Forest said that these damaged areas could recover in about a year with rehabilitation efforts, the areas often take 4 to 5 years to recover because the forest lacks the staff to rehabilitate the lands more quickly. Similarly, Park Service officials in Big Cypress National Preserve said that the environmental impacts primarily associated with OHV use include disturbance to soils and vegetation, as well as disruption to the hydrology of the wetland ecosystem. These officials further stated that while plant life regenerates quickly, ruts from OHV use can persist for more than a decade.

OHV Use on Federal Lands Occasionally Results in Social and Safety Impacts, Including Fatalities

Social and safety impacts related to OHV use occasionally or rarely occur on federal lands, although an annual average of about 110 OHV-related fatalities occurred nationwide from fiscal year 2004 through fiscal year 2008, according to data provided by field unit officials. Forest Service and BLM field unit officials reported a higher frequency of OHV-related social and safety impacts than did Park Service field unit officials. The most frequently reported of these social and safety impacts were conflicts between OHV and nonmotorized users, displacement of nonmotorized users, conflicts with private landowners, and irresponsible OHV operation. For example, Forest Service officials at the Manti-LaSal National Forest said that motorized recreationists have taken over trails managed for nonmotorized use, resulting in conflicts between motorized and nonmotorized users.
Additionally, BLM officials at the Prineville District in central Oregon noted that private landowners adjacent to federal lands, frustrated with OHV users driving on their lands, have taken enforcement into their own hands by placing cables and rocks across trails to prevent unauthorized OHV use. BLM officials at the El Centro Field Office also said that many OHV-related violations are due to irresponsible behavior, such as failing to have a safety flag on an OHV or driving an OHV while under the influence of alcohol. Nearly all reported OHV-related fatalities occurred on Forest Service and BLM lands. Although a majority of field unit officials from all three agencies reported having no OHV-related fatalities from fiscal year 2004 through fiscal year 2008, some field unit officials did report fatalities—a maximum total of about 570 during that time frame at 117 field units. Specifically, Forest Service field unit officials reported about 250 fatalities at 68 field units, BLM officials about 320 fatalities at 45 field units, and Park Service officials 5 fatalities at 4 field units. While most field units that had OHV-related fatalities reported 5 or fewer, a few field unit officials reported between 10 and 75 fatalities.

Agencies' Plans for OHV Management Are Missing Some Key Elements of Strategic Planning

At a national level, the Forest Service's and BLM's management of OHVs is broadly guided by department-level strategic plans, as well as by more-specific agency-level plans. These plans, however, are missing some key elements of strategic planning—such as results-oriented goals, strategies to achieve the goals, time frames for implementing strategies, or performance measures to monitor incremental progress—that could improve OHV management.
The Park Service has no extensive planning or guidance for managing OHV use, but this absence seems reasonable given that Park Service regulations limit OHV use to only a few units and that OHV use is not a predominant recreational activity on Park Service lands. The Department of Agriculture's strategic plan includes a goal to protect forests and grasslands. Within the context of this goal, the plan specifically mentions OHV management, identifying unmanaged motorized recreation as one of four key threats to national forests. The plan also identifies a performance measure to develop travel plans—which designate roads, trails, and areas that will be open to motorized travel—for all national forests, with a target of completing these plans by 2010. In addition to this department-level plan, the Forest Service has an agency-level strategic plan that identifies a goal of sustaining and enhancing outdoor recreation opportunities and, in particular, improving the management of OHV use. The Forest Service's strategic plan also reiterates the performance measure identified by the department-level plan—to develop travel management plans for all forests that designate OHV roads, trails, and areas. While the agency plan includes a goal—improving the management of OHV use—and one strategy to achieve the goal—designating motorized roads, trails, and areas—the plan does not identify strategies to address—or time frames to implement—other important aspects of OHV management as identified in the executive orders, such as implementing motorized-travel designations on the ground, communicating with the public, monitoring OHV trail systems, or enforcing OHV regulations. Given that the Forest Service has identified unmanaged motorized recreation as one of the top four threats to national forests, the agency's strategic plan provides insufficient direction on this management challenge. Similar to the Forest Service, BLM's management of OHV use is guided by departmental planning.
The Department of the Interior's strategic plan identifies a broad goal of improving recreation opportunities for America, and BLM has two plans expanding on this goal for OHV-related activities. BLM's first plan, the "National Management Strategy for Motorized Off-Highway Vehicle Use on Public Lands," was published in 2001 as a first step in developing a proactive approach to on-the-ground management of OHVs. The second plan, BLM's "Priorities for Recreation and Visitor Services," was developed in 2003 and reconfirmed in 2007 as the agency's plan for recreation management, including OHV management. This recreation plan identifies numerous goals for OHV management, as well as strategies the agency can use to achieve each goal. For example, the plan identifies a goal of improving on-the-ground travel management and identifies three strategies to achieve that goal—conducting trail surveys to determine maintenance needs; implementing best management practices such as signs, maps, and the presence of agency staff in the field; and monitoring social outcomes and environmental conditions along trails. Despite identifying numerous goals and strategies to achieve the goals, BLM's recreation plan does not identify any time frames for implementing the strategies or any performance measures for monitoring incremental progress. For example, while the agency identifies a strategy of implementing best management practices, it identifies neither performance measures that could track the use of best management practices—such as the percentage of routes with signs or the number of field offices with up-to-date maps—nor time frames by which some of these best management practices should be implemented. Without performance measures and time frames, BLM cannot ensure that it is making progress toward achieving its goals in a timely manner.
Agencies' Field Units Reported Taking Many Actions, but Additional Efforts Could Improve Communication and Enforcement; a Majority of Units Said They Are Unable to Sustainably Manage OHV Use

Actions that agencies' field units reported taking to manage OHV use include supplementing federal funds with authorized outside resources (such as state grants), communicating with and educating the public, enforcing OHV regulations, and engineering and monitoring OHV trail systems. Additional efforts could improve communication with the public about OHV trails and areas and enforcement of OHV regulations. In addition, a majority of field unit officials reported that they cannot sustainably manage existing OHV areas; sustainable management would include having the necessary human and financial resources available to ensure compliance with regulations, educate users, maintain OHV use areas, and evaluate the existing OHV program.

Field Units Reported Supplementing Federal Funds with Authorized Outside Resources

Field units reported using authorized outside resources to manage OHV use, including grants from states and other sources, partnerships with OHV and other user groups, and user fees. Forest Service and BLM field unit officials were more likely than Park Service field unit officials to report using authorized outside resources. The most commonly identified sources of such resources for Forest Service and BLM units were grants from states and partnerships with OHV user groups; for the Park Service, the most commonly identified source was user fees or permits (see fig. 6). Of the field unit officials who reported supplementing federal funds with authorized outside resources, a majority indicated that additional funding sources amounted to more than 20 percent of their OHV management budgets, with some Forest Service and BLM field unit officials reporting that these sources amounted to more than half their OHV management budgets.
At most of the field units we visited with authorized OHV use, agency officials emphasized that outside resources are vital to OHV management. For example, officials at the Cleveland National Forest said that they would not have an OHV management program without the grants they receive from the state of California. These grants funded the development of the current OHV management program and have allowed the national forest to continue restoration, operations, and maintenance activities on its OHV routes. Similarly, Park Service officials at Assateague Island National Seashore in Maryland said that the fees they collect through their OHV permit program fund several year-round staff, and without the fees, they would not be able to support OHV use on Assateague Island. Officials at some of the field units we visited reported that obtaining and using authorized outside resources can require a significant investment of staff time. For example, BLM officials at the Phoenix District said that while volunteers can be a great source of outside resources, their labor is not free. Specifically, BLM officials spend significant time organizing and finding meaningful projects for volunteers that provide both a benefit to BLM and a rewarding experience for the volunteers. Similarly, Forest Service officials at the Cleveland National Forest said that applying for state grants is time-consuming for field unit staff, as some grant applications are about 150 pages long.

While Field Units Reported Taking Actions to Communicate with and Educate the Public, Additional Efforts Could Improve Communication about OHV Areas and Trails

All three agencies reported taking actions to communicate with and educate the public, including posting signs, providing maps, attending meetings with OHV user and other interest groups, and soliciting volunteers for maintenance and peer enforcement activities (see fig. 7).
Field unit officials indicated that the actions taken most often were posting signs, attending meetings of OHV user groups and other groups, and providing maps of OHV use areas. Forest Service and BLM field unit officials were more likely than Park Service field unit officials to report taking actions to communicate with and educate the public. Few Park Service field unit officials reported taking similar actions because many actions—such as developing adopt-a-route programs or soliciting volunteers for maintenance—are only appropriate in areas with authorized OHV use. Most field unit officials indicated that they post signs on OHV routes to describe the types of travel permitted on the route. A majority of officials who post signs also said that it is an effective OHV management action. Figure 8 shows a BLM Moab Field Office sign that stopped a vehicle from entering a streambed closed to OHV use. Only a few field unit officials with authorized OHV use in their units indicated that at least 90 percent of their OHV routes have been signed. About half of the field unit officials whose units authorize OHV use indicated that more than 50 percent of their OHV routes have been signed. For example, at the BLM Moab Field Office, we observed that the Sand Flats Recreation Area was extensively signed, with signs at the entrance to the recreation area, at parking areas, and at trailheads (see fig. 9). By contrast, another OHV use area at the same field office had fewer signs identifying which routes were open or closed (see fig. 10). Officials at a few locations we visited also mentioned that, because of theft or vandalism, maintenance of signs has been difficult, and they have developed techniques to limit such vandalism (see fig. 11). For example, BLM Phoenix District officials said that putting American flags on their signs has significantly reduced vandalism. 
Furthermore, BLM El Centro Field Office officials mentioned that designing signs in conjunction with OHV user groups can also limit vandalism by giving OHV users a stake in maintaining the signs. Similarly, a BLM Prineville District official mentioned that OHV users often respond more positively to signs directing them to where they can ride than to signs saying trails are closed. Most field unit officials from the Forest Service and BLM, and some field unit officials from the Park Service, said that they provide maps of OHV routes or use areas. Nevertheless, only some field unit officials with authorized OHV routes in their units indicated that they have maps for more than 90 percent of their OHV routes or areas. About half of field unit officials with authorized OHV routes indicated that they have maps for at least 50 percent of their OHV routes or areas. Officials from two field offices we visited mentioned that developing maps is expensive. To help offset this expense, officials from the BLM Moab Field Office said they are working with private companies to develop maps of the OHV routes; they hope to apply for a state grant to help fund the production of those maps for the public. Field unit officials from the Forest Service were more likely than those from the BLM or Park Service to indicate that they have maps for at least 50 percent of their OHV routes, possibly because the Forest Service has been developing motor vehicle use maps in response to its 2005 travel management regulation. While the Forest Service has acknowledged that the motor vehicle use map is designed to display a national forest’s designated roads, trails, and areas for enforcement purposes, rather than as a visitor map, officials at three forests we visited expressed concerns that the public has difficulties with motor vehicle use maps. In addition, both OHV user groups and environmental groups have expressed similar concerns. 
Specifically, a motor vehicle use map does not display all the information that may be found on a visitor map, such as topographic lines; landscape features such as streams; or other trails users might encounter, such as trails closed to motor vehicles (see fig. 12). Also, although Forest Service headquarters officials acknowledged that on-the-ground route markers would be very helpful for OHV users' navigation, they said that national forests have not necessarily erected these types of signs for all OHV routes. A majority of field unit officials indicated that they have developed partnerships with outside user groups. Specifically, officials at most field units we visited indicated they had solicited volunteers for OHV route maintenance or education activities. For example, officials at the BLM Phoenix District said they have used volunteers from environmental groups to help rehabilitate areas in the Lower Sonoran Desert National Monument, which is temporarily closed to OHV use. Similarly, officials from the BLM Moab Field Office mentioned partnerships they had developed with local OHV user groups. In assisting with route maintenance, these groups have contributed more labor hours than the field office's paid recreation staff.

While Field Units Reported Taking a Number of Actions to Enforce OHV Regulations, Additional Efforts Could Improve Enforcement

Forest Service, BLM, and Park Service field units reported taking a number of actions to enforce their OHV regulations. Most field unit officials indicated that they have taken a number of enforcement actions related to OHV use (see fig. 13). For example, nearly all Forest Service and BLM field unit officials and most Park Service officials said their units conduct occasional patrols of OHV routes or open areas. In addition, nearly all Forest Service field unit officials, and most BLM and Park Service officials, said their units issue written warnings or citations for OHV violations. 
Some field unit officials from all three agencies had also arrested individuals for OHV violations. Law enforcement officials at Forest Service headquarters mentioned that such arrests are often related to other violations, such as driving an OHV while under the influence of alcohol. Generally, field unit officials who took enforcement actions rated them as effective (see table 1). The most commonly used, but least effective, OHV enforcement action was conducting patrols of OHV routes or open areas occasionally. By contrast, the most effective action reported by field unit officials was conducting patrols of OHV routes or use areas routinely. Although three of the actions—requiring permits or fees for OHV access, arresting individuals for OHV violations, and revoking or suspending OHV use privileges—were used by only some field units, they were rated as more effective than the most commonly used action. For example, officials from Tonto National Forest said their experience with requiring OHV permits has been positive. The permits required for OHV use in certain areas of the forest are free and provide a lock combination allowing access into certain gated OHV areas for 6 months. Officials observed that requiring free permits increases user accountability, since users do not want to lose their riding privileges. The permits are also acceptable to the public because they are free. Only about half the field unit officials were satisfied with existing fines for OHV violations in their units. BLM field unit officials were less likely to be satisfied with their existing fines than Forest Service or Park Service officials. Additionally, about half the field unit officials indicated that existing fines were insufficient to deter illegal or unsafe OHV use. For example, one BLM official in Utah pointed out that the fine amount for driving in a closed area is $150. 
Although this fine is one of the highest fines for an OHV violation in the Moab area, the official said the amount is negligible when compared with the overall expense that most OHV enthusiasts invest in their sport, including the cost of an OHV, the trailer to transport it, and safety gear for the rider. Consistent with applicable laws, Forest Service and BLM maximum fine amounts for violations of OHV regulations are $500 and $1,000, respectively. But fine amounts for specific OHV-related violations are developed at the local level. Specifically, the 94 federal court districts throughout the country maintain fine schedules for violations of federal regulations. The U.S. Attorney in each federal court district is responsible for prosecuting individuals who violate OHV regulations within that district. Local judicial authorities, such as magistrates presiding in those federal court districts, have discretion to increase or decrease the existing fine schedules through local court rules. Consequently, fine amounts for similar OHV violations can vary substantially, depending on which federal court district the violation occurs in. For example, among California’s four federal court districts, the fine for disturbing land or wildlife while traveling off road in an OHV ranges from $50 in the central district up to $250 in the eastern district. To modify the fine schedule in a particular federal court district, agency officials must work with the relevant U.S. Attorney to petition the local magistrate within that district. In 2001, BLM proposed comparing fine amounts across various U.S. district courts to determine the range of fines for motorized OHV-related violations and then petitioning the courts to modify the fines where appropriate. BLM officials told us, however, that this analysis has not been conducted at a national level. 
In addition, officials at some of the field units we visited said they had recently petitioned to change the fine schedules or were planning such a petition in the future. For example, officials from the Forest Service and Park Service in Colorado said that they had successfully petitioned the local magistrate to raise the fines. An Uncompahgre National Forest official said that the new fine for riding an OHV off a designated route is $250, which he said is more appropriate. Some OHV violations are adjudicated in federal court, either because a law enforcement officer requires an OHV rider to make a court appearance or because the OHV rider decides to appeal a citation. Successful prosecution of OHV violations depends both on the availability and willingness of the U.S. Attorney's Office to pursue the case and on the receptiveness of the local magistrate to hearing OHV-related violations. About half of field unit officials indicated that the local U.S. Attorney's Office was responsive to OHV-related violations, and some indicated the same for federal magistrates. For example, a law enforcement officer from the Manti-La Sal National Forest said that he took a local magistrate on a tour of the forest and explained some of the problems the forest is having with unauthorized OHV use. After the tour, law enforcement officers successfully sought restitution payments from OHV violators to remediate OHV-related damage to the forest. By contrast, several officials at field units we visited mentioned that the U.S. Attorney's Office in their area has little time to address OHV-related violations because the office is prosecuting cases involving, for example, terrorism or violent crimes. 
Field Units Reported Taking Actions to Engineer and Monitor OHV Trail Systems

A majority of field unit officials indicated that, to help manage OHV use, they use engineering and monitoring actions, such as closing or relocating problematic OHV routes, providing separate motorized and nonmotorized recreational opportunities, monitoring the effects of OHV use, and designing trail systems (see fig. 14). Field unit officials from the Forest Service and BLM were more likely to use engineering and monitoring strategies than field unit officials from the Park Service. During our visits to field units, we observed several examples of officials' efforts to close or relocate problematic OHV routes, such as putting up gates or lining OHV routes with rocks (see fig. 15). For example, Curecanti National Recreation Area in Colorado, managed by the Park Service, allows OHV use to access the lakeshore. In some areas, access points are near cultural resources, and officials built a barrier to protect these resources. In two other field units we visited, officials were temporarily closing large areas to remediate existing OHV-related damage. For example, BLM's Phoenix District Office closed portions of the Lower Sonoran Desert National Monument to OHV use in June 2008. During the closure, officials said they intended to reseed with native plants to remediate OHV routes and reclaim areas disturbed by user-created routes. These officials indicated that much of the remediation work would be done by volunteers, including environmental groups and religious organizations. A majority of field unit officials also indicated that they have provided separate motorized and nonmotorized recreational opportunities. For example, the Siuslaw National Forest, which manages the Oregon Dunes National Recreation Area, has designated separate areas on the dunes for motorized and nonmotorized travel. 
When developing the boundaries between the motorized and nonmotorized areas, officials said they took advantage of natural barriers, such as roads and rivers, to make it easier for OHV riders to see which areas are designated as open or closed. About half of field unit officials indicated that they had designed OHV trail systems to provide varied opportunities, such as loops or training areas. For example, the Deschutes National Forest and BLM's Prineville Field Office in central Oregon worked together to develop several OHV route systems, including the Millican Valley system, with 255 miles of OHV routes, and the East Fort Rock system, with 318 miles of OHV routes. To help OHV users select an appropriate trail, the Forest Service and BLM have also classified each of the trails in these areas on the basis of difficulty. Similarly, BLM's Phoenix District Office developed the Boulders, a designated OHV trail system that includes a 22-mile OHV route through nearby mountains and a 10-acre staging area where OHV users can camp. To improve safety in the staging area, BLM officials developed a design that discourages riding OHVs within the staging area: they engineered the staging area in an irregular shape that reduces riding in that area and also provided a training area for children. A majority of field unit officials reported that they have monitored the effects of OHV use on their land, including the effects of noise or impacts on soils, water, air, and habitats. Only a few of the field units we visited, however, indicated that their procedures for monitoring went beyond casual observation of OHV impacts. For example, officials from the Manti-La Sal National Forest monitor OHV impacts by surveying the condition of existing trails, patrolling trail systems, and mapping new unauthorized trails. 
These officials are developing a database that will include qualitative information about user-created trails, such as type of off-road travel, related impacts, how officials addressed those impacts, and the measures officials would need to take to close an unauthorized route. These officials stated that compiling this information in a database will enable them to evaluate data, make decisions, and take appropriate action.

A Majority of Field Units Indicated They Cannot Manage Existing OHV Areas in a Sustainable Manner

Although field units are taking many management actions, a majority of field unit officials indicated that they cannot sustainably manage existing OHV areas; sustainable management would include having the necessary human and financial resources available to ensure compliance with regulations, educate users, maintain OHV use areas, and evaluate the existing OHV program. Most field unit officials who said they could not sustainably manage their existing OHV areas indicated that they have insufficient resources for equipment or staff for management and enforcement. Field unit officials from BLM were more likely than Forest Service or Park Service officials to indicate that they could not sustainably manage their existing OHV use (see fig. 16). About half the national forests that have published motor vehicle use maps, as required by the travel management rule, indicated that they could not sustainably manage the OHV route system that they designated. For example, an official from the Uncompahgre National Forest said that the forest's designated system of trails cannot be sustainably managed. The official further stated that the public's priority for OHV use is to maintain their long-established access to the forest, and they do not want the Forest Service to designate a sustainable system if doing so means losing long-established routes. 
A few field unit officials reported that their unit has a full-time OHV manager to, among other things, oversee OHV use, coordinate volunteers, and apply for state grants. Field units with a full-time OHV manager were more likely to report that they could sustainably manage their existing OHV use. Specifically, these field units reported taking more actions to manage OHV use compared with field units without a full-time OHV manager. For instance, field units with full-time OHV managers tend to leverage authorized outside resources, such as state grants, more extensively than units without full-time OHV managers. One BLM official said that dedicating staff to managing OHV use full-time could provide a benefit to overall land management. Specifically, he said the recreation planner at his unit has a wide range of responsibilities, including managing OHVs, permitting, signs, maintenance, campgrounds, and interpretation, and cannot do it all very effectively. He said that OHV management is a full-time position in itself, but since his unit has not been able to hire someone full-time, OHV management gets attention only as time allows.

Agencies Reported Facing Many Challenges in Managing OHV Use

Numerous issues, including insufficient staffing levels and financial resources, as well as enforcement of OHV regulations, were identified as challenges by field unit officials. Generally, a larger proportion of Forest Service and BLM field unit officials than Park Service field unit officials rated OHV management issues as great challenges (see table 2). Staff resources for enforcement, such as a limited number of officials and limited financial resources, were reported as a great challenge by most Forest Service and BLM field unit officials and by about half of Park Service officials. BLM headquarters officials explained that BLM has 195 uniformed law enforcement officers, which is an average of about 1 officer for every 1.2 million acres of land. 
For example, an official from BLM’s Grand Junction Field Office in Colorado told us that a single law enforcement officer patrols 1.3 million acres and that OHV users are aware of this minimal law enforcement presence. Although officials at some field units we visited said they would like to increase the number of law enforcement officers, they explained that even when they have approval for additional officers, they do not have enough funding to fill the positions. Officials from BLM’s Grand Junction Field Office also noted that law enforcement officers are the most expensive component of the workforce, because they require background checks, security clearances, extensive training, and expensive equipment such as firearms. Forest Service and BLM officials said they have attempted to mitigate their insufficient number of law enforcement officials. For example, the Forest Service has developed a Forest Protection Officer program, which allows non-law-enforcement staff to fulfill some law enforcement functions, such as issuing warnings and citations. Similarly, BLM officials said they attempted to mitigate enforcement challenges at particular BLM field offices by bringing in additional law enforcement officers from other BLM field offices, as well as from states and nearby counties. For example, BLM’s El Centro Field Office officials said that they try to bring in about 100 additional federal and local law enforcement officers for busy holiday weekends. On the other hand, a BLM law enforcement officer from the Grand Junction Field Office said that his deployment to the El Centro Field Office led to gaps in enforcement in Grand Junction during such weekends. A limited number of staff for OHV management was identified as a great challenge for a majority of Forest Service field unit officials, most BLM field unit officials, and some Park Service officials. 
Field staff who work on OHV issues work in various capacities, such as managing volunteers, creating route systems, maintaining routes, educating users, and writing state grant applications, but most units do not have such staff. For example, at BLM’s Phoenix District Office, OHV management staff maintain an ambassador program, which coordinates volunteers to educate users and promote safe, sustainable OHV use in the area. Managing this program requires one full-time manager plus 10 to 20 percent of the time of two additional staff. Officials from four field units we visited stated that although volunteers and partnerships can enhance OHV management, taking advantage of their labor requires a significant investment of management staff resources. Officials from two of the field units that we visited noted that, with additional OHV management staff, they could better leverage resources such as volunteers and state grants. Most BLM and Forest Service units reported insufficient financial resources as a great challenge to managing OHV use in their units, although only some Park Service units reported the same. Similarly, a majority of the field units we visited also cited insufficient financial resources as a challenge. For example, Forest Service officials from the Cleveland National Forest said that even though recreational OHV use has increased, funds allocated for recreation have failed to keep pace. In addition to staffing and financial challenges, a majority of field unit officials cited enforcement of OHV regulations as a great challenge as well. One reason for this challenge may be that law enforcement officers have many responsibilities including, among others, enforcing OHV regulations, controlling gang activity, preventing illegal drug activities, and responding to impacts on resources and public safety from illegal smuggling activities along the U.S. border. 
For example, BLM officials at the Lower Sonoran Desert National Monument said that border issues, including the smuggling of illegal drugs and people, have placed increased demands on law enforcement officers, reducing their capacity to deal with OHV recreation issues. Additionally, enforcement may be a challenge where a unit's lands are difficult for law enforcement officers to reach. For example, Park Service officials from Assateague Island National Seashore said that getting to portions of their OHV area is difficult because law enforcement officers must travel 12 miles over sand. Similarly, BLM officials at the Moab Field Office stated that because of the distance a law enforcement officer must travel, it can take several hours just to get to certain OHV areas in their unit, making enforcement in those areas difficult. Another challenge reported by agency officials in managing OHV use is variation in laws pertaining to OHV safety. Specifically, while agencies set minimum safety standards in their regulations—for example, by requiring vehicles to have brakes, spark arresters, and lights for night use—the regulations provide that state safety laws, as well as licensing and registration laws, generally apply to motorized vehicles on federal lands. For example, federal Forest Service regulations specify that riders may not operate a vehicle (1) without a valid license as required by state law, (2) in violation of any state noise emission standard, or (3) in violation of any state law regulating the use of vehicles off roads. But state laws regulating the use of OHVs vary significantly. For example, Utah generally prohibits children under 8 years old from riding OHVs on public land and requires children 8 to 15 years old to successfully complete an education course. In contrast, neighboring Colorado has not set minimum age requirements for riding OHVs on public land. A few units have created their own, area-specific rules for OHV use that supersede state laws. 
For example, BLM’s El Centro Field Office has special rules for OHV riders on the Imperial Sand Dunes. These rules require that vehicles have a flag at least 8 feet from the ground so that other riders can more easily see oncoming vehicles. In addition, the rules set speed limits in camping areas and prohibit other dangerous activities. An additional challenge faced by a majority of BLM officials and about half of Forest Service officials is installing and maintaining signs. For example, field unit officials said that signs are often shot at, pulled out, or driven over and that signs must frequently be replaced (see fig. 17). Officials at Forest Service headquarters told us that signs at some units are vandalized or taken down less than 48 hours after installation. Other challenges identified by field unit officials include managing varied public expectations about how public lands should be used and altering long-established OHV use patterns. A majority of Forest Service and BLM field unit officials, and some Park Service field unit officials, reported that managing varying expectations about how federal lands should be used is a great challenge. For example, BLM officials from the Moab Field Office said they received public input at 11 meetings when developing their recently finalized resource management plan, with both OHV user groups and environmental groups opposing aspects of the plan. Generally, user groups sought to open more areas to cross-country travel, while environmental groups generally opposed the designation of routes in areas they contended were not suitable for OHV use. Additionally, even within user groups, expectations can vary. For example, a BLM official from the Grand Junction Field Office said that while some hunters expect to use their OHVs to retrieve game, other hunters prefer that OHVs not be used, so that game are not scared away by the sound of OHVs. 
Finally, a majority of BLM field unit officials, about half of Forest Service field unit officials, and some Park Service field unit officials reported that altering long-established OHV use patterns is challenging. For example, Park Service officials at Big Cypress National Preserve said that the use of swamp buggies predates the 1974 creation of the preserve. Swamp buggies have been used for generations to travel to in-holdings and hunting camps, which are otherwise inaccessible because of deep mud, water, and dense foliage. According to Park Service officials, as OHV use has become more popular in the preserve, officials have recognized the need for comprehensive OHV management, yet changing long-established use patterns has been difficult.

Conclusions

Over the past 5 years, OHV use has increased on federal lands and has emerged as a national issue. Federal land management agencies have only recently begun to respond to this trend by revising their plans and how they manage OHV use, but they are having to do so in an environment of constrained budgetary and staff resources and other competing management priorities. Although they reported taking a variety of actions to manage OHV use in this environment, agency field unit officials reported that they cannot sustainably manage their OHV route systems. The likelihood that the Forest Service and BLM, in particular, will succeed in their efforts to enhance management of OHV use could be increased by improving the agencies' planning to include key strategic planning elements. Such enhancements could also help the agencies to more effectively address and manage some of the challenges that their field unit officials reported in managing OHV use on their lands, such as insufficient staffing levels and financial resources. 
In addition, developing more user-friendly maps and signs for their route systems and seeking more appropriate fines to deter violations of OHV regulations could provide all federal land users, including OHV users, a more enjoyable, quality experience while also potentially lessening environmental, social, and safety impacts resulting from OHV use.

Recommendations for Executive Action

To help provide quality OHV recreational opportunities while protecting natural and cultural resources on federal lands, we recommend that:

- the Secretary of Agriculture direct the Chief of the Forest Service to identify additional strategies to achieve the agency's goal of improving OHV management, as well as time frames for carrying out the strategies and performance measures for monitoring incremental progress; and

- the Secretary of the Interior direct the Director of BLM to enhance the agency's existing "Priorities for Recreation and Visitor Services" by establishing performance measures and time frames for carrying out its stated goals for OHV recreation.

Additionally, to improve communication with the public and enhance law enforcement efforts regarding OHV use on federal lands, we recommend that the Secretaries of Agriculture and the Interior direct the Forest Service and BLM, respectively, to take the following actions:

- enhance communication with the public about OHV trails and areas through, for example, developing user-friendly signs and maps to improve visitors' experiences; and

- examine fine amounts across various U.S. district courts to determine the range of fines for OHV-related violations and petition appropriate judicial authorities to make modifications where warranted.

Agency Comments and Our Evaluation

We provided the Departments of Agriculture and the Interior with a draft of this report for review and comment. 
The Departments of Agriculture and the Interior generally agreed with our findings and recommendations; their written comments appear in appendixes II and III, respectively. The departments also provided technical comments that we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretaries of Agriculture and the Interior, the Chief of the Forest Service, the Director of the Bureau of Land Management, the Director of the National Park Service, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

The objectives of our review were to determine (1) the trends in and status of off-highway vehicle (OHV) use on federal lands managed by the Department of Agriculture's Forest Service and the Department of the Interior's Bureau of Land Management (BLM) and National Park Service (Park Service) from fiscal year 2004 through fiscal year 2008, as well as the reported environmental, social, and safety impacts of OHV use; (2) the agencies' strategic planning for managing OHV use on federal lands; (3) actions taken by the agencies' field units in managing OHV use on their lands; and (4) current OHV management challenges facing these agencies. 
For this report, we defined an OHV, also commonly referred to as an off-road vehicle, as any motorized vehicle capable of or designed for cross-country travel or travel immediately on or over land. Examples of OHVs include but are not limited to 4 x 4 street-legal vehicles; all-terrain vehicles such as three-wheelers, four-wheelers, and side-by-sides; rock crawlers; sand rails; dune buggies; swamp buggies; and off-road motorcycles. We did not include personal watercraft, snowmobiles, aircraft, official agency use of OHVs, or use of street-legal vehicles on paved roads. To address our objectives, we collected and analyzed OHV-related documentation, including applicable executive orders and agency plans, regulations, and guidance. We also interviewed officials from Forest Service, BLM, and Park Service headquarters. To gain external perspective, we interviewed national headquarters representatives of various OHV user and environmental groups, including the Blue Ribbon Coalition, National Off-Highway Vehicle Conservation Council, Motorcycle Industry Council, Off-Road Business Association, Tread Lightly!, The Wilderness Society, and Center for Biological Diversity. In addition, we visited selected Forest Service, BLM, and Park Service field units and interviewed agency officials, and OHV user and environmental group representatives near some of those units, to obtain a better understanding of ongoing agency OHV management efforts. These field units, located in Arizona, California, Colorado, Florida, Maryland, Oregon, and Utah, were selected, using a nonprobability sample, on the basis of their geographic and ecological diversity. Table 3 lists these sites and the groups we interviewed. 
Because of the lack of historical and nationwide information about OHV use on federal lands, we also developed and administered a Web-based survey to gather land managers’ perspectives on the management and use of OHVs from fiscal year 2004 through fiscal year 2008 on Forest Service, BLM, and Park Service lands. The survey was administered to the entire population of National Forests and BLM field office units and to Park Service field units most likely to have OHV use, whether authorized or unauthorized. The survey included questions about the perceived trends in OHV use; potential environmental, social, and safety impacts of OHV use; how OHVs are being managed; the enforcement of OHV regulations; and challenges facing federal land managers in addressing OHV use. To develop the survey questions, we reviewed several national studies and a related GAO report to identify issues pertaining to OHV use on federal lands. We also analyzed agency documentation to identify the proper terminology used by the Forest Service, BLM, and Park Service. Furthermore, on the basis of interviews with officials at field units we visited, we identified issues related to OHV management. Finally, we examined related surveys administered to these agencies to identify relevant issues pertaining to OHV use on federal lands. The survey was pretested with potential respondents from national forests, BLM field offices, and Park Service units to ensure that (1) the questions were clear and unambiguous, (2) the terms we used were precise, (3) the survey did not place an undue burden on the agency officials completing it, and (4) the survey was independent and unbiased. In addition, the survey was reviewed three times by two separate internal, independent survey experts. We took steps in survey design, data collection, and analysis to minimize nonsampling errors. 
For example, we worked with headquarters and field officials at all three agencies to identify the appropriate level of analysis—congressionally designated forests and grasslands, national park units, and BLM field offices—and the appropriate survey respondents—field-level OHV managers (or if there was no OHV manager, the field-level recreation manager). To minimize measurement error that might occur from respondents interpreting our questions differently from our intended purpose, we extensively pretested the survey and followed up with nonresponding units and with units whose responses violated certain validity checks. Finally, to eliminate data-processing errors, we independently verified the computer program that generated the survey results. Our results are not subject to sampling error because we administered our survey to all OHV-relevant units of all three agencies. The survey was conducted using self-administered electronic questionnaires posted on the World Wide Web. We sent e-mail notifications to 480 respondents (177 national forest units, 136 BLM field offices, and 167 selected Park Service units). We also e-mailed each potential respondent a unique password and username to ensure that only members of the target population could participate in the survey. To encourage respondents to complete the survey, we sent an e-mail reminder to each nonrespondent about 2 weeks after our initial e-mail message. The survey data were collected from October 2008 through February 2009. We received a total of 478 responses that accounted for the 480 units surveyed, for an overall response rate of 100 percent. This “collective perspective” obtained from each of the agencies helps to mitigate individual respondent bias by aggregating information across the range of different viewpoints. 
Additionally, to encourage honest and open responses, in the introduction to the survey, we pledged that we would report information in the aggregate and not report data that would identify a particular unit. For purposes of characterizing the results of our survey, we identified specific meanings for the words we used to quantify the results, as follows: “a few” means between 1 and 24 percent of respondents, “some” means between 25 and 44 percent of respondents, “about half” means between 45 and 55 percent of respondents, “a majority” means between 56 and 74 percent of respondents, “most” means between 75 and 94 percent of respondents, and “nearly all” means 95 percent or more of respondents. This report does not contain all the results from the survey; the survey and a more complete tabulation of the results are provided in a supplement to this report (see GAO-09-547SP). We conducted this performance audit from February 2008 to June 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Agriculture Appendix III: Comments from the Department of the Interior Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, David P. Bixler, Assistant Director; Kevin Bray; Ellen W. Chu; Melinda Cordero; Emily Eischen; Ying Long; Janice Poling; Kim Raheb; Matthew Reinhart; Chris Riddick; and Rebecca Shea made key contributions to this report.
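The characterization scheme above maps fixed percentage ranges of respondents to descriptive terms. As an illustrative sketch only (the thresholds and terms come from the scheme above; the function name and the handling of 0 percent are assumptions added for illustration), the mapping could be expressed as:

```python
# Illustrative sketch: map a percentage of survey respondents to the
# descriptive term used in the report. Thresholds are taken from the
# report's characterization scheme; the function name and the fallback
# for 0 percent are assumptions for illustration.

def characterize(pct):
    if pct >= 95:
        return "nearly all"   # 95 percent or more
    if pct >= 75:
        return "most"         # 75-94 percent
    if pct >= 56:
        return "a majority"   # 56-74 percent
    if pct >= 45:
        return "about half"   # 45-55 percent
    if pct >= 25:
        return "some"         # 25-44 percent
    if pct >= 1:
        return "a few"        # 1-24 percent
    return "none"             # 0 percent (not defined in the scheme)

print(characterize(60))   # -> a majority
```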
Off-highway vehicle (OHV) use on lands managed by the Department of Agriculture's Forest Service and the Department of the Interior's Bureau of Land Management (BLM) and National Park Service (Park Service) has become popular over the past few decades. Some critics have asserted that OHV use causes adverse environmental, social, and safety impacts, while proponents have voiced concerns about retaining access to federal lands. GAO examined the (1) trends in and status of OHV use on federal lands, as well as reported environmental, social, and safety impacts; (2) agencies' strategic planning for managing OHV use; (3) actions taken by agency field units to manage OHV use; and (4) current OHV management challenges. GAO collected and analyzed related executive orders and agency OHV plans, regulations, and guidance; interviewed agency and interest group officials; and conducted a Web-based survey of all three agencies' field unit officials. OHV use on federal lands--both authorized and unauthorized--increased from fiscal year 2004 through fiscal year 2008, with varying environmental, social, and safety impacts, according to officials from all three agencies. All three agencies reported that OHVs are predominantly used on their lands for OHV recreation, such as trail and open-area riding. Most Park Service officials said that OHV use constitutes less than 10 percent of the recreation on their lands. Most officials from all three agencies also said that OHV-related environmental impacts occur on less than 20 percent of their lands, although a few said that such impacts occur on 80 percent or more of their lands. Most officials said that social and safety impacts, such as conflicts with nonmotorized users, occasionally or rarely occurred. 
Forest Service and BLM plans for OHV management are missing key elements of strategic planning, such as results-oriented goals, strategies to achieve the goals, time frames for implementing strategies, or performance measures to monitor incremental progress. For example, the Forest Service's strategic plan has no strategies to address key aspects of OHV management, such as communicating with the public or enforcing OHV regulations. Similarly, while BLM's recreation plan contains strategies addressing key aspects of OHV management, the agency has not identified time frames for implementing these strategies or performance measures for monitoring progress. The Park Service has no extensive planning for managing OHV use, but this absence seems reasonable given that its regulations limit OHV use to only a few units and OHV use is not a predominant recreational activity on its lands. While agencies' field units have taken many actions to manage OHV use, additional efforts could improve communication and enforcement. In particular, units have taken actions such as supplementing federal funds with outside resources like state grants, communicating with the public by posting signs and maps, and enforcing OHV regulations by occasionally patrolling OHV areas and writing citations for OHV violations. Few officials, however, indicated that their unit had signs and maps for nearly all of their OHV areas. Additionally, while most field unit officials said that they conduct enforcement activities, such as writing citations, about half indicated that fines are insufficient to deter illegal or unsafe OHV use. In addition, a majority of officials reported they cannot sustainably manage their existing OHV use areas; sustainable management would include having the necessary human and financial resources to ensure compliance with regulations, educate users, maintain OHV use areas, and evaluate the OHV program. 
Officials identified numerous challenges in managing OHV use, of which the most widely identified were insufficient financial resources, as well as staff for OHV management and enforcement. In addition, most officials cited enforcement of OHV regulations as a great challenge. Other challenges were maintaining signs, managing the public's varied expectations about how federal lands should be used, and changing long-established OHV use patterns.
Background According to USMC officials, the AAV has become increasingly difficult to operate, maintain, and sustain. As weapons technology and threat capabilities have evolved over the past four decades, the AAV is viewed as having capability limitations in the areas of water speed and land mobility, lethality, protection, and network capability. The AAV is a self-deployed tracked (non-wheeled) vehicle with three variants, each describing its intended function—Personnel, Command, and Recovery. The AAV has a water speed of approximately six knots, and needs to be deployed from within 7.4 nautical miles of the shore. This factor may represent a significant survivability issue not only for the vehicle's occupants, but also for naval amphibious forces that must move closer to potential threats on shore to support the vehicle. Over time, emerging threats—such as next generation improvised explosive devices—have changed the performance requirements for a vehicle that moves from ship to shore. According to DOD, the need to modernize USMC's capability of transitioning from ship to shore is essential. In response to the need for new and better capabilities, the USMC began development of the EFV in 2000. We reported on the EFV program in 2006 and 2010. The EFV was to travel at higher water speeds—around 20 knots—which would have allowed transporting ships to launch the EFV further from shore than the AAVs it was to replace. However, following the expenditure of $3.7 billion between fiscal years 1995 and 2011 and a 2007 breach of a statutory cost threshold, that program was restructured and subsequently, in 2011, canceled by DOD due to affordability concerns. DOD authorized the USMC to seek a new solution, emphasizing the need for cost-effectiveness and requiring the establishment of cost targets. The USMC was granted flexibility in tailoring its acquisition approach to achieve those goals. 
In 2011, the USMC completed initial acquisition documentation providing the performance requirements of a new replacement amphibious vehicle called the ACV. The ACV would be self-deploying with a water speed of 8 to 12 knots—permitting deployment beyond visual range of the shore—and would provide for sustained operations on shore with improved troop protection. The ACV would not, however, be required to achieve high water speed. An analysis of alternatives was completed for the ACV in the summer of 2012. This analysis identified two potential solutions for the ACV performance requirements. However, USMC leadership then requested that an affordability analysis be completed to explore the technical feasibility of integrating high water speed into the development of the ACV. According to DOD officials, the analysis indicated that achieving high water speed was technically possible but required unacceptable tradeoffs as the program attempted to balance vehicle weight, capabilities, and cost. Meanwhile, the USMC retained a requirement to provide protected land mobility in response to the threat of improvised explosive devices—a requirement the AAV could not meet due to its underbody design. In 2014 we reported that, according to program officials, the program office was in the process of revising its ACV acquisition approach based on this affordability analysis. In addition, DOD officials reported that the 2012 ACV analysis of alternatives might need to be updated or replaced based on potential changes to required capabilities. ACV Acquisition Approach Prioritizes Protected Land Mobility over Amphibious Capabilities Since we reported on the ACV acquisition in 2014, the USMC has adopted a new ACV acquisition approach consisting of three concurrent efforts that emphasize the requirement for protected land mobility in the near term and seek improved amphibious capabilities over time. 
This approach is a significant change from the more advanced amphibious capabilities sought for the ACV in 2011. According to USMC officials, the first effort is the AAV Survivability Upgrade Program, which plans to upgrade legacy AAV protection and mobility. The second effort subdivides into two increments, ACV 1.1 and 1.2. The third effort, referred to as ACV 2.0, focuses on technology exploration to attain high water speed capability. According to USMC officials, this acquisition approach was selected based on several factors, including (1) recognition that the ACV would spend much of its operating time on land, (2) shortfalls in the AAV's ability to meet protected land mobility requirements once on shore, and (3) technical and affordability challenges that preclude the development of a high water speed vehicle in the near term. Figure 1 provides information on these three concurrent efforts. AAV Survivability Upgrade Program Intends to Mitigate Current AAV Fleet Shortfalls in Force Protection The AAV Survivability Upgrade Program is expected to upgrade survivability and mobility capabilities for a portion of the existing AAV fleet, thereby providing increased protection against threats such as improvised explosive devices. The AAV is expected to remain in operation until 2035. According to DOD officials, planned upgrades include the addition of underbelly armor, blast-resistant seats, external fuel tanks, and other modifications, such as suspension upgrades, intended to maintain mobility given the weight of the extra armor. The upgraded AAVs are expected to retain the six-knot water speed of the legacy AAV. The AAV Survivability Upgrade Program plans to upgrade 392 of the fleet's 1,058 AAVs. The upgrades will be made to the AAV Personnel variant—the version of the AAV used to transport infantry. The estimated average procurement unit cost for the upgrades—the total procurement cost divided by the number of units to be procured—is $1.7 million (fiscal year 2014 dollars). 
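The report defines average procurement unit cost as total procurement cost divided by the number of units to be procured. A minimal sketch of that arithmetic follows; the total cost shown is derived from the report's two stated figures (392 vehicles at an estimated $1.7 million each, fiscal year 2014 dollars) and is not itself stated in the report:

```python
# Illustrative sketch of the report's definition: average procurement unit
# cost = total procurement cost / number of units to be procured.
# The total below is derived from the report's figures (392 units at an
# estimated $1.7 million each, FY2014 dollars), not stated in the report.

def avg_procurement_unit_cost(total_cost_millions, units):
    return total_cost_millions / units

total_cost_millions = 392 * 1.7   # roughly $666.4 million (derived)
print(round(avg_procurement_unit_cost(total_cost_millions, 392), 1))  # -> 1.7
```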
The program has passed Milestone B with anticipated initial operational capability—generally attained when some units in the force structure have received the upgraded vehicles and have the ability to employ and maintain them—in fiscal year 2019. ACV Acquisition Emphasizes Protected Land Mobility in the Near Term and Improved Amphibious Capabilities in the Second Iteration The first increment of ACV development—ACV 1.1—is a wheeled vehicle that is intended to provide enhanced protected land mobility and limited amphibious capability. ACV 1.1 is a continuation of the previously suspended Marine Personnel Carrier program. At an expected water speed of five knots, ACV 1.1 intends to offer swim speeds comparable to the AAV and is expected to swim from shore to shore, crossing obstacles such as rivers, rather than from ship to shore. The ACV 1.1 is not planned to have a self-deployment capability and, as a result, will rely on the assistance of connectors to move from ship to shore. The vehicle is expected to feature a troop-carrying capacity of 10 infantry, with the objective of expanding this capacity to 13 infantry. According to program officials, the ACV acquisition will be informed by both a 2008 analysis of alternatives done for the Marine Personnel Carrier program as well as the 2012 ACV analysis of alternatives. The USMC has recently completed an update to these two analyses—the 2014 ACV analysis of alternatives—with a focus on cost and affordability that, according to DOD officials, needed to be updated to reflect the current approach. The ACV 1.1 will likely be used concurrently with upgraded AAVs that are expected to provide amphibious capabilities that complement the enhanced protected land mobility sought by ACV 1.1. Table 1 provides a summary of selected capabilities of the legacy AAV, upgraded AAV, and ACV 1.1. 
The 2012 analysis of alternatives done to support the ACV acquisition considered a vehicle based on the Marine Personnel Carrier, with capabilities similar to the ACV 1.1, and concluded that the vehicle was not an effective alternative to fill the identified ACV water mobility capability gaps. The analysis found that the vehicle performed well in land-based scenarios, but as a non-amphibious armored personnel carrier, did not perform as well in amphibious assault scenarios. In addition, the vehicle’s reliance on a connector craft to travel from ship to shore would extend the time necessary to complete force landings and achieve objectives in amphibious scenarios. Since a connector craft, such as the ship to shore connector currently being developed by the Navy (see figure 2), would have to transport these vehicles as well as personnel and other vehicles and equipment, it would also increase the number of connector loads and connector crew time. The analysis found that the increased use of connectors would result in a significant delay relative to self-deploying alternatives. Finally, the vehicle had less capacity than the other vehicles assessed in the analysis. The vehicle held nine infantry—similar to ACV 1.1’s threshold capacity of 10—while the other assessed vehicles held 17. According to the analysis, reduced capacity would require a higher number of vehicles to transport an infantry battalion. The larger number of vehicles would then require additional space on transportation vessels, potentially displacing other cargo and impacting logistical support and manning, as well as increasing the number of these vehicles required in the field. However, according to the analysis, the resulting vehicle dispersion would reduce infantry exposure to improvised explosive devices and increase the number of vehicles available to support a counterattack. The USMC plans to acquire 204 ACV 1.1s and anticipates achieving initial operational capability in fiscal year 2020. 
According to program officials, the current estimated average procurement unit cost is between $3.8 million and $7.2 million (fiscal year 2014 dollars). The ACV 1.1 effort will enter the acquisition process at Milestone B, currently scheduled for the first quarter of fiscal year 2016. According to DOD officials, the planned use of existing, non-developmental technologies in ACV 1.1 reduces acquisition risk and facilitated the decision to enter the acquisition process at Milestone B with the goal of fielding a solution more quickly. The program office issued a Request for Proposal in the second quarter of fiscal year 2015 and plans to award contracts to two vendors at Milestone B and require each vendor to provide 16 prototype vehicles. DOD officials stated that the large number of prototypes will facilitate and expedite the testing process, allowing multiple tests to take place concurrently, and allowing testing to continue in the event of a prototype breakdown. They indicated that the USMC plans to begin testing the ACV 1.1’s swim capability and other factors in fiscal year 2017. The two contracts are to run through the engineering and manufacturing development phase of the acquisition process, at which point USMC anticipates potentially down selecting to a single contractor. Figure 3 provides a notional drawing of the ACV 1.1. The second increment of ACV development—ACV 1.2—aims to improve amphibious capability. Program officials anticipate that ACV 1.2 will demonstrate amphibious capability that matches the legacy AAV, including the ability to self-deploy and swim to shore without the assistance of connector craft. According to DOD officials, ACV 1.2 will be based on ACV 1.1 testing and some 1.1s will be retrofitted with ACV 1.2 modifications. The USMC plans to acquire approximately 490 ACV 1.2s with initial operational capability scheduled for fiscal year 2023. 
In addition, the USMC plans to complete fielding of all ACV 1.1s and 1.2s, as well as the upgraded AAVs between the years 2026 and 2028. According to DOD officials, the changes made for the ACV 1.2 increment may be done through improvements within the same program, or ACV 1.2 may be a separate program from ACV 1.1. This determination has not yet been made. In previous reports, we have found that managing weapon systems that are being developed in increments as separate acquisition programs with their own cost and schedule baselines facilitates management and testing and helps avoid unrealistic cost estimates. This practice can result in more realistic long-range investment funding and more effective resource allocation. Technology Exploration Is Underway for High Water Speed Capability for Use in a Possible ACV 2.0 The third effort, referred to as ACV 2.0, focuses on technology exploration to attain high water speed capability. According to DOD, high water speed remains a critical capability. Technology exploration efforts are pursuing design options that may enable high water speed capability without accruing unacceptable trade-offs in other capabilities, cost or schedule. According to USMC officials, vehicle weight is the key barrier to achieving high water speed. Current technology exploration efforts include some technology from the canceled EFV program and focus primarily on various approaches addressing this weight challenge, including improving the technology that lifts the vehicle body onto plane and reducing the vehicle weight. According to DOD officials, the results of this high water speed research, knowledge gained from fielding the ACV 1.1 and 1.2, and information from the naval surface connector strategy are expected to inform the development of a replacement for the AAV fleet. According to officials, ACV 2.0 is a conceptual placeholder for that future replacement decision, which is expected to occur in the mid-2020s. 
High water speed capability may ultimately be achieved through an amphibious vehicle or a connector craft that will provide high water speed for vehicles without that capability. Upcoming Activities Will Permit Further Analysis to Determine Use of Best Practices Our prior work on best practices has found that successful programs take steps to gather knowledge that confirms that their technologies are mature, their designs stable, and their production processes are in control. The knowledge-based acquisition framework involves achieving the right knowledge at the right time, enabling leadership to make informed decisions about when and how best to move into various acquisition phases. Successful product developers ensure a high level of knowledge is achieved at key junctures in development, characterized as knowledge points. During the initial stages of an acquisition process, referred to as Knowledge Point 1, best practices recommend ensuring a match between resources and requirements. Achieving a high level of technology maturity and a preliminary system design backed by robust systems engineering is an important indicator of whether this match has been made. This means that the technologies needed to meet essential product requirements have been demonstrated to work in their intended environment. In addition, the developer has completed a preliminary design of the product that shows the design is feasible. Figure 4 further describes the three knowledge points and identifies the ACV 1.1 acquisition's status within the DOD acquisition process. The ACV 1.1 acquisition has yet to reach the first knowledge point, limiting our ability to determine how fully the acquisition will adopt the best practices knowledge-based framework. However, our review of the planned acquisition approach for ACV 1.1 has identified both the use of—and a deviation from—best practices. The ACV acquisition's incremental approach to development is consistent with best practices. 
We have previously reported that adopting a more evolutionary, incremental strategy that delivers proven and operationally suitable capabilities when available—but acknowledges that more time is needed to deliver the full capabilities—can enable the capture of design and manufacturing knowledge as well as increase the likelihood of success in providing timely and affordable capability. The ACV acquisition demonstrates this evolutionary approach, seeking smaller increases in capability with improvements planned over time. In contrast, the canceled EFV program sought significant increases in capability in a single development process. The adoption of an incremental approach has helped the program progress towards striking the balance between customer needs and resources (e.g., technologies, cost and schedule) that is sought at Knowledge Point 1. The ACV program has demonstrated a willingness to trade customer needs—such as high water speed in the near term—and utilize mature technologies in order to identify an affordable solution that is available in the necessary time frames. The ACV acquisition’s pursuit of high water speed capabilities via technology exploration is also aligned with best practices. In previous reports, we have found that DOD should separate technology development from product development, and fully develop technologies before introducing them into the design of a system. A science and technology environment is more conducive to the ups and downs normally associated with the discovery process. This affords the opportunity to gain significant knowledge before committing to product development and has helped companies reduce costs and time from product launch to fielding. The ACV 1.1 acquisition is planning to hold its preliminary design review 90 days after the Milestone B decision. According to program officials, the program office is seeking a waiver to permit this approach. 
Best practices recommend that the preliminary design review is held prior to Milestone B to increase the knowledge available to the agency at development start. In 2012, we reported that beginning product development and setting the acquisition baseline before completing this review increases technical risks and the possibility of cost growth by committing to product development with less technical knowledge than recommended by acquisition best practices and without ensuring that requirements are defined, feasible, and achievable within cost and schedule constraints. According to DOD officials, the review will be held after Milestone B because no contracts will have been awarded prior to that time. In addition, they stated that the use of non-developmental technology will reduce acquisition risks and result in a high level of knowledge prior to the Milestone B decision. However, it is the program office’s intent that the engineering and manufacturing development phase be contracted under a hybrid contract that includes cost-plus-fixed-fee elements. Cost-plus-fixed-fee contracts are appropriate when uncertainties in requirements or contract performance do not permit the use of fixed-price contract types. These contracts are considered high risk for the government because of the potential for cost escalation. The selection of this contract type may denote some program risk; however, we will not be able to determine the extent of the risk and its potential impacts to the acquisition process until further information is available. As the acquisition moves forward, we will continue to monitor the ACV effort by assessing its use of acquisition best practices. According to program officials, a number of program documents, including a final report on the recent ACV 2014 analysis of alternatives update, were finalized to support a key program meeting that took place in March 2015. We have identified a number of best practices for the development of analyses of alternatives. 
These analyses can vary in quality, which can affect how they help position a program for success. In September 2009, we concluded that many analyses of alternatives do not effectively consider a broad range of alternatives for addressing a need or assess technical and other risks associated with each alternative. We have begun preliminary analysis of the existing 2008 Marine Personnel Carrier and 2012 ACV analyses of alternatives, including assessment of the analyses against our previously identified best practices and cost estimation criteria. We have recently received the final report, the 2014 ACV analysis of alternatives, reflecting how the prior analyses have been updated. Other documents completed recently include an acquisition strategy, results of a system requirements review, and the finalized document providing key acquisition requirements. These documents will permit us to conduct a more robust analysis and assessment of the ACV acquisition's use of best practices. Agency Comments and Our Evaluation DOD provided written comments on a draft of this report. The comments are reprinted in appendix I. In commenting on a draft of this report, DOD stated that it believes its efforts on this program are aligned with our best practices and it will continue to monitor the program and ensure that mitigations are in place to address potential risk areas. Given that we have not been able to conduct a robust analysis of key documents, including the analysis of alternatives, we cannot yet assess how well the program is aligned with best practices. DOD also provided technical comments that were incorporated, where appropriate. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Secretary of the Navy; and the Commandant of the Marine Corps. This report also is available at no charge on GAO's website at http://www.gao.gov. 
Should you or your staff have any questions on the matters covered in this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Appendix I: Comments from the Department of Defense Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgements Key contributors to this report were Bruce H. Thomas, Assistant Director; Betsy Gregory-Hosler, analyst-in-charge; David Richards; Marie Ahearn; Brian Bothwell; Susan Ditto; Jennifer Echard; Dani Greene; Jennifer Leotta; Kenneth Patton; Karen Richey; and Ozzy Trevino.
The National Defense Authorization Act for Fiscal Year 2014 mandated GAO to review and report annually to the congressional defense committees on the ACV program until 2018. In April 2014 GAO produced the first of the mandated reports describing the status of the Marine Corps' efforts to initiate an ACV program. This second mandated report discusses (1) the current ACV acquisition approach and (2) how the ACV acquisition approach compares to acquisition management best practices. To conduct this work, GAO reviewed program documentation and other materials for the ACV acquisition, including Acquisition Decision Memorandums, relevant analyses of alternatives, and a briefing on the most recent analysis of alternatives update. GAO also reviewed documentation and budget information for the AAV Survivability Upgrade Program and the Marine Personnel Carrier program. GAO identified acquisition best practices based on its prior body of work in that area and DOD guidance. GAO also interviewed program and agency officials. Since GAO reported on the Amphibious Combat Vehicle (ACV) acquisition in 2014, the Marine Corps has adopted a new ACV acquisition approach consisting of three concurrent efforts that emphasize the requirement for improved protection from threats such as improvised explosive devices in the near term with improved amphibious capabilities over time. The first of the three efforts, the Assault Amphibious Vehicle (AAV) Survivability Upgrade Program, plans to upgrade legacy AAV protection and mobility. The second effort subdivides into two increments, ACV 1.1 and ACV 1.2. ACV 1.1 is a continuation of a previously suspended Marine Personnel Carrier program that intends to provide enhanced protected land mobility and limited amphibious capability. Testing on the ACV 1.1 will inform the development of the ACV 1.2, with the intent that the ACV 1.2 will demonstrate improved amphibious capability and at a minimum, achieve parity with the legacy AAV. 
The third effort, referred to as ACV 2.0, focuses on technology exploration to attain high water speed capability. Results of this high water speed research are intended to further inform the development of a replacement for the AAV fleet. GAO's analysis of the ACV 1.1 planned acquisition approach has demonstrated the Marine Corps’ use of, and deviation from, best practices; however, ACV 1.1 is still in the initial stages of the acquisition process, limiting our ability to determine how fully this approach will adopt a best practices knowledge-based framework. GAO’s prior work on best practices has found that successful programs take steps to gather knowledge that confirms that their technologies are mature, their designs stable, and their production processes are in control. The knowledge-based acquisition framework involves achieving the right knowledge at the right time, enabling leadership to make informed decisions about when and how best to move into various acquisition phases. Specifically, the Marine Corps' incremental approach for the ACV acquisition is consistent with best practices and can increase the likelihood of success. The adoption of an incremental approach has helped the program progress towards achieving the balance—that is sought in accordance with best practices—between customer needs and resources (e.g., technologies, cost, and schedule). In addition, the ACV acquisition’s pursuit of high water speed capabilities via technology exploration is also aligned with best practices. In previous reports, GAO has found that DOD should separate technology development from product development, and fully develop technologies before introducing them into the design of a system. In contrast, the program plans to hold the ACV 1.1 preliminary design review after Milestone B—the decision point allowing entry into system development—which is a deviation from best practices that can increase technical risk. 
According to DOD officials, this approach was selected because no contracts will have been awarded prior to Milestone B and the use of non-developmental technology will reduce acquisition risks and result in a high level of knowledge prior to the Milestone B decision. The recent completion of key documents—including an updated analysis of alternatives—will permit a more robust analysis and assessment of the ACV program’s use of additional acquisition best practices.
Background The U.S. railroad industry consists mostly of freight railroads but also serves passengers. Freight railroads are divided into classes that are based on revenue. Class I freight railroads earn the most revenue and generally provide long-haul freight service, while the smaller freight railroads— those in Classes II and III—earn less revenue and generally haul freight shorter distances. Amtrak provides intercity passenger rail service, while commuter railroads serve passengers traveling within large metropolitan areas. Freight railroads own most of the track in the United States, with a notable exception being the Northeast Corridor between Washington, D.C., and Boston, Massachusetts, which Amtrak predominantly owns. Railroads grant usage rights to one another, and passenger trains share track with freight railroads. While freight and passenger railroads share many characteristics, there are also key differences in their composition and scope (see table 1). The railroad industry also includes companies that produce railroad supplies, including locomotives, train cars, track, signal equipment, and related components, and national associations that work with and represent railroads. AAR, which primarily represents freight railroads (including all seven Class I freight railroads), as well as Amtrak and some other railroads, develops standards for the implementation of technology, manages the implementation of industrywide technological programs, and assesses the railroads’ needs for safety and technological development. It also works to develop new technologies at TTCI near Pueblo, Colorado, an FRA-owned railroad research facility operated by AAR through a contract. The American Short Line and Regional Railroad Association represents Class II and Class III freight railroads in legislative and regulatory matters. The American Public Transportation Association represents commuter railroads and develops standards for their use of technology. The U.S. 
railroad environment consists of train vehicles (rolling stock) and infrastructure, such as track, bridges and tunnels, switches and signals, and centralized offices with dispatchers (see fig. 1). Railroad accident rates have generally declined from 2000 to 2009. During that time, human factors and problems with track were the leading causes of rail accidents, according to our analysis of FRA data (see fig. 2). These problems can lead to train derailments or collisions, which can result in significant damage and loss of life. For example, the 2005 accident in Graniteville, South Carolina, was attributed to a switch being left in the wrong position, an example of human error, while the 2008 collision between freight and passenger trains in the Chatsworth neighborhood of Los Angeles, California, was the result of a commuter train passing through a red signal at which it should have stopped, an error also likely attributable to human factors. Track-related causes of accidents include irregular track geometry, which occurs when rails are misaligned or spaced too far apart; breaks in the rail or in the joints that connect rail segments; and damage to railroad bridges, among other causes. Such defects can lead to train derailments. Although the rate of accidents decreased from 2000 through 2009, injuries and fatalities have fluctuated, with the largest spikes tied to specific incidents. For example, injuries increased dramatically in 2002 due to one accident in North Dakota in which 1,441 people were injured in a derailment, caused by track problems, that resulted in the release of hazardous materials (see fig. 3). The number of fatalities per year from 2000 through 2009 ranged from a low of 4 in 2003 and 2009 to a high of 33 in 2005, the year of the accident in Graniteville, South Carolina, that killed 9 people. The second-highest year for fatalities was 2008; that year, there were 27 fatalities, including 25 from the accident in Los Angeles, California. 
In its role as federal regulator and overseer of railroad safety, FRA prescribes and enforces railroad safety regulations and conducts R&D in support of improved railroad safety and rail transportation policy. Within the agency, FRA’s Office of Railroad Safety promulgates and enforces railroad safety regulations, including requirements for track design and inspection; signal and train control systems; grade-crossing warning device systems; mechanical equipment, such as locomotives and freight cars; and railroad operating practices. For example, FRA’s regulations for track and equipment include detailed, prescriptive minimum requirements, such as wheel safety requirements and formulas that determine the maximum allowable speeds on curved track. In developing most of its regulations, FRA seeks input from the railroad industry and other organizations through its Railroad Safety Advisory Committee. FRA’s Office of Research and Development sponsors and conducts R&D of new rail safety technologies in support of FRA’s safety mission. This work contributes information used to support FRA’s development of regulations, standards, and best practices as well as encourages the development and use of new safety technologies. FRA’s R&D work is done collaboratively with industry and universities and is also supported by the Volpe Center, which is DOT’s transportation research center in Cambridge, Massachusetts. Although its role has traditionally been that of a regulatory agency, recently enacted laws have expanded FRA’s role in other areas. The Passenger Rail Investment and Improvement Act of 2008 authorized over $3.7 billion for three federal programs for high-speed rail, intercity passenger rail congestion, and capital grants, while the American Recovery and Reinvestment Act of 2009 appropriated $8 billion for these three programs. 
By creating a significant grant-making role for funding the development of high-speed passenger rail, these laws effectively transformed what was essentially a rail safety organization to one that is making multibillion-dollar investment choices while also carrying out its safety mission. Regarding rail safety technologies, the Rail Safety Improvement Act of 2008 directs FRA to oversee railroads’ implementation of PTC and other technologies. Specifically, the act requires passenger and major freight railroads to implement PTC by the end of 2015, with FRA playing a role as overseer of the industry’s implementation through rulemaking and review of railroads’ implementation plans. The act also directs FRA to require railroads to improve safety through the development of risk-reduction programs that include plans for implementing new rail safety technologies and to create a grant program to fund the deployment of rail safety technologies, authorized at $50 million per fiscal year from 2009 through 2013 (see table 2). PTC is a communication-based system designed to prevent some accidents caused by human factors, including train-to-train collisions and derailments caused by exceeding safe speeds. Such a system is also designed to prevent incursions into work zones and movement of trains through switches left in the wrong position. PTC achieves these capabilities via communication with various components, namely locomotive computers, devices along the track (known as wayside units), and dispatch systems in centralized office locations (see fig. 4). New data radios are being developed to enable wireless communication between locomotives and wayside units. 
Centralized offices and locomotives have access to a track database with information about track routes and other data, including speed restrictions, track configuration and topography, and the location of infrastructure such as switches and signals that indicate places where a train’s speed may need to be enforced by PTC. Using this information, locomotive computers can continuously calculate a train’s safe speed. If the train exceeds that speed, the PTC system should enforce braking as necessary. By preventing trains from entering a segment of track occupied by another train or from moving through an improperly aligned switch, PTC would prevent accidents such as those mentioned above that occurred in Los Angeles, California, and Graniteville, South Carolina. While the law does not require railroads to implement the same PTC system, it does require that railroads’ PTC systems be interoperable, which means that the components of different PTC systems must be able to communicate with one another in a manner to provide for the seamless movement of trains as they cross track owned by different railroads that may have implemented different PTC systems. Train control systems similar to PTC already exist in other countries. For example, a system to automatically stop trains if a train operator fails to stop a train at a stop signal has been widely used in Japan since the 1960s, although this system has been upgraded over time to provide advanced warning of the need to slow a train and automatically apply train brakes in such situations. A more advanced system to continuously calculate a train’s safe speed—similar to the capability that PTC is designed to achieve—is being implemented on the country’s high-speed passenger rail lines. In Europe, countries use various signal and train control systems, presenting technical and logistical challenges for trains that travel between countries. 
To establish interoperability among these systems, the European Union has embarked on an effort to implement the European Rail Traffic Management System, a common signaling and train control system, as well as a radio communications network, that would overlay countries’ existing signal and train control systems to establish interoperability among them. Like PTC, this system relies on a locomotive computer to calculate a train’s safe speed and enforce that speed on the basis of certain information, such as a train’s movement authority, the track speed limit, and the position of signals ahead of the train. In addition to the implementation plans outlined in the Rail Safety Improvement Act of 2008, FRA’s subsequent PTC regulations also require railroads to submit PTC development plans and PTC safety plans. These three plans are related, and FRA requires different information for each of them: PTC development plan: To get approval for the type of PTC system a railroad intends to install, the railroad must submit to FRA a plan describing the PTC system the railroad intends to implement and the railroad operations the PTC system will be used with. Following FRA’s review of this plan, if approved, the agency would issue the system described in the plan a “type approval,” which is a number assigned to a particular PTC system indicating FRA agreement that the system could fulfill the requirements of the PTC regulations. PTC implementation plan: This plan describes the functional requirements of the proposed PTC system, how the PTC system will achieve interoperability between the host railroad (the railroad that owns the track) and the tenant railroads (those railroads that operate on the host’s track), how the PTC system will be installed first on track routes with greater risk, the sequence and schedule for installing PTC on specific track segments, and other information about PTC equipment to be installed on rolling stock and along the track. 
The law required railroads to submit these plans by April 16, 2010, and FRA to review and approve or disapprove them within 90 days. PTC safety plan: This plan must include information about planned procedures for testing the system during and after installation, as well as information about safety hazards and risks the system will address, among other requirements. By approving a safety plan, FRA certifies a railroad’s PTC system, which must happen before a railroad can operate a PTC system in revenue service. FRA set no specific deadline for railroads to submit this plan. In its PTC rulemaking, FRA also included requirements for implementing PTC on high-speed passenger rail lines, with trains operating at or above 90 miles per hour, that specify additional safety functions for PTC systems installed for trains operating at these higher speeds. FRA’s High-Speed Rail Safety Strategy, released in November 2009, acknowledges the importance of implementing PTC for high-speed passenger rail operation and also calls for the evaluation of other specific technologies to determine their suitability for reducing risk for high-speed rail. Railroad Industry Has Made Progress in Developing PTC, but Key Tasks Remain to Complete Implementation Railroad Industry Has Made Progress in Developing PTC Components, and Railroads Are Preparing for Widespread Implementation Amtrak and the four largest Class I freight railroads have led PTC development efforts, and most other railroads plan to implement PTC systems developed by these railroads. Amtrak worked with suppliers to develop PTC for the Northeast Corridor and began installation in 2000. Since that time, Amtrak has made improvements to this system, and FRA certified Amtrak’s PTC system on the Northeast Corridor in May 2010—the first PTC system FRA certified under the PTC rules it issued in January 2010. Amtrak has also installed a different PTC system on a portion of track in southern Michigan. 
The four largest Class I freight railroads have identified suppliers of PTC technology and are working with these suppliers to develop PTC components; however, they have not yet installed PTC, except for some limited pilot installations. Although there are differences between the PTC systems being installed by Amtrak and those being installed by the freight railroads, they are designed to achieve the same basic functions. The PTC systems being developed by the four largest Class I freight railroads differ from PTC systems that exist in other countries and on some Amtrak routes. According to AAR officials, existing PTC systems were designed specifically for passenger rail operations and would not address the needs of the U.S. freight railroads. For example, the system that Amtrak uses on the Northeast Corridor combines PTC speed enforcement capabilities with an existing onboard system that provides track status information, such as signal status, to the locomotive engineer. Not all of the freight railroads currently use such an onboard track information system, and such a system would not be feasible to use on segments of track that lack signals, which accounts for about 13,000 miles of track owned by Class I freight railroads that requires PTC. Additionally, in developing new PTC systems, railroads must ensure that their systems are interoperable among the many different railroads that plan to use them. To achieve interoperability, the four largest Class I freight railroads created the Interoperable Train Control Committee to develop system specifications and standards for interoperability, including protocols for how PTC components should function and communicate with each other as part of an overall system. To achieve interoperability with the Class I freight railroads’ systems, Amtrak will equip its locomotives that operate on freight-owned track with PTC radios capable of operating on the same frequencies as those used by the freight railroads. 
Components of PTC systems being developed by Class I freight railroads are in varying stages of development, with some components currently being produced; however, these components cannot be used or fully tested without software, which remains under development: Wayside units: These units consist of devices installed at signals, switches, and other locations along the track. The units will monitor the status of signals and switches and communicate that information to locomotives directly or through railroads’ centralized office systems. Hardware for these units is currently available and being tested by railroads. Locomotive computers: These computers will provide centralized offices information on the train’s location. Based on the status of upcoming signals or switches—which will be communicated to the locomotive by the wayside units—the locomotive computer will calculate the train’s braking distance and enforce braking, if needed, to slow or stop a train to comply with speed restrictions and ensure it does not enter a segment of track occupied by another train or a work crew. Locomotive computers are available for railroads to install on newer locomotives. However, railroad associations told us that older locomotives that lack electronic systems will have to be upgraded before such computers and other PTC components can be installed on them. Data radios: The freight railroads’ PTC systems require the use of new data radios installed on locomotives and wayside units to enable PTC communication. Prototype specifications for these radios are still under development, and the railroad industry estimates that these radios will be in production starting in early 2012. The four largest Class I freight railroads share ownership in the company that is developing PTC data radios and jointly purchased radio spectrum to enable PTC communications. 
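The braking-distance enforcement that the locomotive computer performs, as described above, can be sketched with basic stopping-distance physics. This is an illustrative sketch only, not an actual PTC algorithm: the deceleration rate, safety margin, and function names are hypothetical values chosen for illustration.

```python
# Illustrative sketch (not an actual PTC implementation) of how an onboard
# computer might decide whether to enforce braking. Deceleration rate and
# safety margin are hypothetical values, not figures from any PTC system.

def braking_distance_m(speed_mps: float, decel_mps2: float = 0.5) -> float:
    """Distance needed to stop from speed_mps at a constant deceleration."""
    return speed_mps ** 2 / (2 * decel_mps2)

def must_enforce(speed_mps: float, dist_to_target_m: float,
                 margin_m: float = 150.0) -> bool:
    """Enforce braking if the train could not otherwise stop short of the
    target (e.g., a red signal or a misaligned switch) plus a safety margin."""
    return braking_distance_m(speed_mps) + margin_m >= dist_to_target_m

# A train at 30 m/s (about 67 mph) needs 900 m to stop at 0.5 m/s^2.
print(must_enforce(30.0, 1200.0))  # prints False (authority is far enough)
print(must_enforce(30.0, 1000.0))  # prints True (system would apply brakes)
```

In a real system the onboard computer would continuously recompute this check from the track database (grade, curvature, speed restrictions) and wayside status messages; the sketch reduces that to a single constant-deceleration comparison.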
For these components to operate as a system, PTC software is necessary to perform all train control functions, including determining a train’s location and calculating a train’s braking distance. Complete PTC systems cannot be tested and implemented until software is finalized. PTC software is still under development, and railroad industry officials told us they expect it to be available sometime in 2011. Forty-one railroads submitted their required PTC implementation plans to FRA in 2010, comprising the 7 Class I freight railroads, 2 Class II freight railroads, 9 Class III freight railroads, Amtrak, and 22 commuter railroads. In these plans, railroads were required to provide information about the extent to which they will implement PTC, provide a schedule for progressive implementation, and prioritize implementation on the basis of risk. Railroads have begun implementing PTC in some locations. Amtrak has installed PTC on just over 200 miles of the 363 miles it owns along the Northeast Corridor and plans to expand its system along the corridor and its connections. It has also installed PTC on about 60 miles of track in southern Michigan and will extend this system along the full 97 miles of track it owns in that area. Class I freight railroads have selected the PTC systems they intend to implement and have informed FRA of their selections by submitting PTC development plans. Some freight railroads and commuter railroads that operate on the Northeast Corridor are already equipped with Amtrak’s PTC system. Commuter railroads that connect with the corridor will equip their additional rail lines with this system. Other freight and commuter railroads that are required to implement PTC have not yet begun implementation. Many of these commuter railroads and Class II and Class III freight railroads plan to implement the same systems being developed by the Class I freight railroads. 
As we have previously stated, components for PTC systems being developed by the Class I freight railroads are not yet available. Officials from the American Public Transportation Association and the American Short Line and Regional Railroad Association—which represent commuter railroads and Class II and Class III freight railroads, respectively—told us that those railroads are awaiting these components to begin installation of PTC. While only a small number of Class II and Class III freight railroads are required by the Rail Safety Improvement Act of 2008 to implement PTC on their property, FRA regulations require some additional Class II and Class III freight railroads to install PTC on their locomotives if they operate on track equipped with PTC and share that track with passenger trains. Key Steps Remain to Implement PTC by 2015, with a Potential for Delay By law, the rail industry must complete development, testing, and full implementation of PTC on most major routes within 5 years. Progress has been made by railroads and suppliers in preparing to implement PTC, but many actions must still be taken to achieve full implementation of PTC, and they must be completed in a specific sequence (see fig. 5). Since PTC implementation requires the completion of a specific sequence of steps, any delay in one step could affect the entire implementation schedule, potentially resulting in railroads missing the implementation deadline, which would delay achieving the intended safety benefits of PTC. As we have previously discussed, all PTC components for the Class I freight railroads’ systems are not yet developed. In addition, the development of PTC software and new data radios requires the development of interoperability standards, which the four largest Class I freight railroads and AAR have not yet finalized. 
Specifically, AAR officials told us that the Interoperable Train Control Committee had expected to complete all of these standards by July 2010, but as of August, only 3 of the approximately 40 standards needed were ready. Furthermore, AAR officials told us in September that although the committee continues to make progress in developing these standards and has consolidated some standards to cut down the total needed, it has not set a new date for when it expects to complete this effort. AAR officials explained that delays are due to the complexity and amount of work that must be completed. FRA officials monitoring this effort told us in September that they do not know when the standards will be completed, and that they have some concerns about the potential for the delay in developing these standards to impact railroads’ ability to procure PTC components in a timely manner. FRA officials also said that although it is their understanding that the remaining standards have been drafted and are undergoing industry review, they expect this process to last at least through the first quarter of calendar year 2011. System complexity was a factor that led to delays in an earlier PTC development effort. In 2001, FRA, Amtrak, the Union Pacific Railroad, AAR, and the State of Illinois created the North American Joint Positive Train Control Project, an objective of which was the development of interoperable PTC standards. However, this objective was not achieved by the time the project came to a close in 2006. Specifically, system testing revealed that a significant amount of software development would be required for the PTC system to be compatible with normal railroad operations, which FRA concluded would require several additional years to complete. 
Railroads currently expect that key PTC components will be available by 2012, but it is uncertain whether this can be achieved, given the delays in developing the interoperability standards and the current lack of software for PTC components. Any delays in component development would consequently delay pilot installations for field testing. The lack of developed components raises questions about the technological maturity of the Class I freight railroads’ PTC systems. If the railroad industry is unable to develop fully functional components within the expected time frame, it is possible that testing and installation of these components could not be completed by the 2015 deadline. Our prior work examining the development of military weapon systems has shown that demonstrating a high level of maturity before allowing new technologies into product development programs increases the chance for successful implementation, and that, conversely, technologies that were included in a product development program before they were mature later contributed to cost increases and schedule delays. Once PTC components are developed, railroads must test them in the field to ensure that PTC systems function properly and that components of PTC systems are able to communicate with each other regardless of railroad ownership. Any problems that are identified during the field-testing process will need to be addressed to ensure the PTC systems function as required. AAR officials told us that PTC tests have only been conducted in very controlled environments, as opposed to a truly operational environment where the systems could experience stress. For example, railroads must ensure that PTC systems provide reliable communication among centralized offices, wayside units, and locomotives. However, it is uncertain how well system communication will fare in densely populated areas, such as Chicago, Illinois, where many railroads—both passenger and freight—operate simultaneously. 
Furthermore, railroad industry officials have expressed concern that all electrical components associated with PTC contain inherent failure rates. Since PTC implementation requires the installation of a large number of devices, the possibility of failure must be addressed and railroads must ensure that any possible failures do not negatively affect railroad safety or operational capacity. Any problems identified during field testing, if they cannot be quickly addressed, could contribute to missing the PTC implementation deadline. Conversely, implementing an immature system to meet the deadline could pose serious safety risks. After railroads complete PTC field tests, they must submit safety plans to FRA for review, and FRA must certify PTC systems before railroads can begin operating them in revenue service. Given the extent to which railroads must implement PTC, installation will require a considerable amount of work, since it will include the installation of thousands of physical devices on both track and locomotives. Class I freight railroads, for example, must implement PTC on over 70,000 of the approximately 94,000 miles over which they operate, which is about 75 percent of their network. The railroad industry estimates that about 50,000 wayside units must be installed along track, and data radios must be installed on each wayside unit. Class I freight railroads also expect to install PTC computers and data radios on over 17,000 locomotives, which represent about 70 percent of their fleet that is used for mainline operations. Additionally, commuter railroads must install PTC on their vehicles, even if the railroads do not own track, which FRA estimates will mean equipping about 4,100 vehicles. As we have previously stated, PTC computers are available for installation on new locomotives, but some older locomotives need to be upgraded first before PTC can be installed. 
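As a quick arithmetic check on the coverage figures above (the inputs are the numbers quoted in this report; the implied fleet size is an inference from those numbers, not a figure stated here):

```python
# Consistency check on the Class I coverage figures quoted above.
ptc_miles, total_miles = 70_000, 94_000
share = ptc_miles / total_miles
print(f"{share:.0%}")  # prints 74%, consistent with "about 75 percent"

# 17,000 locomotives described as about 70 percent of the mainline fleet
# implies a mainline fleet of roughly 24,000 locomotives (an inference
# from the quoted figures, not a number stated in the report).
implied_fleet = 17_000 / 0.70
print(round(implied_fleet))  # prints 24286
```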
Officials at some Class I freight railroads and commuter railroads have expressed concern that a limited number of companies are currently responsible for supplying PTC components to railroads, and that the availability of equipment could impact railroads’ ability to complete implementation on time. While rail supply companies told us they expect to meet the demand for PTC components, some also acknowledged that they may need to expand to do so. Completing implementation will be costly for the railroad industry and could make it difficult for commuter and smaller freight railroads to meet the 2015 deadline. In 2009, FRA estimated that developing, purchasing, installing, and maintaining PTC would likely cost railroads between $9.5 billion and $13.1 billion. However, because these costs are still uncertain, the agency acknowledged that costs could be as low as $6.7 billion or as high as $22.5 billion. The large amount of equipment needed to complete implementation before the deadline will create a temporary increase in demand for suppliers. FRA has acknowledged that having multiple railroads purchasing the same equipment at the same time could cause the prices of PTC equipment to rise and, therefore, could raise the overall cost of implementation. Among passenger railroads, the cost of PTC could be especially problematic. For example, Amtrak officials expressed concern about the cost of PTC implementation on Amtrak routes supported with state funding, since some states may not be able to fund the additional costs associated with PTC implementation. Commuter railroads are publicly funded, and some are facing funding shortfalls that are leading them to increase fares or reduce service levels. 
In their implementation plans, some commuter railroads stated that funding for current operations is already at risk due to stress on their state funding partners, and officials from other commuter railroads told us that they are unsure how they will be able to pay for PTC implementation. The American Public Transportation Association has estimated that PTC implementation will cost the commuter railroad industry at least $2 billion. Although the cost of implementation will be spread over a number of years, it could still strain the budgets of some commuter railroads. For example, a transit agency in San Diego, California, told us that implementing PTC for its commuter railroad could cost as much as $60 million to $90 million, while the annual capital budget for the agency, which also provides bus service, is about $10 million. In its PTC implementation plan, this agency stated that it did not have any significant approved funding available for implementation, and that its funding plan assumed receipt of both federal and state funding. Furthermore, the Federal Transit Administration (FTA) has estimated that commuter railroads face a $12.6 billion backlog to attaining a state of good repair, indicating that these railroads must make significant capital investments to improve the condition of their current assets. The cost of PTC could further delay commuter railroads making such investments. Class II and Class III freight railroads may also have difficulty in paying for PTC implementation. These freight railroads earn much less revenue than Class I freight railroads, and officials from the American Short Line and Regional Railroad Association expressed concern about the ability of these railroads to cover the costs of PTC. 
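The San Diego example above can be put in perspective with simple arithmetic (the dollar figures are those quoted in the report; the comparison itself is illustrative):

```python
# Estimated PTC cost for the San Diego commuter railroad versus the
# transit agency's annual capital budget (all figures in millions of
# dollars, as quoted in the report).
low_cost, high_cost, annual_budget = 60, 90, 10
print(low_cost / annual_budget, high_cost / annual_budget)  # prints 6.0 9.0
# PTC alone would consume the equivalent of roughly 6 to 9 years of the
# agency's entire annual capital budget.
```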
Class II and Class III freight railroads tend to have older equipment, for which the costs of PTC installation will be higher since, as we have previously discussed, some older locomotives will require electronic upgrades to enable the installation of PTC components. According to officials at the American Short Line and Regional Railroad Association, the cost of installing PTC on some locomotives could exceed the total value of those locomotives. The four Class II and Class III freight railroads that included a description of implementation risks in their PTC implementation plans included cost as a risk factor, with one railroad noting that paying for PTC will require it to divert funding from its routine maintenance requirements. Even the larger freight railroads acknowledged that paying for PTC could have implications on their budgets. Specifically, officials from Class I freight railroads and AAR have indicated that paying for PTC could result in the diversion of funds from capital investments, such as capacity-improving projects, and could impact their ability to invest in other safety technologies. The uncertainties that we discuss regarding when the remaining tasks to implement PTC can be completed, as well as the cost of doing so, raise certain risks to the successful completion of PTC by the deadline. Potential delays in developing PTC components, software, and interoperability standards, as well as delays that could occur during the subsequent testing and implementation of PTC systems, raise the risk that railroads will not meet the implementation deadline and that the safety benefits of PTC will be delayed. Furthermore, the extent to which commuter railroads and small freight railroads have difficulty in covering the costs of PTC implementation raises the risk that these railroads could miss the deadline if funding is not available or that other critical needs may go unmet if money is diverted to pay for PTC. 
As we noted, commuter railroads are already facing challenges in funding current operations, and paying for PTC could impact the ability of these railroads, as well as smaller freight railroads, to make the necessary investments in maintenance.

Other Rail Safety Technologies Hold Promise for Preventing or Mitigating Collisions and Derailments, but Face Implementation Challenges

Rail Safety Technologies to Inspect Track, Improve or Monitor Rolling Stock, Protect Occupants, and Improve Switches Hold Promise for Addressing Key Causes of Accidents

While PTC addresses some accidents caused by human factors, other technologies being developed can address other causes of accidents, such as problems with track or equipment, which account for a significant portion of accidents and would not be addressed by PTC. According to experts and other stakeholders from the railroad industry and government, a number of rail safety technologies under development hold promise for improving safety. In particular, some of these technologies may be essential for addressing the safety of high-speed passenger rail or areas of track that lack signals or PTC. We identified four broad categories of technologies on which current development efforts are focused. Figure 6 shows where such technologies can be integrated into the existing rail environment to improve safety.

Track inspection: New technologies have the potential to better inspect track for cracks in the rail that could lead to breakage as well as measure the track's alignment to ensure that rails are laid at the proper angle and distance apart. About one-third of rail accidents are caused by track defects, such as broken or misaligned rail that could cause a train to derail. Experts and other stakeholders noted that some of these technologies have the potential to allow railroads to better manage track risks by providing more accurate data about the size and nature of track defects.
Railroads could then monitor such defects over time and make risk-based track maintenance decisions. Such technologies could be particularly useful for high-speed passenger rail operations, since track that carries high-speed trains must be maintained to a higher standard.

Switch improvement: These technologies address the problem of track switches left in the wrong position, which could lead a train onto the wrong track and cause an accident. Several experts observed that technology to monitor and indicate the position of a switch would provide particular benefit for sections of track that lack signals, and two experts told us the technology would have prevented the 2005 accident in Graniteville, South Carolina. This technology is among those that the Rail Safety Improvement Act of 2008 suggests DOT include when prescribing the development and implementation of rail safety technologies in areas of track that lack signals or train control systems.

Rolling stock improvement and monitoring: New technologies to improve the function or design of rail vehicles, as well as devices to inspect them, can provide safety benefits by improving the safe operation of trains and by better identifying when train components develop problems that could cause an accident. For example, experts and other stakeholders noted that technology to provide real-time monitoring of certain wheel assembly components is an important technology for high-speed trains, since overheating of these components can quickly lead to failure. European officials from an association of rail supply companies told us this technology is used for European high-speed passenger trains.
Occupant protection: Incorporating new designs into passenger rail vehicles, such as crash energy management—a design concept that incorporates parts designed to crumple under stress to absorb collision energy and mitigate impact forces—represents a new way of thinking about crashworthiness, which has traditionally involved designing vehicles with hard exteriors to resist deformation. European rail officials told us this technology is used in European passenger trains. FRA's crashworthiness regulations have included standards for incorporating crash energy management into rail vehicles since 1999 and require crash energy management for high-speed passenger trains operating up to 150 miles per hour.

Among the technologies we examined, we identified some as being more promising, based on experts' views about the technologies' potential to improve safety, their worth in doing so compared with their additional cost for development and implementation, and their being in a later stage of product development (see table 3). Regarding their stage in product development and implementation, experts mostly viewed these technologies as having some deployment, except for wayside detectors, which experts viewed as more widely deployed; however, this may vary depending on the type of detector. Some of these most promising technologies are also deployed in other countries; however, differences in the nature of rail systems in those countries as compared with the United States could mean that the benefits of a particular technology may not be the same. As we have previously discussed, the U.S. rail system consists mostly of freight railroads; however, in Europe and Japan, passenger rail, including high-speed rail, is more predominant. Such differences in the rail systems may lead to differences in how new rail safety technologies are implemented.
For example, although foreign stakeholders told us that electronically controlled pneumatic brakes are common on passenger trains in Europe, they are not used on freight trains. Because European freight trains are generally lighter and shorter than American freight trains, they can stop in a shorter time and distance. Consequently, a European freight railroad would realize less benefit from the improved stopping efficiency that this technology offers. Additionally, unlike the United States, Europe does not have a significant number of track miles that lack signals, so the challenge of addressing safety for unsignaled areas with technologies such as switch position monitors/indicators is generally not an issue there. Moreover, philosophical differences in approaches to railroad safety may affect how rail safety technologies are implemented. Specifically, foreign rail officials and academics with knowledge of rail practices in Europe and Japan, as well as FRA officials, told us that safety efforts in Europe and Japan are driven more by a desire to avoid accidents than to mitigate their effects.

Cost, Uncertainty about Effectiveness, Regulations, and Lack of Interoperability Create Challenges to Implementing New Rail Safety Technologies

Experts and other stakeholders identified costs, uncertainty about effectiveness, regulations, and lack of interoperability with existing systems and equipment as key challenges to implementing new rail safety technologies:

Cost: Most experts indicated that cost was a major challenge for implementing rail safety technologies in all four technology categories, including for some of the most promising technologies—specifically electronically controlled pneumatic brakes, crash energy management, and switch position monitors/indicators.
Additionally, according to some experts, other stakeholders, and FRA officials, because of the costs they are incurring to implement PTC, railroads are not looking to spend capital to implement other rail safety technologies. Commuter railroads and short line railroads also lack the capital budgets to invest in new technologies. Some experts and other stakeholders, as well as FRA officials, also told us there is sometimes a disconnect between who would pay for a particular technology and who would benefit from it. For example, one expert and representatives from a railroad association we interviewed told us that electronically controlled pneumatic brakes would most benefit the railroads, while the cost of installing them would fall on the car owner, which could be a shipping company and not a railroad.

Uncertainty about a technology's effectiveness: Several of the experts and other stakeholders we interviewed identified uncertainty about a technology's effectiveness as a key implementation challenge and noted that proving the effectiveness of a new technology is critical to gaining its acceptance for use by the industry. In particular, most experts noted that uncertainty about effectiveness was a challenge to implementing several of the track inspection and measurement technologies, presumably because of their lack of maturity, since the experts also tended to indicate that these technologies were in the early stages of development. Railroads' cost-driven reluctance to implement a technology is compounded by uncertainty about the technology's effectiveness. According to FRA officials, railroads will not adopt a new technology unless they know it will deliver a positive return on their investment.

Regulations: Experts and other stakeholders reported a disincentive under current regulations to use new track inspection technologies.
Specifically, they were concerned that such technologies identify track defects perceived as too insignificant to pose a safety risk but that nonetheless require remedial action under current regulations once they are identified. Experts and other stakeholders generally did not cite regulations as a major challenge to implementing the other new technologies.

Lack of interoperability with existing systems and equipment: Most experts indicated in our questionnaire that lack of interoperability was a major implementation challenge for electronically controlled pneumatic brakes. Specifically, they told us that for such brakes to function properly, all cars on a train would have to be equipped with them. Although this is practical for a passenger train or for a train that does not exchange cars with other trains—such as a train that carries one type of cargo, like coal—it is not practical for a mixed-freight train whose cars are exchanged with other trains, which is common in rail operations. Additionally, some stakeholders said that crash energy management is difficult to retrofit into existing rolling stock. Experts did not agree that lack of interoperability was a major challenge for the other technologies.

FRA Has Taken Actions to Fulfill the PTC Mandate and Promote Other Technologies, but Opportunities Exist to Inform Congress of Risks and Improve Monitoring

To Date, FRA Is Taking the Necessary Steps to Fulfill the PTC Mandate

To fulfill the PTC mandate, FRA (1) has developed regulations regarding the implementation of PTC systems, (2) is monitoring PTC implementation efforts, and (3) is managing funding programs to support PTC implementation.

Development of Regulations

In January 2010, FRA issued final regulations on PTC implementation on the basis of requirements in the Rail Safety Improvement Act of 2008.
These regulations were developed in collaboration with the railroad industry and other stakeholders through FRA's Railroad Safety Advisory Committee. Among other things, the regulations describe the requirements of a PTC system; require railroads to submit PTC development, implementation, and safety plans and FRA to review and approve them; require railroads to implement PTC by December 31, 2015; and establish a schedule of civil penalties for violations.

Oversight of Railroads' PTC Implementation Efforts

To oversee railroads' progress in implementing PTC, FRA has provided guidance and is monitoring implementation, including by reviewing railroads' PTC-related plans and directly observing railroads' PTC-related activities. Specifically, FRA has provided guidance to the railroad industry on PTC implementation by speaking at industry conferences, meeting with railroads to discuss PTC implementation plans, and providing railroads with a template for drafting their PTC implementation plans. The Rail Safety Improvement Act of 2008 and FRA's regulations require the agency to provide timely review and approval of PTC development, implementation, and safety plans. FRA must review and approve PTC development plans before railroads can submit their PTC safety plans, receive PTC system certification from FRA, and begin operating PTC systems (see fig. 7). FRA reviewed PTC implementation plans before completing its review of all PTC development plans, since the implementation plans had a review deadline set by statute, whereas development plans did not. As of July 2010, FRA had completed its first review of all 41 PTC implementation plans railroads submitted. As of December 3, 2010, according to FRA officials, 21 plans were fully approved and 13 were provisionally approved.
The remaining 7 plans were disapproved; the agency returned these plans to railroads with requests to make technical corrections or provide more detailed information and resubmit them to FRA for subsequent approval. FRA has since been reviewing PTC development plans. According to the PTC final rule, FRA, to the extent practicable, will approve, approve with conditions, or disapprove these plans within 60 days of receipt. In March 2010, three of the four largest Class I freight railroads jointly submitted a PTC development plan. In a May 2010 letter to those railroads, FRA stated it would not complete review of the plan within the 60-day time frame specified in the final rule because agency personnel were needed to review the large number of implementation plans FRA received, which had a review deadline set by statute. FRA completed an initial review of the development plan in July 2010 and sent a letter to the railroads asking them to (1) revise the development plan and resubmit it after making some corrections and (2) provide FRA with specific details on the magnitude of the risk the delay in FRA’s review and approval of the development plan would have on the timely implementation of PTC. FRA officials told us they met with representatives from these railroads in August and October 2010 to discuss resolution of FRA’s remaining issues and concerns and are working with the railroads on an ongoing basis to do so. Several experts and other stakeholders told us that if development or implementation plan approvals were delayed, railroads’ PTC implementation schedules could, in turn, be delayed, possibly resulting in railroads not meeting the PTC implementation deadline. In this specific case, the three Class I freight railroads noted in a July 2010 letter to FRA that a delay in approving their PTC development plan could delay PTC development and implementation time frames. 
Other railroads could also be affected, since three other Class I freight railroads, three smaller freight railroads, Amtrak, and nine commuter railroads are relying on the approval of this plan because they are implementing the same PTC system. FRA plans to monitor railroads' progress in implementing PTC by requiring railroads to provide periodic information on implementation progress and by directly observing railroads' testing and implementation of PTC. In its final PTC rule, FRA requires that railroads report annually on the percentage of their trains that are PTC-equipped and operating on PTC-equipped track. FRA officials told us that the intent of this reporting is to monitor railroads' implementation of PTC so that railroads gradually implement this technology in the years leading up to the 2015 deadline. Members of the newly established PTC branch within FRA's Office of Safety will conduct further monitoring of PTC implementation. According to FRA officials, these 11 new staff members in headquarters and regional offices will monitor railroads' work to verify the accuracy of information in PTC track databases; observe testing conducted by railroads prior to PTC system certification; and, if needed, advise railroads to conduct more tests or different tests to establish that the PTC system complies with FRA regulations. Additionally, FRA is required to report to Congress in 2012 on the progress railroads have made in implementing PTC.

Financial Assistance

FRA manages two funding programs to assist with PTC implementation. First, as required by the Rail Safety Improvement Act of 2008, FRA manages a grant program to fund the deployment of rail safety technologies. This program is authorized to offer up to $50 million in grants to railroads each year for fiscal years 2009 through 2013. Congress did not appropriate funding for this program in fiscal year 2009 and provided $50 million in fiscal year 2010.
The law stipulates that funding under this program be prioritized for implementation of PTC over other rail safety technologies. In November 2010, FRA awarded grants totaling $50 million to seven projects for fiscal year 2010, six of which were related to PTC, while the seventh was awarded for implementation of a risk management system. FRA received 41 applications seeking over $228 million in funding for the fiscal year 2010 grants. This grant program is particularly popular, but its funding as authorized will cover only a small portion of the estimated costs of PTC implementation, which FRA has acknowledged could range from $6.7 billion to $22.5 billion. Second, FRA manages the Railroad Rehabilitation and Improvement Financing Program, which authorizes FRA to provide loans and loan guarantees of up to $35 billion ($7 billion of which is reserved for non-Class I freight railroads). Funding awarded under this program may be used for several purposes, including implementation of PTC and other rail safety technologies, but can also be used for more general improvements to infrastructure, including track, bridges, and rail yards. FRA staff told us that as of September 2010, no railroads had applied to this loan program for PTC implementation and speculated that the program's requirement to demonstrate creditworthiness may have deterred some railroads from applying. It may also be too soon in the PTC implementation time frame for most railroads to need loans, if they are not yet purchasing PTC equipment. Officials from the American Short Line and Regional Railroad Association told us that using these loans to pay for PTC would help smaller freight railroads meet the implementation mandate. In addition, FRA officials said that the agency is working with FTA to see whether FTA could provide financial assistance to commuter railroads for PTC implementation.
FRA officials said that to provide this financial assistance, FTA would need to seek additional funds in its annual budget request to Congress. FTA did not request such funds for fiscal year 2011 and is currently developing its budget request for fiscal year 2012.

FRA Has an Opportunity to Identify and Report to Congress on PTC Implementation Risks and Potential Mitigation Actions

As we have previously discussed, there are uncertainties regarding when the remaining tasks to implement PTC can be completed, which raise certain risks to the successful completion of PTC by the 2015 deadline. FRA officials told us they are aware of some of these risks, but they said that it is too early to know whether they are significant enough to jeopardize successful implementation by the 2015 deadline. However, as FRA moves forward with monitoring railroads' implementation of PTC, the agency will have more information regarding the risks previously discussed. In particular, the agency should have a clearer picture of whether railroads are likely to meet the 2015 implementation deadline and what the associated implications would be. For example, by the time FRA reports to Congress in 2012 on PTC implementation progress, it will be clearer whether the state of PTC component maturity poses a risk to timely implementation, since the railroad industry currently expects components will be available by 2012. Additionally, the cost to implement PTC should be more certain, and therefore it will be clearer whether problems in financing PTC—particularly for commuter and smaller freight railroads—could lead to delays or whether the costs of PTC could result in other operational needs, such as maintenance, going unmet due to the diversion of funds to pay for PTC. Our past work has shown that the early identification of risks and strategies to mitigate them can help avoid negative outcomes for the implementation of large-scale projects.
For example, our 2004 report examining an Amtrak project to improve the Northeast Corridor noted that early identification and assessment of problems would allow for prompt intervention, increasing the likelihood that corrective action could be taken to get the project back on track. Furthermore, in our work examining the transition from analog to digital television broadcasting, we pointed out that such efforts are particularly crucial when the implementation of a large-scale project relies on private organizations to achieve public benefits. Such is the case with the implementation of PTC, which was mandated for reasons of public safety but is largely the responsibility of railroads to accomplish. FRA's 2012 report to Congress presents the agency with an opportunity to inform Congress of the likelihood that railroads will meet the 2015 implementation deadline, as well as potential implementation risks and strategies to address them. Such information would help Congress determine whether the railroad industry is on track to successfully implement PTC by 2015 or whether there are major risks associated with this effort that require intervention by Congress, FRA, railroads, or other stakeholders. FRA officials told us they have not yet determined what information will go in their report.

FRA Has Taken Some Actions to Encourage the Implementation of Other Technologies, but Does Not Fully Use Best Practices

In keeping with its mission of promoting safety throughout the national railroad system, FRA has taken a number of actions to encourage the use of rail safety technologies other than PTC—such as electronically controlled pneumatic brakes or switch position monitors/indicators—by (1) collaborating with industry on R&D efforts, (2) supporting demonstration and pilot projects, (3) analyzing technology costs related to benefits, and (4) issuing or revising regulations.
Collaboration with Industry on R&D

FRA has worked with members of the railroad industry—through the Railroad Safety Advisory Committee, AAR, and TTCI—to prioritize and select technologies to be included in FRA's R&D program. FRA and AAR collaborate extensively on R&D projects at TTCI, a DOT-owned, AAR-operated research facility. Additionally, FRA's Office of Research and Development may select a railroad partner when beginning a new R&D project. For example, FRA partnered with one of the largest Class I freight railroads to demonstrate a new technology that measures the interaction between rail cars and the track—known as vehicle/track interaction technology. According to a senior FRA official, these devices are now widely deployed, and FRA continues to study ways to model vehicle/track interaction. Each year, FRA also presents information about its completed and ongoing R&D projects to the Transportation Research Board—a body that includes railroad industry representatives—which then conducts an evaluation of FRA's R&D program. Additionally, the Rail Safety Improvement Act of 2008 called for FRA to develop a railroad safety strategy, which the agency issued in 2010 with its fiscal year 2011 budget request. Although this plan does not include any efforts to encourage implementation of specific rail safety technologies, it does state that FRA's Office of Research and Development has expanded its use of grants and partnerships with railroads and suppliers to improve stakeholder participation in its R&D and support the demonstration of results as soon as possible.

Support of Demonstration and Pilot Projects

FRA has conducted and provides support for a number of demonstration and pilot projects that examine technologies aimed at improving rail safety and help to demonstrate to railroads the effectiveness of these technologies.
According to FRA staff, the agency has put a focus on funding technology demonstration projects and has a cooperative agreement with AAR to do this work. Based on our review of FRA's list of 143 current R&D projects for fiscal year 2010, 49 of these projects appear to involve demonstrations of new technologies or of existing technologies used in new ways to improve safety. For example, a current demonstration project is examining the use of electronically controlled pneumatic brakes. Past demonstration projects have examined a variety of rail safety technologies, including devices that measure track—known as gage restraint measurement systems—vehicle/track interaction technology, and automated inspection devices. Additionally, an FRA risk-reduction grant program supports several ongoing pilot projects with railroads, two of which are examining technologies aimed at continuously testing track to collect data on the track's performance as well as to identify defects. FRA produces summary reports of some of its R&D efforts and publishes these reports on its Web site.

Analysis of Technology Costs and Benefits

FRA has taken recent actions to analyze the potential costs and benefits to railroads of implementing new rail safety technologies. When issuing the final rule on electronically controlled pneumatic brakes, FRA conducted a cost-benefit analysis and included this information in the rule. Additionally, FRA analyzed the potential return on investment for vehicle/track interaction technology to demonstrate to freight railroads the cost savings that could be achieved by implementing this technology, namely by preventing derailments and reducing the need for emergency repairs or slow-speed orders on sections of track with defective rail. FRA staff noted that railroads generally will not adopt a new technology unless it can be demonstrated to have a positive return on investment within 1 to 2 years.
FRA staff also noted that because the agency demonstrated a positive return on investment for a new vehicle/track interaction system, a major Class I freight railroad adopted the technology.

Issuance and Revision of Regulations

FRA has also issued or revised regulations and is planning further regulatory changes in an attempt to encourage the use of new rail safety technologies. For example, FRA issued final regulations promoting the use of electronically controlled pneumatic brakes in October 2008. The regulations create an incentive for installing this technology by allowing railroads that install these brakes and comply with the regulations to conduct less frequent brake inspections, thereby decreasing the railroads' inspection costs and potentially allowing for more frequent train operations. Prior to the establishment of these regulations, railroads were not permitted to use these specialized braking systems without first applying for an exemption from existing FRA regulations. FRA will provide an exemption from existing regulations on a case-by-case basis to railroads that seek such approval. For example, before PTC was required by law, FRA issued regulatory exemptions and eventually established regulations promoting the use of PTC. FRA has also issued regulatory exemptions allowing for the use of unmanned track inspection machines to monitor track conditions and of crash energy management designs in passenger rail vehicles. FRA is currently working with the Railroad Safety Advisory Committee to revise its track inspection regulations, which, according to some experts and stakeholders we spoke with, create a disincentive for railroads to implement new track inspection technologies. As previously discussed, current FRA regulations generally require railroads to take remedial action, such as limiting train speeds or replacing track, when a track defect is found.
Stakeholders we spoke with noted that using newer track inspection technologies would detect a greater number of small, relatively minor defects that pose little to no safety risk, along with more significant defects. However, stakeholders stated that FRA’s current track inspection regulations could create a situation in which railroads using newer inspection technologies might find more small defects than they could practically examine and fix in a timely manner, and could be held liable for identifying defects they did not quickly repair. To account for these newer technologies, FRA staff said they are considering changes to the remedial actions railroads must take in response to identified rail defects. FRA expects to issue a notice of proposed rulemaking on this and other changes to its track inspection regulations in the spring of 2011. Additionally, pursuant to its safety strategy for high-speed rail, FRA officials said they are considering revisions to FRA’s passenger vehicle regulations to encourage the implementation of technologies that monitor the condition of rail vehicles, although the agency has not yet identified these specific requirements. The Rail Safety Improvement Act of 2008 also requires FRA to take action in two specific ways to encourage the use of rail safety technologies in addition to PTC. First, the act requires FRA to prescribe standards, regulations, guidance, or orders by October 2009 for railroads to implement rail safety technologies in areas of track without signals or PTC. FRA officials began this effort in September 2010 by proposing that the Railroad Safety Advisory Committee establish a task force to develop a proposed rule. This proposal was accepted; however, the task force will delay meeting until representatives serving on another task force involved in PTC issues are available. FRA staff stated that the agency has delayed meeting the October 2009 requirement because FRA gave priority to the PTC rulemaking. 
Second, by October 2012, FRA must develop regulations requiring Class I freight railroads, Amtrak, commuter railroads, and other railroads that FRA determines have an inadequate safety record to develop a risk-reduction program that includes a technology implementation plan describing railroads' efforts to implement new rail safety technologies. FRA issued an advance notice of proposed rulemaking on December 8, 2010, seeking comment on the possible requirements of this program. The National Academies' Transportation Research Board has identified a number of best practices for encouraging the implementation of new technologies. Of these best practices, those most applicable to FRA's efforts fall into four key areas:

Early involvement of users: Involving potential users of a technology early on in its development, such as seeking information from users about their needs and enlisting their assistance, can help ensure that products developed respond to users' requirements.

Demonstrating technology effectiveness: Agency efforts aimed at demonstrating the effectiveness of a technology can help other potential users decide whether to implement the technology. Activities that can help to demonstrate a technology's effectiveness include supporting demonstrations or pilot projects and conducting cost/benefit or similar analyses.

Offering incentives: Activities to provide financial assistance and efforts to revise regulations to create other incentives can help encourage the implementation of new technologies.

Monitoring and reporting on technology adoption: Careful monitoring of the acceptance, adoption, refinement, and satisfaction among users of the technologies being promoted can provide lessons learned about agency efforts to encourage technology implementation. Reporting this information can help demonstrate program results and build support for the agency's efforts.
The actions we previously discussed that FRA has taken to encourage the implementation of rail safety technologies align with most of these practices and help to address some of the implementation challenges experts identified, including uncertainty about technology effectiveness and regulatory disincentives. Specifically, FRA’s collaboration with the railroad industry in its R&D efforts involves potential technology users early and helps to ensure its efforts address industry needs while also expediting the potential adoption of new technologies. FRA’s sponsorship of demonstration and pilot projects and its analyses of technology costs and benefits help to demonstrate the effectiveness of new technologies. FRA’s current efforts to revise some track inspection regulations may address the disincentives in these regulations that discourage railroads from implementing new inspection technologies. Additionally, FRA has a grant program to provide funding for implementing new rail safety technologies, although, at present, the program has been prioritized for PTC and is not being used to fund implementation of other types of rail safety technologies. Although FRA has taken actions that align to most of the best practices previously identified, the agency lacks a method to effectively monitor implementation of new rail safety technologies that would allow it to better demonstrate the results of its efforts. Specifically, FRA officials stated that the agency does not have a method to track the extent to which the railroad industry implements technologies that FRA’s R&D efforts contributed to developing. FRA staff said they have some information about the use of such new technologies, but this information is not comprehensive. For example, FRA officials said they would be aware of a railroad adopting a new safety technology if the railroad is required to seek regulatory exemption from FRA for its use. 
Our past work looking at the R&D program of DOT’s Office of Pipeline Safety—now within the Department’s Pipeline and Hazardous Materials Safety Administration—has shown that agencies that monitor and report on industry adoption of technologies supported by the agency’s R&D efforts can better assess the effectiveness of those R&D efforts. Specifically, the Pipeline and Hazardous Materials Safety Administration monitors and reports on its Web site the number of technologies supported by the agency’s R&D efforts that have been commercialized. Without a similar method to monitor and report on the adoption of technologies supported by FRA’s R&D efforts, the agency lacks information it could use to refine future R&D efforts or help demonstrate the results of its R&D program, an important consideration because FRA is currently in the process of updating its R&D strategic plan. FRA’s last R&D strategic plan included the goal of expediting widespread deployment of new technologies that have the potential for significant improvement in track safety—a goal for which information about the industry’s adoption of new technologies could be useful for demonstrating results.

Additionally, 15 of the 20 experts we spoke with indicated that FRA could do more to encourage technology implementation and suggested actions that align with the Transportation Research Board’s best practices. Specifically, 3 experts said that FRA should conduct more demonstration or pilot projects, and 4 experts said that FRA should do more to identify the costs and benefits of implementing new technologies—actions that align with the best practice of demonstrating technology effectiveness. Also, 8 experts said that FRA should offer more financial assistance, and 6 experts said that the agency should revise its regulations to provide incentives for the introduction of new technologies—actions that align with the best practice of offering incentives.
While additional use of the best practices identified by the Transportation Research Board could better encourage the implementation of rail safety technologies, we are not making a recommendation at this time because FRA has other efforts that it needs to give priority to, such as overseeing investment in high-speed passenger rail and reforming its hours of service regulations.

Conclusions

Although the safety of U.S. rail continues to improve, recent railroad accidents prompted the enactment of the Rail Safety Improvement Act of 2008, including the requirement to implement PTC. Other recently enacted laws indicate significant interest in expanding passenger rail services, particularly high-speed passenger services, which will change the nature of the mode and introduce new safety risks. The strategic development and implementation of PTC and other new rail safety technologies can help FRA and the industry address these risks while ensuring that rail remains a safe form of transportation. The railroad industry is making progress in developing and implementing PTC, but much remains to be accomplished to develop, test, and install fully functional PTC systems in time to meet the 2015 implementation deadline. At present, it is unclear whether various issues—such as the lack of mature PTC components and the cost of implementation, particularly to commuter and smaller freight railroads—could cause railroads to miss this deadline or lead to other operational impacts. However, the PTC implementation deadline is still 5 years away, so it is too soon to determine for certain whether the industry will be able to meet it. This timing presents an opportunity to look ahead at the risks that could jeopardize successful implementation and identify potential strategies to address them, rather than waiting to see what problems develop.
FRA will have the chance to publicly identify such risks, as well as potential ways Congress, the agency, or other stakeholders could address them, when it reports to Congress on PTC implementation progress in 2012. Identifying and mitigating risks sooner, rather than later, would better ensure that a reliable PTC system can be fully implemented to provide the intended safety benefits of this technology without resulting in unintended consequences. While recent laws have expanded FRA’s role, its mission to promote safety remains a core responsibility. Much focus has been placed on implementing PTC to address accidents caused by human factors, but technologies besides PTC hold promise for improving safety by addressing other accident causes, such as problems with track or equipment. While FRA has employed several key best practices for encouraging the use of new technologies, employing a method to monitor and report on the industry’s adoption of new technologies that FRA was involved in developing could provide useful information for demonstrating the results of its R&D program and refining future efforts. Importantly, such efforts could help the agency better fulfill its mission to promote safety throughout the national rail network.

Recommendations for Executive Action

We recommend that the Secretary of Transportation take the following two actions: To support the effective identification and mitigation of risks to the successful fulfillment of PTC requirements by 2015, direct the Administrator of FRA to include in FRA’s 2012 report to Congress an analysis of the likelihood that railroads will meet the PTC implementation deadline; the risks to successful implementation of PTC; and actions Congress, railroads, or other stakeholders can take to mitigate risks to successful PTC implementation.
To better encourage the implementation of rail safety technologies other than PTC, direct the Administrator of FRA to develop and implement a method for monitoring and reporting information on the adoption of technologies supported by FRA’s R&D efforts.

Agency Comments

We provided a draft of this report to the Department of Transportation for review and comment. DOT provided technical clarifications, which we incorporated into the report as appropriate. DOT also said that it would consider our recommendations. We also provided a draft of this report to Amtrak for its review and comment. Amtrak provided a technical comment, which we incorporated. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions on this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report discusses (1) the progress railroads have made in developing and implementing positive train control (PTC) and the remaining steps to implement PTC systems; (2) the potential benefits of other rail safety technologies under development as well as the challenges to implementing them; and (3) the extent of the Federal Railroad Administration’s (FRA) efforts to fulfill the PTC mandate and encourage the implementation of other rail safety technologies.
To obtain information about railroads’ progress in developing and implementing PTC and the steps remaining to implement PTC, we interviewed representatives of the four largest Class I freight railroads (BNSF Railway, CSX Corporation, Norfolk Southern, and Union Pacific); Amtrak; five selected commuter railroads (Massachusetts Bay Transportation Authority (Boston, Massachusetts), Metra (Chicago, Illinois), North County Transit District (San Diego, California), Tri-Rail (Miami and Fort Lauderdale, Florida), and Virginia Railway Express (Washington, D.C.)); selected rail supply companies (ENSCO, MeteorComm, and Ansaldo); railroad industry associations (the Association of American Railroads (AAR), the American Short Line and Regional Railroad Association, and the Railway Supply Institute); and FRA. We selected the commuter railroads to represent a range of geographic locations and levels of ridership, while selecting railroads that had relationships with all four of the largest Class I railroads and included a mix of railroads that both owned and leased track. We selected the railroad supply companies on the basis of recommendations from railroad industry associations and railroads and included all of the major suppliers for key components of the freight railroads’ PTC systems. We reviewed PTC development and implementation requirements in the Rail Safety Improvement Act of 2008 and FRA regulations. We also reviewed PTC implementation plans that Class I freight railroads and Amtrak submitted to FRA. In addition, we visited and met with officials at the Transportation Technology Center, Inc. (TTCI), near Pueblo, Colorado, where some PTC components are being tested. 
To obtain information about the benefits of other rail safety technologies under development, as well as the challenges to implementing them, we compiled a list of rail safety technologies currently under development in the United States on the basis of interviews with railroads, railroad associations, FRA, and the Department of Transportation’s Volpe National Transportation Systems Center (Volpe Center). We organized these technologies into four categories and refined this list during the course of our work as we obtained additional information from other stakeholders. We sought periodic feedback on the list from FRA, the Volpe Center, AAR, and TTCI. We limited the scope of these technologies to those that would prevent or mitigate train-to-train collisions and derailments and excluded technologies that addressed other risks or that experts indicated were widely deployed and therefore no longer under development. We identified, with assistance from the National Academies’ Transportation Research Board, a group of 20 rail safety technology experts from railroads, rail suppliers, federal agencies, labor organizations, and universities (see app. II for a list of these experts). We interviewed these experts about their knowledge of the benefits of the rail safety technologies within the scope of this engagement, as well as their views on the challenges to implementing them, and surveyed them with a standardized assessment tool seeking information about the benefits, maturity, and implementation challenges of all the technologies in our scope. We received completed assessments from 19 of the 20 experts (see app. III for complete assessment results). Based on the rail safety technology experts’ responses to our questionnaire, we identified some technologies as being more promising than others. 
In our questionnaire, we asked experts about their views of these technologies’ potential to improve safety, the value of funding additional research and development (R&D) and implementation, and the technologies’ current stages of product development. For the purposes of this analysis, we defined a technology as being more promising if it has a higher potential to improve safety, is most worth additional R&D and implementation costs, and is in a later stage of development, which presumably would mean it could be implemented sooner than a technology that is in an earlier development stage. By assigning values to the experts’ responses, we determined which of the technologies in our scope most satisfied these three criteria—in other words, which technologies the experts viewed as having the most potential to improve safety, being most worth additional costs, and being in the later stages of product development. We also interviewed government officials, railroad industry representatives, and academics from the European Union, Japan, and Taiwan about rail safety technologies implemented in other countries, seeking insights about potential differences in implementation. We identified these stakeholders on the basis of input from FRA, the Volpe Center, the Transportation Research Board, and suggestions from foreign officials. To obtain information about the extent of FRA’s efforts to fulfill the PTC mandate and encourage the implementation of other rail safety technologies, we reviewed documentation obtained from FRA officials— including information on R&D projects, technology pilots, guidance, strategic planning, and technology implementation grants—and interviewed FRA officials responsible for the agency’s rail safety technology R&D, safety regulatory efforts, and efforts to meet the PTC mandate. 
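The scoring approach described above—assigning numeric values to experts’ responses and ranking technologies against the three criteria—can be illustrated with a minimal sketch. All scales, category labels, technology names, and ratings below are hypothetical assumptions for illustration; the report does not disclose the actual values GAO assigned to responses.

```python
# Hypothetical sketch of GAO's scoring approach: map ordinal expert
# ratings on three criteria (safety potential, worth of additional
# cost, development stage) to numbers, average across experts, and
# rank technologies by total score. Scales and ratings are assumed.
SAFETY = {"little or no potential": 0, "some potential": 1, "great potential": 2}
WORTH = {"not worth it": 0, "maybe worth it": 1, "worth it": 2}
STAGE = {"research": 0, "prototype": 1, "pilot": 2, "commercially available": 3}

# Illustrative ratings from two hypothetical experts per technology.
ratings = {
    "wayside detectors": [
        {"safety": "great potential", "worth": "worth it", "stage": "commercially available"},
        {"safety": "some potential", "worth": "worth it", "stage": "pilot"},
    ],
    "track modulus measurement": [
        {"safety": "some potential", "worth": "maybe worth it", "stage": "research"},
        {"safety": "some potential", "worth": "not worth it", "stage": "prototype"},
    ],
}

def score(tech_ratings):
    """Average each criterion's numeric values across experts, then
    sum the three per-criterion averages into one overall score."""
    n = len(tech_ratings)
    safety = sum(SAFETY[r["safety"]] for r in tech_ratings) / n
    worth = sum(WORTH[r["worth"]] for r in tech_ratings) / n
    stage = sum(STAGE[r["stage"]] for r in tech_ratings) / n
    return safety + worth + stage

# Technologies satisfying all three criteria best rank first.
ranked = sorted(ratings, key=lambda t: score(ratings[t]), reverse=True)
```

Under this sketch, a technology rated as having great safety potential, worth its costs, and in a late development stage would rank as "more promising," consistent with the three criteria described above.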
We also reviewed FRA’s requirements in the Rail Safety Improvement Act of 2008 and related FRA regulations to fulfill the PTC mandate and encourage the implementation of other rail safety technologies. Additionally, we interviewed the experts and other railroad industry stakeholders that we have previously named about their views on FRA’s efforts to fulfill the PTC mandate and encourage the implementation of other rail safety technologies. We focused our review on FRA efforts related to the implementation of these technologies and did not attempt to comprehensively review FRA’s R&D program. We identified best practices for encouraging the implementation of new technologies by reviewing reports from the National Academies’ Transportation Research Board and prior GAO reports.

Appendix II: List of Rail Safety Technology Experts

Appendix III: Detailed Results of Experts’ Assessment of Rail Safety Technologies

Following is the tool used to assess experts’ views about rail safety technologies under development, complete with detailed results. We do not include the responses for open-ended questions. The U.S. Government Accountability Office (GAO) is an independent, nonpartisan agency that assists Congress in evaluating federal programs. We are interested in your expert professional opinions on a number of technologies for potentially improving railroad safety. We have identified the technologies included in this assessment tool through our first round of interviews with you, other experts and stakeholders, and a review of available literature. These technologies are separated into four categories – Remote Control and Switches, Rolling Stock and Condition Monitoring, Occupant Protection, and Track Inspection and Measurement. For the purposes of this review, we have limited our scope to reviewing only those technologies that would potentially increase safety by preventing or mitigating train-to-train collisions and derailments.
We ask that you please assess the technologies across several factors, providing comments where appropriate. In addition, we are also interested in your thoughts about possible actions that the U.S. Department of Transportation could take to encourage the implementation of new technologies. Lastly, we are interested in your opinion on the extent to which specific issues may pose a challenge to implementing positive train control by the December 31, 2015 deadline.

Instructions for Completing This Tool

You can answer most of the questions easily by checking boxes or filling in blanks. A few questions request short narrative answers. Please note that these blanks will expand to fit your answer. Please use your mouse to navigate throughout the document by clicking on the field or check box you wish to fill in. Do not use the “Tab” or “Enter” keys as doing so may cause formatting problems. To select or deselect a check box, simply click or double click on the box. To assist us, we ask that you complete and return this document by June 15, 2010. Please return the completed survey by e-mail. Simply save this file to your computer desktop or hard drive and attach it to your e-mail. Thanks in advance for taking the time to share your expertise with GAO. If you have any questions about this tool, please contact us. You may direct questions to Andrew Huddleston, Senior Analyst. Thank you for your help.

Part 1: Remote Control and Switch Technologies

In this section we refer to Remote Control and Switch Technologies. Please use the following descriptions as a guide when thinking about these specific technologies.

[Response routing: SKIP TO PART 2 (QUESTION #10) or CONTINUE TO QUESTION #2]

2. How much potential, if any, does further development and implementation of the following remote control and switch technologies have for improving rail safety?

3.
Considering the potential for additional safety benefits and likely research and development (R&D) costs—regardless of funding source—do you believe further R&D of the following remote control and switch technologies would be worth the investment?

4. Considering the potential for additional safety benefits and likely implementation costs—regardless of funding source—do you believe the procurement, operation, and maintenance of the following remote control and switch technologies would be worth the investment?

5. At what product development stage are the following remote control and switch technologies in the United States?

6. How much of a challenge, if any, do the following issues present for the implementation of remote-control locomotives?

7. How much of a challenge, if any, do the following issues present for the implementation of remote-control switches?

8. How much of a challenge, if any, do the following issues present for the implementation of switch position monitors/indicators?

9. What other challenges, if any, that are not listed above impede the implementation of remote control and switch technologies in the United States?

Part 2: Rolling Stock and Condition Monitoring Technologies

In this section we refer to Rolling Stock and Condition Monitoring Technologies. Please use the following descriptions as a guide when thinking about these specific technologies.

On-board condition monitoring systems: Systems installed on rail cars that continuously monitor mechanical components, including bearing temperature, bearing and wheel defects, and longitudinal impacts.

Wayside detectors: Condition monitoring systems installed along tracks that can identify defects in various rolling stock components as trains drive by. For example, acoustic bearing detectors, wheel impact load detectors, truck performance detectors, cracked wheel detectors, wheel profile measurement.

10.
How would you rate your overall level of knowledge of increasing railroad safety through the development and use of the following rolling stock and condition monitoring technologies?

[Response routing: SKIP TO PART 3 (QUESTION #21) or CONTINUE TO QUESTION #11]

11. How much potential, if any, does further development and implementation of the following rolling stock and condition monitoring technologies have for improving rail safety?

12. Considering the potential for additional safety benefits and likely research and development (R&D) costs—regardless of funding source—do you believe further R&D of the following rolling stock and condition monitoring technologies would be worth the investment?

13. Considering the potential for additional safety benefits and likely implementation costs—regardless of funding source—do you believe the procurement, operation, and maintenance of the following rolling stock and condition monitoring technologies would be worth the investment?

14. At what product development stage are the following rolling stock and condition monitoring technologies in the United States?

15. How much of a challenge, if any, do the following issues present for the implementation of electronically controlled pneumatic brakes?

16. How much of a challenge, if any, do the following issues present for the implementation of improved design of tank cars and other hazardous material cars?

17. How much of a challenge, if any, do the following issues present for the implementation of high performance wheel steels?

18. How much of a challenge, if any, do the following issues present for the implementation of on-board condition monitoring systems?

19.
How much of a challenge, if any, do the following issues present for the implementation of wayside detectors?

20. What other challenges, if any, that are not listed above impede the implementation of rolling stock and condition monitoring technologies in the United States?

Part 3: Occupant Protection Technologies

In this section we refer to Occupant Protection Technologies. Please use the following descriptions as a guide when thinking about these specific technologies.

Crash energy management: Rail car designs with crumple zones that absorb energy from a collision in order to maintain occupant volume and reduce secondary impact velocities.

21. How would you rate your overall level of knowledge of increasing railroad safety through the development and use of the following occupant protection technologies?

[Response routing: SKIP TO PART 4 (QUESTION #29) or CONTINUE TO QUESTION #22]

22. How much potential, if any, does further development and implementation of the following occupant protection technologies have for improving rail safety? a. Crash energy management b. Improved design of interior passenger car fixtures

23. Considering the potential for additional safety benefits and likely research and development (R&D) costs—regardless of funding source—do you believe further R&D of the following occupant protection technologies would be worth the investment? a. Crash energy management b. Improved design of interior passenger car fixtures

24. Considering the potential for additional safety benefits and likely implementation costs—regardless of funding source—do you believe the procurement, operation, and maintenance of the following occupant protection technologies would be worth the investment? a. Crash energy management b. Improved design of interior passenger car fixtures

25. At what product development stage are the following occupant protection technologies in the United States?

26. How much of a challenge, if any, do the following issues present for the implementation of crash energy management?

27.
How much of a challenge, if any, do the following issues present for the implementation of improved design of interior passenger car fixtures?

28. What other challenges, if any, that are not listed above impede the implementation of occupant protection technologies in the United States?

Part 4: Track Inspection and Measurement Technologies

In this section we refer to Track Inspection and Measurement Technologies. Please use the following descriptions as a guide when thinking about these specific technologies.

29. How would you rate your overall level of knowledge of increasing railroad safety through the development and use of the following track inspection and measurement technologies?

[Response routing: SKIP TO PART 5 (QUESTION #44) or CONTINUE TO QUESTION #30]

30. How much potential, if any, does further development and implementation of the following track inspection and measurement technologies have for improving rail safety?
a. Machine vision-based automated track inspection
b. Laser-based non-contact ultrasonic rail inspection
c. Ultrasonic phased array rail defect imaging
d. Rail longitudinal stress detection systems
e. Portable ride quality meters
f. Autonomous track measurement systems
g. Track modulus measurement systems
h. Intrusion detection systems
i. Bridge integrity monitoring systems

31. Considering the potential for additional safety benefits and likely research and development (R&D) costs—regardless of funding source—do you believe further R&D of the following track inspection and measurement technologies would be worth the investment?

32. Considering the potential for additional safety benefits and likely implementation costs—regardless of funding source—do you believe the procurement, operation, and maintenance of the following track inspection and measurement technologies would be worth the investment?

33. At what product development stage are the following track inspection and measurement technologies in the United States?

34. How much of a challenge, if any, do the following issues present for the implementation of machine vision-based automated track inspection?

35.
How much of a challenge, if any, do the following issues present for the implementation of laser-based non-contact ultrasonic rail inspection?

36. How much of a challenge, if any, do the following issues present for the implementation of ultrasonic phased array rail defect imaging?

37. How much of a challenge, if any, do the following issues present for the implementation of rail longitudinal stress detection systems?

38. How much of a challenge, if any, do the following issues present for the implementation of portable ride quality meters?

39. How much of a challenge, if any, do the following issues present for the implementation of autonomous track measurement systems?

40. How much of a challenge, if any, do the following issues present for the implementation of track modulus measurement systems?

41. How much of a challenge, if any, do the following issues present for the implementation of intrusion detection systems?

42. How much of a challenge, if any, do the following issues present for the implementation of bridge integrity monitoring systems?

43. What other challenges, if any, that are not listed above impede the implementation of track inspection and measurement technologies in the United States?

Part 5: Government Actions

44. What further actions, if any, could the U.S. Department of Transportation take to encourage the implementation of new rail safety technologies?

Part 6: Positive Train Control

[Response routing: SKIP TO QUESTION #49 or CONTINUE TO QUESTION #46]

46. How much of a challenge, if any, do the following issues present to meeting the December 31, 2015 deadline for implementing positive train control (PTC)?
a. Achieving interoperability among
b. Refining braking algorithms
c. Acquisition of adequate spectrum in the 220 MHz frequency, specifically in dense, metropolitan areas
d. Development of new high
e. Technological maturity of other
f. Ability of suppliers to meet demand for PTC products
g.
Cost to larger railroads (Amtrak and Class I freights)
h. Cost to smaller railroads (short lines, regionals, commuters)
i.

47. What other issues, if any, that are not listed above may present a challenge to meeting the December 31, 2015 deadline for implementing positive train control?

48. What further actions, if any, could the U.S. Department of Transportation take to facilitate the implementation of positive train control in order to meet the December 31, 2015 deadline?

Part 7: Additional Comments

49. What other comments, if any, do you have about the topics covered in this assessment tool?

Appendix IV: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the individual named above, Judy Guilliams-Tapia, Assistant Director; Amy Abramowitz; Katie Berman; Matthew Butler; Aglae Cantave; Bess Eisenstadt; Colin Fallon; Kathy Gilhooly; Andrew Huddleston; Sara Ann Moessbauer; Josh Ormond; Daniel Paepke; Madhav Panwar; and Terry Richardson made key contributions to this report.
Positive train control (PTC) is a communications-based train control system designed to prevent some serious train accidents. Federal law requires passenger and major freight railroads to install PTC on most major routes by the end of 2015. Railroads must address other risks by implementing other technologies. The Department of Transportation's (DOT) Federal Railroad Administration (FRA) oversees implementation of these technologies and must report to Congress in 2012 on progress in implementing PTC. As requested, this report discusses railroads' progress in developing PTC and the remaining steps to implement it, the benefits of and challenges in implementing other safety technologies, and the extent of FRA's efforts to fulfill the PTC mandate and encourage the implementation of other technologies. To conduct this work, GAO analyzed documents and interviewed FRA and rail industry officials. GAO also interviewed and surveyed rail experts. The four largest freight railroads and Amtrak have made progress in developing PTC and are preparing for implementation, but there is a potential for delays in completing the remaining sequence of steps to implement PTC in time for the 2015 deadline. For example, although railroads have worked with suppliers to develop some PTC components, the software needed to test and operate these components remains under development. As a result, it is uncertain whether components will be available when needed, which could create subsequent delays in testing and installing PTC equipment. Additionally, publicly funded commuter railroads may have difficulty in covering the $2 billion that PTC is estimated to cost them, which could create delays if funding for PTC is not available or require that railroads divert funding from other critical areas, such as maintenance. 
The uncertainties regarding when the remaining steps to implement PTC can be completed, as well as the related costs, raise the risk that railroads will not meet the implementation deadline, delaying the safety benefits of PTC. Additionally, other critical needs may go unmet if funding is diverted to pay for PTC. Other technologies hold promise for preventing or mitigating accidents that PTC would not address, but face implementation challenges. Experts identified technologies to improve track inspection, locomotives and other rail vehicles, and switches as having promise to provide additional safety. But challenges to implementing these technologies include their costs, uncertainty about their effectiveness, regulations that could create disincentives to using certain technologies, and lack of interoperability with existing systems and equipment. For example, electronically controlled pneumatic brakes are a promising technology to improve safety by slowing or stopping trains faster, but are expensive and not compatible with some common train operations. FRA has taken actions to fulfill the PTC mandate and has the opportunity to provide useful information on risks and mitigation strategies to Congress in its 2012 report. FRA has developed PTC regulations, hired new staff to monitor implementation of PTC, and created a grant program to provide funding to railroads. Going forward, as it monitors railroads' progress, FRA will have additional information for determining whether the risks previously discussed are significant enough to jeopardize successful implementation of PTC by the 2015 deadline. Prior GAO reports have noted that the identification of risks and strategies to mitigate them can help ensure the success of major projects. Including such information in FRA's 2012 report would help Congress determine whether additional actions are needed to ensure PTC is implemented successfully. 
Additionally, FRA's actions to encourage the implementation of other rail safety technologies align with some, but not all, best practices for such efforts. For example, FRA has followed the best practice of involving the industry early in developing new technologies, but it does not monitor the industry's use of technologies that it helped develop. Monitoring and reporting on the industry's adoption of new technologies could help the agency better demonstrate the results of its efforts.
Background

Several players have important roles in implementing the CAP goals, as required by GPRAMA:

OMB. OMB provides guidance to agencies and CAP goal teams in Circular A-11 on how to implement GPRAMA, including implementing and reporting on the CAP goals. Beginning in March 2014, OMB and the PIC also developed more detailed guidance, which they have updated periodically, to help ensure that CAP goal teams are complying with GPRAMA requirements, including developing CAP goal implementation plans, identifying contributors, and reporting quarterly progress toward their goals on the public website, Performance.gov.

PIC. The PIC is chaired by OMB’s Deputy Director for Management and is composed of agency Performance Improvement Officers, other agency performance staff, and senior OMB staff to facilitate the exchange of useful practices to strengthen agency performance management, such as through cross-agency working groups. The PIC is supported by an Executive Director and a team of 8 full-time staff that conducts implementation planning and coordination on crosscutting performance areas, including working with OMB, other government-wide management councils, and agencies on the CAP goals.

CAP Goal Leaders. CAP goal leaders are responsible for coordinating efforts to implement each goal. As shown in figure 1 above, the current CAP goals have at least two goal leaders—one from the Executive Office of the President and the other from a key responsible agency. OMB directs CAP goal leaders to engage officials from contributing agencies by leveraging existing interagency working groups, committees, and councils.

GPRAMA has a number of requirements for reporting on CAP goals (see text box).
These reporting requirements are designed to ensure that relevant performance information is used to improve performance and results of the goals, and that OMB and CAP goal leaders actively lead efforts to engage all relevant participants in collaborative performance improvement initiatives and hold those participants accountable for progress on identified goals and milestones. GPRA Modernization Act of 2010 Requirements for Reporting on Cross-Agency Priority Goal Progress Make available on the website Performance.gov: 1. A brief description of each of the federal government priority goals required by section 1120(a) of this title. 2. An identification of the lead government official for each federal government performance goal. 3. An identification of the agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities that contribute to each federal government performance goal. 4. Plans to address major management challenges that are government-wide or crosscutting in nature, and describe plans to address such challenges, including relevant performance goals, performance indicators, and milestones. 5. Performance goals to define the planned level of performance for each goal for the year in which the plan is submitted and the next fiscal year for each of the federal priority goals. 6. Common federal government performance indicators. 7. Quarterly targets for each common performance indicator. 10. How performance indicators with quarterly targets are being used in measuring or assessing the overall progress toward each federal government performance goal. 11. How performance indicators with quarterly targets are being used in measuring or assessing whether relevant agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities are contributing as planned. 12. Results achieved during the most recent quarter. 13. Overall trend data compared to the planned level of performance. OMB established the first set of CAP goals for a 2-year interim period in February 2012.
We issued reports in 2012 and 2014, evaluating implementation challenges faced during the interim CAP goal period. In May 2012, we reported that the interim CAP goals did not leverage all relevant parties, and therefore important opportunities for achieving these goals may have been missed. OMB responded by identifying and including additional relevant departments, agencies, and programs for the interim CAP goals. In June 2014, we identified a number of reporting and accountability gaps with the interim CAP goals. Specifically, we found that some goals did not report on progress towards a planned level of performance because the goals lacked either a quantitative target or the data needed to track progress. We also found that quarterly progress updates published on Performance.gov listed planned activities and milestones contributing to each goal, but some did not include relevant information, including time frames for the completion of specific actions and the status of ongoing efforts. We concluded that the incomplete information in the updates provided a limited basis for ensuring accountability for the achievement of targets and milestones. As a result, we made seven recommendations to OMB and the PIC to improve the reporting of performance information for CAP goals and quarterly progress reviews. OMB responded to these recommendations by updating its guidance to CAP goal teams to address the reporting and accountability gaps we identified. We closed these recommendations as implemented based on OMB’s updated guidance to CAP goal teams, our assessment of the quarterly progress updates, and interviews with CAP goal teams. In March 2014, OMB issued the current set of CAP goals, which GPRAMA requires to be updated, at a minimum, every 4 years. As shown in figure 2, the current set of 15 CAP goals includes 7 related to crosscutting mission areas and 8 related to management. 
OMB reported that it worked to select CAP goals that represent high-priority issue areas with outcomes that could be enhanced through improved cross-agency implementation. OMB will issue the next set of 4-year CAP goals in February 2018 along with the President’s fiscal year 2019 budget. OMB staff told us that they expect the last progress update posted on Performance.gov for this set of CAP goals will reflect progress from the second quarter of fiscal year 2018, which ends in March 2018. Each quarterly progress update for this set of CAP goals is available on Performance.gov. OMB Incorporated Lessons from Interim CAP Goals into Governance of Current CAP Goals OMB Updated Guidance to Improve Public Reporting on Implementation of CAP Goals As we reported in September 2015, OMB improved CAP goal reporting and accountability by addressing recommendations we made in June 2014. Specifically, OMB developed updated guidance and a new reporting template for CAP goal teams that provide a consistent reporting format across all goal teams and help goal teams meet GPRAMA reporting requirements. According to OMB staff managing the CAP goals, this revised reporting structure and template are intended to promote engagement from goal leaders, facilitate performance monitoring, ensure transparency of the CAP goal initiative, and provide a framework for CAP goal teams to articulate cross-agency goals using milestones and measures. The template outlines information that GPRAMA requires goal teams to report on Performance.gov, such as goal leaders; contributing agencies, organizations, and programs; performance measures and targets; key milestones; and plans to address government-wide management challenges.
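To make the template's required elements concrete, they could be modeled as a simple data structure. The sketch below is illustrative only; the class and field names, status vocabulary, and example values are assumptions for exposition, not OMB's actual reporting format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch of the GPRAMA-required elements of a CAP goal
# quarterly progress update; all names and values here are hypothetical.

@dataclass
class Milestone:
    description: str
    due_quarter: str           # e.g., "FY2016 Q1"
    owner: str                 # responsible agency or interagency group
    status: str = "on track"   # "on track", "at risk", or "complete"

@dataclass
class PerformanceMeasure:
    name: str
    quarterly_target: Optional[float] = None  # often absent, per the report
    latest_result: Optional[float] = None

@dataclass
class ProgressUpdate:
    goal_statement: str
    goal_leaders: List[str]    # at least two: EOP plus a responsible agency
    contributors: List[str]    # agencies, programs, policies, tax expenditures
    measures: List[PerformanceMeasure] = field(default_factory=list)
    milestones: List[Milestone] = field(default_factory=list)

update = ProgressUpdate(
    goal_statement="Accelerate technology transfer from federal labs to market",
    goal_leaders=["OMB", "Department of Energy"],
    contributors=["NSF", "General Services Administration"],
    measures=[PerformanceMeasure("Licenses executed")],
)
# A measure with no quarterly target mirrors the reporting gap GAO found
print(update.measures[0].quarterly_target)  # None
```

Grouping measures and milestones under a single goal record reflects the template's purpose: one consistent format that every goal team fills in each quarter.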
The updated guidance directs CAP goal teams to include additional performance information—not required by GPRAMA—in their quarterly progress updates, such as the status of milestones, associated time frames for their completion, performance indicators (or measures) under development, and an organizational chart depicting the goal’s governance team. According to the seven CAP goal teams we spoke with and OMB staff, the quarterly progress updates provide useful information for goal leaders to track progress over time and to make timely management decisions that affect goal implementation. PIC guidance states that the update should provide succinct, high-level information that is understandable to all involved in implementation (i.e., from program staff to senior leaders). OMB staff managing the CAP goals told us that the quarterly progress updates summarize the progress made on each CAP goal. They added that, at this point in the goal period, reviewing the milestone status for each CAP goal is the best way to determine whether goals are at risk or sufficient progress is being made quarterly. For example, the Lab-to-Market CAP goal leader in the Department of Energy told us that the progress updates enable the goal team to track progress on activities that support and expand technology transfer between the national laboratories and the private sector. The goal leader also told us the quarterly reports are useful to implementation teams because they reinforce the need to coordinate across agency lines and drive discussion about what can be done to achieve goals and milestones. OMB Developed Strategies to Build CAP Goal Teams’ Capacity for Goal Implementation OMB and the PIC have also implemented strategies to build agency capacity to work across agency lines. For example: Assigned agency goal leader.
For the current set of CAP goals, OMB changed the CAP goal governance structure to include agency co-leads for each CAP goal, in addition to entities within the Executive Office of the President, such as OMB, that were already serving as goal leaders (see figure 2 above). According to OMB staff, this new governance structure reflects agency leadership and expertise in CAP goal subject areas, more effectively leverages agency resources for crosscutting efforts, and promotes greater coordination across multiple agencies. For example, Science, Technology, Engineering and Mathematics (STEM) Education CAP goal team staff from the Office of Science and Technology Policy (OSTP) told us that they found contributing agencies to be more receptive to directives and efforts for implementing the CAP goal because these directives come jointly from the National Science Foundation (NSF) and are not based solely on OSTP’s policy perspective. They also told us that NSF’s leadership of CAP goal activities and its ability to secure agency buy-in, among other things, make it an effective CAP goal leader. Provided ongoing guidance and assistance to CAP goal teams. The seven CAP goal teams we spoke with told us that OMB and the PIC staff are available regularly to provide them with ongoing support, such as assisting with the regular collection of performance data, leading seminars to develop useful milestones and priority actions, and improving teams’ ability to track and report progress. The PIC plans to continue to assist teams with tracking their progress and ensuring accountability across the goals, including supporting the quality and completeness of the regular quarterly progress updates through Performance.gov. For example, in August 2014, PIC staff met with the STEM Education CAP goal team to assist it in developing milestones and performance measures and to define actionable next steps.
The STEM Education CAP goal leader from NSF told us the assistance provided by the PIC helped the goal team improve its implementation plan and develop relevant performance measures. Held senior-level reviews. Another way OMB and the PIC are increasing leadership attention to CAP goal implementation is by committing to hold regularly scheduled senior-level meetings to review CAP goal progress. OMB’s Deputy Director for Management leads implementation-focused meetings for the eight management CAP goals approximately three times a year, and OMB’s Deputy Director for Budget leads meetings to review the seven mission-focused CAP goals as necessary. According to OMB, as of March 2016, it held the senior-level review meetings for all 15 CAP goals as planned. Obtained a means of funding cross-agency CAP goal activities. As part of the fiscal year 2016 consolidated appropriations act, Congress provided authority for the heads of executive departments and agencies, with OMB approval, to transfer up to $15 million for purposes of improving coordination, reducing duplication, and other activities related to the implementation of the CAP goals. OMB staff told us that they proposed this means of funding crosscutting activities in response to lessons learned from the interim CAP goal process, feedback from agencies, and our work on enhancing collaboration in interagency groups. Table 1 lists the fiscal year 2016 interagency transfers for selected CAP goals. For example, OMB reported that the Lab-to-Market CAP goal team would use $1.9 million of transferred funds to develop an interface for all 17 DOE national laboratories to directly interact with external stakeholders, such as the investment community, and allow the public to access specific capabilities across the national laboratory network. Launched a government-wide White House Leadership Development Program.
OMB staff told us that leadership fellows were selected and assigned to one of the CAP goal teams and other cross-agency initiatives to expose emerging agency leaders to cross-agency issues and address the need for strong leadership on the CAP goals, while leveraging existing resources. In November 2015, 16 fellows began their 1-year rotations. Goal Teams Reported CAP Goal Designation Led to Increased Leadership Attention and Collaboration CAP goal leaders and their teams told us that the CAP goal designation has resulted in increased leadership attention within their agencies and access to management and performance expertise at OMB, the PIC, and government-wide councils such as the President’s Management Council (PMC) and the Chief Human Capital Officers Council. CAP goal teams we spoke with told us that the increased focus on CAP goal implementation enabled CAP goal teams to request and receive agency resources that helped sustain progress toward goals. For example, the Smarter IT Delivery CAP goal team worked with OPM to obtain a new hiring authority for digital service experts to assist with production of information technology and digital services. As of December 2015, five agencies and OMB reported hiring digital service experts. Goal team staff told us that obtaining the hiring authority was a direct result of the CAP goal. According to OMB and PIC staff, the CAP goal process is a management tool to encourage interagency collaboration. OMB staff told us that they selected, in part, CAP goals that would focus on implementation challenges that could benefit from greater collaboration between multiple agencies. According to OMB staff, some CAP goal efforts are relatively new and agencies do not have much experience working together, whereas other CAP goal issues have long-standing relationships and interagency mechanisms in place. All of the CAP goal teams described approaches they are using to work across agency lines.
For example, CAP goal teams established formal interagency groups, used information-sharing tools to communicate, and developed standardized guidance. Leveraging expertise across agencies. CAP goal teams report that they are working with pre-existing interagency working groups or establishing new working groups or communities of practice to leverage expertise and experience across agencies. For example, the Customer Service CAP goal team established an interagency Community of Practice (COP), which meets monthly to share practices and provide feedback to CAP goal leadership. In one instance, the goal team reports that a group within the COP led the development of a draft customer service toolkit, which includes customer service principles and a model for gauging program-level progress toward those principles. Likewise, the Open Data CAP goal team established the Interagency Open Data Working Group, which includes the 24 Chief Financial Officers (CFO) Act agencies and smaller entities government-wide. The group meets biweekly and its members represent employees at a diverse range of levels, such as IT contractors and chief data officers. According to the goal team, the establishment of this working group is the best outcome from the CAP goal so far. The group shares best practices, allows agencies to learn from other agencies that may be farther along, and shares tools. For example, the goal team told us that it hosted a series of webinars on Data.gov explaining a new method to convert data from a common format to a machine-readable one. The goal team told us that this process was a challenge for agencies, but through the webinars and the working group, every agency now knows how to produce machine-readable documents and is currently validating its processes. Developing common reporting framework.
The Job-Creating Investment goal team told us that the CAP goal designation has helped the Department of Commerce (Commerce) headquarters staff to engage with overseas staff and better leverage their resources and data. For example, the CAP goal team told us that they launched an interagency effort between State and Commerce to provide training and update guidance to U.S. overseas staff on standardized methods and procedures for reporting information on their activities to attract foreign direct investment. Overseas staff are now submitting updates that reflect Commerce’s activities abroad. The goal team reported that it now holds quarterly reviews, in part, to ensure the data are being collected and reported consistently by staff from multiple agencies and programs. Outreach to the public and nonfederal stakeholders. The Open Data CAP goal team reported a number of White House-sponsored events to encourage public participation in using government data and to elicit feedback on Open Data CAP goal efforts. For example, the goal team reported that agencies hosted Open Data Roundtables to highlight success stories and get feedback from their stakeholders, including nonfederal data users. In another example, the STEM Education CAP goal team developed an online portal for graduate students to locate federally-funded STEM research opportunities. CAP goal team staff told us they recently completed a similar resource for undergraduate students. Selected CAP Goal Teams Have Met Many GPRAMA Reporting Requirements, but Most Have Not Established Quarterly Targets Based on GPRAMA requirements, we identified five broad actions that CAP goal teams follow to implement a goal: (1) establish the goal, (2) identify goal leaders and contributors, (3) develop strategies and performance measures, (4) use performance information to make decisions and improve performance, and (5) report results.
The 13 GPRAMA requirements for reporting on CAP goal progress each relate to an action for goal implementation. These actions do not necessarily occur sequentially. For example, in the quarterly progress updates published on Performance.gov, CAP goal teams reported developing new strategies and performance measures, while simultaneously using performance information to report results and develop new strategies. Action 1: Establish the Goal Example of a Goal Statement: Lab-to-Market CAP Goal The first action for goal implementation is to articulate the cross-agency goal. GPRAMA requires that Performance.gov contain a description of each CAP goal. To meet this requirement, CAP goal teams list the overall goal statement and provide other contextual information, such as the problem being addressed and a vision for what the goal might achieve in each quarterly progress update published on Performance.gov. As shown in figure 3, all of the selected CAP goal teams met this reporting requirement. Action 2: Identify Goal Leaders and Contributors Definition of common term: Contributors – The agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities that contribute to each CAP goal. The second action for goal implementation is to identify leaders and contributors that are accountable for progress toward the goal, and to establish the cross-agency team that will monitor and oversee implementation. GPRAMA requires that Performance.gov provide information on the lead government official for each CAP goal and the agencies, organizations, policies, and other activities—including tax expenditures—that contribute to the goal. As depicted in figure 4, we found that all of the selected CAP goals identified and clearly reported the goal leaders accountable for implementation of the CAP goal.
Also shown in figure 4, we determined that all of the selected CAP goal teams fully met the requirement to identify and report key contributors to the goal. To make this determination, we reviewed the CAP goal statements and related activities, and evaluated whether selected goal teams identified key contributors to the goal. For example, the Open Data CAP goal team identified contributing agencies and offices within the Executive Office of the President and the General Services Administration in addition to policies governing open data efforts at federal agencies, such as agency requirements to protect personal and confidential data. For this review, we also evaluated whether OMB and the goal teams considered the extent to which tax expenditures contribute to the selected CAP goals. This issue relates to a recommendation we made in 2013 that OMB should review whether all relevant tax expenditures that contribute to a CAP goal have been identified, and as necessary, include any additional tax expenditures in the list of federal contributors for each goal. In September 2015, OMB staff told us that OMB had determined that there were no tax expenditures that were critical to support achievement of the current CAP goals. Goal teams we spoke with during our review told us that they did not identify any relevant tax expenditures related to their goals. As a result, we have recently closed this recommendation as implemented. Action 3: Develop Strategies and Performance Measures The third action for goal implementation is developing strategies and performance measures. GPRAMA requires that each CAP goal team publish on Performance.gov plans to address government-wide major management challenges, and establish clearly defined quarterly milestones and performance indicators (which we refer to as performance measures), and targets to measure or assess overall progress. 
GPRAMA also requires that teams develop strategies for performance improvement as they regularly review implementation and results. As shown in figure 5, all CAP goal teams reported plans to address major management challenges that are government-wide or crosscutting in nature as reflected in the quarterly progress updates published on Performance.gov. In addition, all the goal teams’ quarterly progress updates established clearly defined quarterly milestones. Specifically, we found that all CAP goal teams we reviewed established a work plan that included a brief description of planned milestones, milestone due dates, and the agency or interagency group responsible for implementing milestones. If clearly linked to the outcome, milestones can be a way for goal teams to report and track progress quarterly. We determined that all but one of the selected goal teams fully reported strategies for performance improvement, meaning that each goal team updated or revised its strategies or major actions from one quarter to the next, or identified challenges or barriers to the goal or a specific milestone. The Lab-to-Market CAP goal team partially met this requirement because, while the team updated some of its milestones in the first quarter of fiscal year 2016, it did not report any associated barriers or challenges, nor did it identify any milestones for the sub-goal to assess the economic impact of federal technology transfer activities. Definition of common terms: Performance measure - A measurable value that indicates the state or level of something (also called “indicator”). Sub-goal - A goal that contributes to the achievement of the broader CAP goal. Five of the seven CAP goal teams reported common performance measures for each of their sub-goals. The Lab-to-Market CAP goal team reported a number of performance measures for three of its four sub-goals, partially meeting this requirement.
The Customer Service CAP goal does not have any performance measures at this time, but the team reported a number of performance measures that are under development. According to OMB and PIC staff, some of the CAP goal teams may be working to develop cross-agency performance measures for the first time, or may have measures in place that require improvement. Because the scope of our review was to examine the implementation of quarterly reporting requirements, we did not evaluate whether these measures were appropriate indicators of performance, were sufficiently ambitious, or met other dimensions of quality. Definition of common term: Target - Level of performance expressed as a tangible, measurable objective, against which actual achievement can be compared, including a goal expressed as a quantitative standard, value, or rate. Five of the seven goal teams fully or partially met the requirement to report on the planned level of performance because they reported an annual target or a target for the 4-year goal period for at least one sub-goal. The two CAP goal teams that fully met this requirement reported targets for all performance measures. For example, for one of its sub-goals, the People and Culture CAP goal team established a target that, by the end of the CAP goal period in 2018, there would be an increase in hiring managers’ satisfaction with the quality of applicants from 60 percent to 70 percent, as measured by the Chief Human Capital Officers Management Satisfaction Survey. The Customer Service CAP goal team did not report any targets because it does not yet have established performance measures. The Science, Technology, Engineering, and Mathematics (STEM) Education goal team did not identify a target—expressing both magnitude and direction—for 12 of its 15 performance measures. The most recent data available for these measures are from 2013, prior to the start of the current CAP goal period.
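For goal teams that do set targets, interim levels of performance can be derived mechanically from a baseline and an end-of-period target. The sketch below is a hypothetical illustration, not a method prescribed by OMB: it uses the People and Culture figures above (a 60 percent baseline rising to a 70 percent target by 2018) and assumes evenly spaced interim targets.

```python
# Hypothetical illustration: deriving interim targets by linear
# interpolation between a baseline and an end-of-period target.
# The 60 percent baseline and 70 percent FY2018 target come from the
# People and Culture example; the even spacing is an assumption.

def interim_targets(baseline: float, final_target: float, periods: int):
    """Return evenly spaced interim targets ending at final_target."""
    step = (final_target - baseline) / periods
    return [round(baseline + step * (i + 1), 2) for i in range(periods)]

# Four annual check-ins over the 4-year goal period
print(interim_targets(60.0, 70.0, 4))  # [62.5, 65.0, 67.5, 70.0]

# The same function could produce 16 quarterly targets, although, as the
# report notes, the underlying survey data are collected only annually
print(interim_targets(60.0, 70.0, 16)[3])  # 62.5
```

The quarterly variant shows why teams resist quarterly targets: a target can be computed for any quarter, but without quarterly data there is nothing to compare it against.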
Of this set of actions, the selected goal teams had the most difficulty meeting the requirement to report quarterly targets for each performance measure. Five of the seven selected CAP goal teams did not report quarterly targets. The Open Data goal team reported a quarterly target for 1 of its 16 performance measures, partially meeting this requirement. The STEM Education goal team also partially met this requirement by reporting a quarterly target of a 1 percent increase in the number of views, downloads, and international downloads of a June 2015 webinar focused on implementing evidence-based practices in undergraduate STEM instruction. OMB and PIC staff told us that they would like to work toward requiring quarterly targets, as appropriate, going forward. However, they also noted it may not be possible or appropriate for a goal to have a quantitative quarterly target, as relevant data may not be available quarterly, particularly for new initiatives. In such a case, a qualitative target might be more appropriate. CAP goal teams we spoke with expressed similar concerns and told us that they are instead relying on milestones to track progress quarterly. For example, the People and Culture CAP goal team relies on results from the Federal Employee Viewpoint Survey to measure progress on employee engagement efforts. Since the survey is administered annually, the goal team reports annual targets for this measure. The Lab-to-Market CAP goal team told us that its goal to accelerate and improve the transfer of new technologies from the laboratory to the commercial marketplace can take years and is not measured quarterly. Action 4: Use Performance Information The fourth action related to goal implementation involves using performance information during regular progress reviews to assess whether major actions and contributors are collectively achieving the desired results.
We have long reported that agencies are better equipped to address management and performance challenges when managers effectively use performance information for decision making. OMB’s guidance to CAP goal teams states that frequent reviews provide a mechanism for CAP goal leaders to keep contributing agencies focused on crosscutting priorities. For public reporting, GPRAMA requires that CAP goal teams report on how they use performance measures (or indicators) and quarterly targets to assess overall progress toward the goal, and whether relevant agencies, organizations, program activities, and tax expenditures are contributing as planned. Figure 6 shows that for six of the seven selected CAP goals, the teams partially met the requirement to report how they are using performance measures with quarterly targets to assess overall progress. This assessment largely reflects the fact that, while six of the seven goal teams have established and reported performance measures, they generally have not established quarterly targets for those measures, as shown in figure 5 above. OMB and PIC staff managing the CAP goals told us that they are aware that CAP goal teams can improve how they are using performance measures and targets. Four of the seven selected CAP goals partially met the requirement to report how they are using performance measures and quarterly targets to assess key contributors to the goal—such as relevant agencies and programs—because some of their performance measures track contributors’ implementation of certain activities or requirements. For example, the Smarter IT Delivery CAP goal team reports an indicator tracking the number of agencies that have created “buyers’ clubs” to promote more efficient information technology (IT) contracting practices.
The other three CAP goals did not meet this requirement because they either did not have measures in place, or did not have performance measures that assess contributions by agencies or programs responsible for implementing the goal. For example, some of the measures for these goals are focused on an outcome or output that is external to the government, such as the number of STEM degrees, a performance measure for the STEM Education CAP goal. In such cases, goal teams told us they are tracking progress and key contributors toward the goal through milestones. Action 5: Report Results Reporting results is the next action related to goal implementation. GPRAMA requires that CAP goal teams report the most recent quarterly results on Performance.gov. They are also required to publish overall trend data compared to the planned level of performance or target. Our prior work has demonstrated the need to regularly report results to promote accountability and provide agency leaders with information they can use to inform their decisions. With trend information, OMB and decision makers can more easily track the goal teams’ quarterly progress. As shown in figure 7, we found that the seven selected CAP goal teams reported results achieved during the most recent quarter by reporting on the status of milestones, data on implementation, and narrative progress updates. As of the first quarter of fiscal year 2016, six of the seven selected CAP goal teams are able to report overall trend data for at least some, if not all, of their performance measures. In some cases, goal teams reported new performance measures, noting that they are still working to establish a baseline and trend data. The Open Data, Smarter IT Delivery, and People and Culture CAP goal teams reported overall trend data compared to the planned level of performance for each performance measure.
The Customer Service CAP goal team did not report trend data because it stated that it does not have a method to collect and report customer service performance information government-wide. Selected CAP Goal Teams Reported Completing Milestones, but Could Improve Transparency by Reporting on the Development of Performance Measures Selected CAP Goal Teams Reported Completing Quarterly Milestones CAP goals are designed to focus on longer-term or complex outcomes involving multiple agencies, programs, or entities. In such cases, there may not be a common measure of the desired outcome, the outcome may be observed only infrequently, or it may be influenced by external factors. As a result, determining whether the goal team is making progress each quarter can be challenging. In such cases, the goal team may need to break the goal into pieces that can be more easily measured or assessed (e.g., sub-goals and milestones). All of the CAP goal teams we examined identified two or more sub-goals with associated milestones and performance measures, where available. All of the selected CAP goal teams and OMB staff managing the goals told us that they are using milestones to track and report on progress quarterly. However, they are still working to improve the collection and reporting of performance information for the CAP goals. Our review of selected CAP goal quarterly progress updates—from March 2014 through December 2015—found that the seven CAP goal teams demonstrated progress in their ability to meet GPRAMA reporting requirements. Specifically, we found that in the most recent quarterly progress update, the selected CAP goal teams all reported prospects and strategies for performance improvement, and reported more performance information in the form of new performance measures, targets, baseline data, and trend information, if available.
However, because most of the targets established are annual or 4-year targets or data for certain performance measures were unavailable, CAP goal teams and OMB staff told us that milestones provided the most relevant information on quarterly progress for the CAP goals. According to OMB and PIC staff, milestones have also helped the goal teams reach agreement on respective roles and responsibilities, and have helped agencies align their activities with the strategies to implement the goal. Within the first two years of implementation, CAP goal teams reported having completed milestones that cover a range of activities, including those related to project management, piloting new programs, development of information technology, and other tools to support implementation, as shown in the examples below. Smarter IT Delivery CAP goal. In October 2015, the CAP goal team launched a pilot to develop a Digital Service Contracting Professional Training and Development Program to improve the process of IT acquisition and digital services. Lab-to-Market CAP goal. In the third quarter of fiscal year 2015, the Department of Energy launched the Small Business Voucher pilot, creating a single point of access for small businesses seeking the agency’s laboratory resources for clean energy projects. People and Culture CAP goal. In October 2015, OPM launched the web-based Hiring Toolkit, which provides policy and technical guidance, information on roles and responsibilities, and an inventory of hiring flexibilities for human resource staff and managers in federal agencies. As directed by OMB, goal teams are also reporting milestones that are at risk of not meeting scheduled due dates or face barriers to completion. During the CAP goal period, goal teams reported delays for various reasons, including coordination across agency boundaries, availability of staffing, and challenges with information technology.
Teams also reported changes in strategy or approach that altered time frames or required revisions to existing milestones. For example, in the first quarter of fiscal year 2015, the Open Data CAP goal team missed a milestone to refresh its metrics on data quality and machine readability due to technical challenges that delayed the launch of critical tools. The team reported completing this milestone in the following quarter. In another example, the Job-Creating Investment CAP goal team revised a milestone due date to allow more time to evaluate the effectiveness and impact of its processes to assist potential investors in meeting their objectives. The goal team reported in the fourth quarter of fiscal year 2015 that it completed this milestone according to the revised timeline. Selected CAP Goal Teams Have Emphasized Alignment of Milestones with Strategies and Results Milestones are a recognized approach for tracking interim progress toward a goal as long as they are clearly linked to the desired outcome. OMB Circular A-11 states that if milestones are used as a performance goal, they must be described in a way that makes it possible to discern whether progress is being made toward the goal. Figure 8 provides examples of a performance measure and a milestone for two different CAP goals. According to OMB and PIC staff managing the CAP goals, they are working with goal teams to develop strategies to improve performance on crosscutting issue areas and to develop concrete milestones. In their guidance to CAP goal teams, OMB and the PIC directed goal teams to design and report a high-level, goal-to-strategy crosswalk highlighting the key elements of the goal or sub-goal, such as the related major actions to achieve impact and key performance measures, if available.
According to the guidance, once the team has identified sub-goals and developed a high-level strategy, the goal team should identify and report the quarterly status of related milestones, which should be linked to a sub-goal or major action to illustrate how the milestones contribute to the achievement of the goal. Our review of the selected CAP goal quarterly progress updates found that all seven CAP goal teams followed the guidance and identified sub-goals with major actions and performance measures, where possible. As designed, the reporting template provides flexibility for the CAP goal teams to organize and report this information, and the CAP goal teams generally reported on how their milestones were linked with their related sub-goals. CAP goal teams that used their quarterly progress updates to clearly articulate a strategy to achieve the sub-goal, along with any barriers or challenges and the associated actions and milestones, are better positioned to demonstrate progress. As shown in figure 9, the People and Culture CAP goal quarterly progress update demonstrated the link between the milestones and the results the team intends to achieve by identifying specific milestones or major actions and performance measures for each sub-goal or outcome. OPM staff for the People and Culture CAP goal told us that aligning sub-goals and milestones allows them to see how all milestones are contributing to the overall needs of the CAP goal. It also allows them to review the information at the sub-goal level and roll the information up to the CAP goal level when necessary. Taken together, the sub-goals, milestones, and performance measures, if available, should demonstrate how the goal team is measuring progress related to the broader CAP goal.
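The sub-goal-to-CAP-goal rollup that OPM staff described can be illustrated with a minimal sketch. The data structures and names below are hypothetical, invented for illustration; they are not part of any goal team's actual tooling, and a real rollup might weight milestones by significance rather than counting them equally.

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    description: str
    completed: bool = False  # quarterly reported status

@dataclass
class SubGoal:
    name: str
    milestones: list = field(default_factory=list)

    def completion_rate(self) -> float:
        """Share of this sub-goal's milestones reported complete."""
        if not self.milestones:
            return 0.0
        return sum(m.completed for m in self.milestones) / len(self.milestones)

def roll_up(sub_goals: list) -> dict:
    """Review progress at the sub-goal level, then roll it up
    to a single CAP-goal-level figure (unweighted milestone count)."""
    by_sub_goal = {sg.name: sg.completion_rate() for sg in sub_goals}
    total = sum(len(sg.milestones) for sg in sub_goals)
    done = sum(m.completed for sg in sub_goals for m in sg.milestones)
    by_sub_goal["CAP goal overall"] = done / total if total else 0.0
    return by_sub_goal
```

Under this sketch, a sub-goal with one of two milestones complete reports 0.5, and the overall figure treats every milestone equally across sub-goals.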
An effective tool to demonstrate that the goal team's activities are contributing to the goal's desired long-term outcome is a model that describes the logical relationship between an agency's inputs, activities, outputs, and outcomes. By making those linkages explicit, decision makers can have more focused and meaningful discussions about how proposed strategies or milestones are tied to desired results, and how to measure the success of strategy execution and impact. Identifying, agreeing on, and understanding the logical relationship between the goal team's activities and the desired outcome is an important step in developing a sound and cohesive plan for implementation. As we have previously reported, the relationship between activities—such as milestones—and outcomes should be periodically assessed through program evaluations or other methods to determine if outcomes are being achieved as expected, and should be revised as necessary. The quarterly progress reviews of CAP goals required by GPRAMA are one vehicle for assessing implementation of the goals and adjusting strategies as needed. As directed by OMB, any adjustments to the goal strategy or next steps should be clearly reported in the subsequent quarterly progress updates published on Performance.gov to allow the public and decision makers to easily follow the implementation strategy and goal progress. Selected CAP Goal Teams Can Be More Transparent about the Status of Their Efforts to Develop Performance Measures All of the goal teams we reviewed reported efforts to develop performance measures as part of their strategy to improve performance on these complex and crosscutting issues. Performance goals and measures allow CAP goal teams and other stakeholders to track the progress they are making toward their goals and provide critical information on which to base decisions for improving programs or activities.
OMB and PIC staff overseeing the CAP goals told us they are aware that improvement can be made in goal teams’ tracking and reporting of progress using performance measures, and they hope to continue to improve goal teams’ capacity to track performance. OMB and the PIC directed CAP goal teams to select or develop performance measures that are relevant, well defined, timely, reliable, and capable of being influenced by the actions of contributing organizations. We found that three of the seven selected goal teams reported specific actions they have completed, or are completing, to develop performance measures. The actions included identifying existing and needed data, collecting new data, testing and validating the data for reliability, and setting the baseline for measurement. For example: Throughout the goal period, the STEM Education CAP goal team consistently reported on the steps they were taking to develop performance measures, and we were able to track their efforts from one quarter to the next. For example, the goal team identified the need to collect additional data to develop performance measures through a long-term survey. In the fourth quarter of 2015, the goal team reported they had completed some milestones—and others were on track—related to conducting in-depth testing to add a new item on undergraduate mathematics instruction for the High School Longitudinal Survey (see figure 10). The goal team reported three related milestones that, according to the goal team, should provide survey results by December 2017. At that point the team expects to be able to test and validate a new performance measure and establish a baseline and targets for future measurement. The Job-Creating Investment CAP goal team reported that they are collecting baseline data and trends to ensure the methodology supporting future performance measures is valid. The goal team reported that it conducted a feasibility study to capture data on foreign direct investment in the U.S. 
through a new evaluation system and database. It also reported that it is working to collect data on client satisfaction with the services received. The CAP goal team plans to use these sources of data to measure the impact of its work. The Customer Service CAP goal team reported steps it plans to take to develop a measure tracking the percentage of customer-facing federal programs showing improvements in their customer feedback data. For example, the goal team identified milestones, extending into fiscal year 2017, to identify the core customer-facing federal programs that have a plan to improve customer service and collect customer service feedback data. The goal team reported that these programs would conduct a self-assessment and develop strategies to improve customer service, followed by an assessment of the data to establish baseline customer feedback data. For the other four CAP goals we reviewed, the goal teams told us about the actions they are taking to develop performance measures. However, their quarterly reports on Performance.gov did not contain this information. For example, the Lab-to-Market CAP goal leaders told us that they are conducting public literature reviews and commissioning studies that may help the goal team identify potential quantitative and qualitative performance measures to assess the economic impact of federal technology transfer activities. The goal team reported having completed milestones related to this effort in 2014 and 2015, but has not reported additional steps it has taken, or plans to take, to test or validate the results of the studies. As a result, we could not determine if the goal team is making progress toward establishing performance measures.
Conclusions The CAP goals we reviewed are intended to drive progress in important and complex areas, such as improving information technology and customer service interactions across government, coordinating federal STEM education efforts, and improving federal hiring practices. We found that OMB's updated guidance to CAP goal teams and the quarterly reporting template have assisted goal teams in managing implementation of the goals and in meeting the GPRAMA reporting requirements. Further, OMB's and the PIC's efforts to build the capacity of the CAP goal teams to implement the goals have resulted in increased leadership attention and improved interagency collaboration on these goals. We found that CAP goal teams are meeting a number of GPRAMA reporting requirements, including identifying contributors and reporting quarterly results and milestones. However, most of the CAP goal teams we reviewed have not established quarterly targets for all performance measures. In an effort to assist each team with addressing its reporting gaps, OMB and the PIC are working to improve the collection and reporting of performance information. We found that during the first 2 years of goal implementation, OMB and the CAP goal teams improved their quarterly progress updates by adding new performance measures, providing additional information on performance trends, and reporting on new prospects for performance improvement, among other improvements. CAP goal teams told us that they are using milestones to track and report progress quarterly. We generally found that the goal teams are aligning their quarterly activities with longer-term strategies to achieve the desired goal outcomes. Further, all of the selected CAP goal teams reported that they are working to develop performance measures, and are at various stages of the process. However, they are not consistently reporting on their efforts to develop these measures.
Given OMB’s, the PIC’s, and CAP goal teams’ emphasis on developing measures that are relevant and well defined, greater transparency is needed to track goal team’s efforts quarterly. Because the CAP goal teams are working to implement policies and activities that span multiple agencies, and in some cases government-wide, it is important that the goal teams clearly communicate the steps they are taking to develop performance measures to ensure that the measures will be aligned with major activities and clearly understood by contributors to the goals. With improved performance information, the CAP goal teams will be better positioned to demonstrate the progress that they are making, and will help ensure goal achievement at the end of the 4-year goal period in 2018. Recommendation for Executive Action To improve the transparency of public reporting on CAP goal progress, we recommend that the Director of OMB, working with the PIC, take the following action: Report on Performance.gov the actions that CAP goal teams are taking, or plan to take, to develop performance measures and quarterly targets. Agency Comments and Our Evaluation We provided a draft of this report for review and comment to the Director of OMB, the Director of the Office of Science and Technology Policy (OSTP), and the Secretaries of the Departments of Commerce, Energy, State, Transportation, and Veterans Affairs, and the Directors of the National Science Foundation and the Office of Personnel Management, and the Commissioner of the Social Security Administration. On May 3, 2016, OMB staff provided us with oral comments on the report. OMB staff generally agreed with the recommendation in the report, and provided us with technical clarifications, which we have incorporated as appropriate. Officials from the Departments of Energy and Commerce provided technical clarifications, which we incorporated, as appropriate. 
OSTP, the Departments of State, Transportation, and Veterans Affairs, the National Science Foundation, the Office of Personnel Management, and the Social Security Administration had no comments on the report. The written response from the Social Security Administration is reproduced in appendix III. We are sending copies of this report to the Director of OMB and the heads of the agencies we reviewed, as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Selected Cross-Agency Priority (CAP) Goals and Related Sub-goals

Customer Service. Increase citizen satisfaction and promote positive experiences with the federal government by making it faster and easier for individuals and businesses to complete transactions and receive quality services. Sub-goals: 1. Improve top customer service interactions; 2. Develop and implement standards, practices, and tools; 3. Feedback and transparency; 4. Focus on the frontline.

Job-Creating Investment. Encourage foreign direct investment, spurring job growth by improving federal investment tools and resources, while also increasing interagency coordination. Sub-goals: 1. Promote and market the United States as the premier investment destination; 2. 3.

Lab-to-Market. Increase the economic impact of federally-funded research and development (R&D) by accelerating and improving the transfer of new technologies from the laboratory to the commercial marketplace.

Open Data. Fuel entrepreneurship and innovation, and improve government efficiency and effectiveness by unlocking the value of government data and adopting management approaches that promote interoperability and openness of these data.

People and Culture. Innovate by unlocking the full potential of the workforce we have today and building the workforce we need for tomorrow.

Smarter IT Delivery. Eliminate barriers and create new incentives to enable the federal government to procure, build, and provide world-class, cost-effective information technology (IT) delivery for its citizens, and hold agencies accountable to modern IT development and customer service standards.

STEM Education. Improve Science, Technology, Engineering, and Mathematics (STEM) Education by implementing the Federal STEM Education 5-Year Strategic Plan, announced in May 2013.

Appendix II: Objectives, Scope, and Methodology

The GPRA Modernization Act of 2010 (GPRAMA) included a provision for us to periodically report on the implementation of the act, and this report is part of that series. The objectives of this report are to assess (1) the extent to which lessons learned from implementing the interim cross-agency priority (CAP) goals were incorporated into the governance of the current CAP goals; (2) the extent to which GPRAMA requirements for reporting on CAP goal progress are included in the selected CAP goal quarterly progress updates; and (3) the initial reported progress in implementing the selected CAP goals. In September 2015, we reported on lessons learned from the interim CAP goal period—which ended in March 2014—and provided an assessment of CAP goal teams' initial reported progress implementing the current set of 2014-2018 CAP goals. To conduct our assessment, we selected 7 of the 15 CAP goals for more in-depth review. We randomly selected four of the seven CAP goals: Open Data, STEM Education, Job-Creating Investment, and Lab-to-Market. We included the other three goals because of our completed and ongoing work related to the Customer Service, People and Culture, and Smarter IT Delivery CAP goals.
Our sample of goals is nongeneralizable and is not representative of all CAP goals. We interviewed Office of Management and Budget (OMB) and Performance Improvement Council (PIC) staff responsible for the management and implementation of the current CAP goals, as well as agency officials and staff, including CAP goal leaders and members of the seven CAP goal teams. We requested meetings with representatives from the Presidential Personnel Office and the National Economic Council, who have responsibility for leading the People and Culture and Job-Creating Investment CAP goals. Staff from those offices directed us to obtain feedback on these goals from responsible staff at OMB. We reviewed OMB and PIC guidance, relevant documentation, our prior related work, and quarterly progress updates published on Performance.gov from the second quarter of fiscal year 2014 through the first quarter of fiscal year 2016, published in March 2016. To determine the extent to which selected CAP goal quarterly progress updates reflect GPRAMA requirements for assessing and reporting CAP goal progress, we conducted a content analysis—using NVivo software and Excel—of the quarterly updates from the second quarter of fiscal year 2015 through the first quarter of fiscal year 2016. Specifically, we reviewed each quarterly progress update and coded the content into one or more of the five major actions (as described below) to implement a CAP goal, which we developed from an initial review of the selected quarterly progress updates and of OMB and PIC guidance to CAP goal teams.
Purpose - establishing the goal;
Governance - identification of leaders and contributors;
Planning - development of strategies and performance measures;
Implementation - execution of plans and use of performance information; and
Results - reporting results on Performance.gov.
To improve the validity of results, each quarterly progress update was coded independently by two of our analysts, who then compared their analyses in three stages.
In cases where there was disagreement, the two coders met and either came to agreement on a code, or a third coder made a judgment about correct coding. In the first stage of content analysis, the coders independently assigned the content in each of the select CAP goal quarterly progress updates into one or more of the five actions of goal implementation described above. For the second stage of content analysis, the coders assigned the relevant GPRAMA requirements to one or more of the five actions of goal implementation. During the first two stages of content analysis, we determined if an element or requirement was present in the plan (one or more coded entries). For the third and final stage of content analysis, we assessed the extent to which the progress update met the reporting requirements. To determine the extent to which the seven selected CAP goals’ quarterly progress updates reflect GPRAMA requirements, we conducted a content analysis of the four quarterly updates published from March 2015 through December 2015. To fully meet each requirement, the quarterly progress updates had to provide all information required by GPRAMA. We analyzed 13 requirements – 6 that we determined were relevant at the CAP goal level, and 7 that we determined were relevant at the sub-goal level. For those requirements analyzed at the sub-goal level, we determined a goal had partially met a requirement if the required information was present in the quarterly progress update for at least one of the goal’s sub-goals. For requirements that were not met, we determined the required information was not present in the quarterly progress update. We shared our analysis with each of the selected CAP goal teams and interviewed agency staff to collect their feedback on our analysis. We also collected and reviewed relevant documentation provided by agencies, OMB, and the PIC. 
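The two-coder comparison step described above can be sketched in a few lines of code. The section identifiers and code labels below are invented for illustration; they do not come from our actual NVivo workflow, which this sketch only approximates.

```python
def reconcile(coder_a: dict, coder_b: dict):
    """Compare two analysts' independent coding of the same update.

    Each argument maps a section of a progress update to the set of
    implementation actions (Purpose, Governance, Planning,
    Implementation, Results) that the analyst assigned to it.
    Sections the coders agree on are accepted; the rest are routed
    to a third coder for adjudication.
    """
    agreed, needs_third_coder = {}, []
    for section in sorted(coder_a.keys() | coder_b.keys()):
        a, b = coder_a.get(section), coder_b.get(section)
        if a == b and a is not None:
            agreed[section] = a
        else:
            needs_third_coder.append(section)
    return agreed, needs_third_coder
```

The same comparison logic applies at each of the three stages; only what is being coded (actions, requirements, extent met) changes.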
One of the GPRAMA reporting requirements is for OMB, in coordination with CAP goal teams, to identify and publish on Performance.gov the key agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities that contribute to each CAP goal. To determine the extent to which selected CAP goals met this requirement, we reviewed the published lists of contributors in the quarterly progress updates and interviewed CAP goal teams and OMB staff responsible for managing the CAP goals. We did not evaluate the process that goal teams used to identify the contributors. To assess the extent to which the selected CAP goal teams made progress during the first 2 years of implementation, from March 2014 through December 2015, we reviewed the quarterly progress updates published on Performance.gov and relevant agency documents, and interviewed CAP goal team staff to discuss challenges, efforts to mitigate challenges, and key achievements. Because the scope of our review was to examine the implementation of quarterly reporting requirements, we did not evaluate whether these goals were appropriate indicators of performance, were sufficiently ambitious, or met other dimensions of quality. We conducted our work from January 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Comments from the Social Security Administration Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact J. Christopher Mihm, (202) 512-6806 or mihmj@gao.gov. Staff Acknowledgments In addition to the contact named above, Sarah E.
Veale, Assistant Director, and Peter Beck, Analyst-in-Charge, supervised the development of this report. Margaret Adams, Jehan Chase, Jillian Feirson, Jennifer Felder, and Steven Putansu made significant contributions to all aspects of this report. Lisette Baylor, Adam Cowles, Robert Gebhart, David Hinchman, Donna Miller, Christopher Murray, Nyree Ryder Tee, and MaryLynn Sergent provided additional assistance.
Given the significance and complexity of the CAP goals, it is important that CAP goal contributors, Congress, and the public are able to track how CAP goal teams are making progress. This report is one in a series in response to a statutory provision to review GPRAMA implementation. It assesses (1) the extent to which lessons learned from implementing the interim CAP goals were incorporated into the governance of the current CAP goals; (2) the extent to which GPRAMA requirements for reporting on CAP goal progress are included in the selected CAP goal quarterly progress updates; and (3) the initial progress in implementing the selected CAP goals. GAO selected 7 of the 15 CAP goals to review (Customer Service, Job-Creating Investment, Lab-to-Market, Open Data, People and Culture, Smarter IT Delivery, and STEM Education). GAO assessed those goals' quarterly progress updates against relevant GPRAMA requirements, reviewed OMB and PIC guidance to CAP goal teams, and interviewed OMB, PIC, and CAP goal staff. The GPRA Modernization Act of 2010 (GPRAMA) requires the Office of Management and Budget (OMB) to coordinate with agencies to develop cross-agency priority (CAP) goals, which are 4-year outcome-oriented goals covering a number of complex or high-risk management and mission issues. For the current set of CAP goals, covering the period from 2014 to 2018, OMB and the interagency Performance Improvement Council (PIC) incorporated lessons learned from the 2012-2014 interim CAP goal period to improve the governance and implementation of these crosscutting goals. For example, OMB and the PIC changed the CAP goal governance structure to include agency leaders, and they hold regular senior-level reviews of CAP goal progress. They also provide ongoing assistance to CAP goal teams, such as by helping teams develop milestones and performance measures.
Based in part on prior GAO recommendations, OMB and the PIC updated their guidance to assist CAP goal teams in managing the goals and in meeting GPRAMA reporting requirements. CAP goal teams told GAO that the CAP goal designation increased leadership attention and improved interagency collaboration on these issues. GAO's assessment of the selected CAP goals' quarterly progress updates—published on Performance.gov—determined that CAP goal teams are meeting a number of GPRAMA reporting requirements, including identifying contributors and reporting strategies for performance improvement and quarterly results. However, most of the selected CAP goal teams have not established quarterly targets as required by GPRAMA, although they are consistently reporting the status of quarterly milestones to track goal progress. GAO found that the selected goal teams are aligning their quarterly milestones with strategies to achieve the desired goal outcomes. Further, all of the selected CAP goal teams reported that they are working to develop performance measures, and are at various stages of the process. However, the selected CAP goal teams were not consistently reporting on their efforts to develop performance measures. Given OMB's, the PIC's, and the CAP goal teams' emphasis on developing measures that are relevant and well defined, greater transparency is needed to track the goal teams' efforts on a quarterly basis. With improved performance information, the CAP goal teams will be better positioned to demonstrate goal progress at the end of the 4-year goal period.
Addressing Serious Financial Management Problems IRS prepares separate sets of financial statements showing the results of its operations for (1) administrative operations, which include $8 billion in payroll and other expenses, and (2) custodial functions, which reflect $1.4 trillion in tax collections. IRS began preparing these annual statements starting with those for fiscal year 1992 as part of a pilot program under the CFO Act of 1990. We have been unable to express an opinion on the reliability of these financial statements for any of the 4 fiscal years from 1992 through 1995. We identified fundamental problems with both the administrative and the custodial financial statements, and IRS has not yet fully corrected them. Until resolved, these problems will continue to prevent us from expressing an opinion on IRS' financial statements in the future. The following sections outline these problems and IRS' improvement plans and progress. Accounting for Administrative Operations Has Improved but Problems Remain Each year, IRS spends billions of dollars in operating expenses to (1) process tax returns, provide taxpayer assistance, and manage tax programs, (2) enforce tax laws, and (3) develop and maintain information systems. For fiscal year 1995, IRS reported $8.1 billion in operating costs, including $5.3 billion for payroll and other personnel costs and $2.8 billion for the cost of goods and services, such as rent, printing, and acquiring and maintaining automatic data processing equipment. Our initial financial audits identified serious problems in accounting for and reporting on IRS administrative operations, and IRS has since made improvements in these areas.
For example, IRS has successfully implemented a financial management system (which, according to Treasury, conforms to the government's Standard General Ledger) to account for its appropriated funds, which has helped IRS correct some of its past transaction processing problems that diminished the accuracy and reliability of its cost information. IRS also transferred its payroll processing to the Department of Agriculture's National Finance Center and, as a result, improved its accounting for payroll expenses. These improvements have made IRS' accounting for its administrative operations much better today than it was 4 years ago. For example, we are now able to substantiate IRS' payroll expenses of about $5 billion. However, the following two major problems still need to be fully corrected. A significant portion of IRS' reported $3 billion in nonpayroll operating expenses for goods and services could not be verified. The amounts IRS reported as appropriations available for expenditure for operations could not be reconciled fully with Treasury's central accounting records showing these amounts, and in the past, hundreds of millions of dollars in gross differences had been identified. Receipt of Goods and Services We found several problems in attempting to substantiate amounts IRS reported as having been spent for goods and services. IRS did not have support for whether and when certain goods or services were received and, in other instances, did not have support for reported expense amounts. For example, IRS accepts Government Printing Office (GPO) bills as being accurate and records an expense in its financial records without first verifying that the printing goods and services being billed were actually delivered and accepted. Also, in instances where IRS could provide information showing proper receipt and acceptance of goods and services, expenses were often recorded in the wrong fiscal year.
This problem occurs because (1) IRS offices that receive and accept goods and services do not always forward to IRS accounting offices the evidence supporting these actions and (2) IRS accounting offices used inconsistent, and in some cases incorrect, policies and procedures for recording expenses. Verifying that goods and services have been received and properly accounting for them are fundamental accounting steps and controls. Over the past 4 years, we have recommended that IRS revise its procedures to incorporate the requirements that accurate receipt and acceptance data on invoiced items be obtained prior to payment and that supervisors ensure that these procedures are carried out, and revise its document control procedures to require IRS units that actually receive goods and services to promptly forward receiving reports to accounting offices so that these transactions can be properly accounted for. IRS believes the core issue in correcting its receipt and acceptance problems relates to properly accounting for transactions with other federal agencies.
IRS plans to address this issue by
- completely and accurately documenting its current accounting systems and control procedures for procuring, receiving, accepting, and paying for goods and services through other federal agencies, such as GPO and the General Services Administration, and for recording the related budgetary, expense, and cash disbursement transactions;
- identifying and evaluating the reliability of available documentary evidence and systems, which until this point have been developed and utilized primarily to meet operational rather than financial reporting objectives;
- working with other federal agencies to explore ways to improve the timeliness, nature, and extent of documentation supporting interagency payments that would allow IRS to properly account for these interagency transactions; and
- developing both short- and long-term improvements to its accounting systems and control procedures, including modifications to its automated systems to allow for direct interfaces between its operating systems and its general ledger accounting system.
IRS is now beginning to deal with this problem in a comprehensive way. To that end, it has engaged an accounting firm for assistance in carrying out this plan. We are closely monitoring IRS' and its contractor's progress because only through an intense, concerted effort will the proposed solutions be implemented in time for the fiscal year 1996 audit. Fund Balance Reconciliation Issues Also, we could not verify the accuracy of IRS' Fund Balance with Treasury accounts that are related to IRS' appropriation accounts for its operations. The Fund Balance accounts are used to record cash receipts and cash disbursements for these appropriations. These accounts are much like checking accounts with a bank, and their balances represent the amount of appropriations available to IRS for expenditure.
Accordingly, like bank checking accounts, each month, these accounts must be reconciled with the bank’s records, and any differences reported to the bank. In this case, the banker is the Treasury and the differences are great. These accounts have been unreconciled in each of the years we have audited IRS’ financial statements. The net reconciling differences are made up of gross differences in the hundreds of millions of dollars. For example, we reported last year that IRS was researching $13 million in net differences that consisted of $661 million of increases and $674 million of decreases. We have recommended that IRS promptly resolve differences between IRS and Treasury records of IRS’ cash balances and adjust accounts accordingly and promptly investigate and record suspense account items to appropriate appropriation accounts. In fiscal year 1995, IRS hired a contractor to provide information on the differences between IRS and Treasury records through fiscal year 1995 and established a task force to resolve the differences the contractor identified. IRS found that documentation was no longer available to resolve prefiscal year 1993 differences, which resulted in $10 million of net positive cash reconciling differences being written off. IRS has not yet completed the research necessary to resolve fiscal year 1993, 1994, and 1995 differences. Further, additional research is required to resolve differences held in IRS’ Suspense Accounts and Budget Clearing Accounts at Treasury. To this end, IRS has developed plans to complete its posting of adjustments to its appropriation accounts for fiscal year 1995 based on our review of these adjustments, and engage a contractor to assist in completing its reconciliation of balances remaining in its Budget Clearing Accounts and Suspense Accounts. IRS plans to complete the necessary adjustments to its records and Treasury’s records prior to the closing of its books for fiscal year 1996. 
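The mechanics of such a reconciliation, and how a modest net difference can mask much larger gross differences, can be sketched in a few lines. The record layout and amounts below are illustrative, not IRS data; they simply mirror the shape of the reported figures (a small net made up of large offsetting increases and decreases).

```python
# Minimal sketch of an account reconciliation. A small net difference
# can conceal large gross differences, as in the reported $13 million
# net composed of $661 million of increases and $674 million of decreases.

def reconcile(agency_items, treasury_items):
    """Compare two lists of (item_id, amount) records and return
    (net_difference, gross_increases, gross_decreases)."""
    agency = dict(agency_items)
    treasury = dict(treasury_items)
    increases = decreases = 0
    for item in set(agency) | set(treasury):
        diff = agency.get(item, 0) - treasury.get(item, 0)
        if diff > 0:
            increases += diff
        elif diff < 0:
            decreases += -diff
    return increases - decreases, increases, decreases

# Hypothetical figures (in millions) echoing the reported magnitudes:
net, ups, downs = reconcile(
    [("a", 661), ("b", 100)],
    [("a", 0), ("b", 100), ("c", 674)],
)
print(net, ups, downs)  # -13 661 674
```

The point of tracking gross increases and decreases separately is that a near-zero net balance says little on its own: every individual difference must still be researched and resolved.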
In addition to completing this research, IRS must ensure that effective processes and procedures are in place to routinely reconcile its Fund Balance with Treasury accounts. In this regard, IRS has created a unit to manage the reconciliation of these accounts on an ongoing basis. Overall, IRS' success in resolving the basic accounting and control issues involving its administrative operations will be indicative of its commitment and ability to resolve larger and more complex issues involving accounts receivable and revenue accounting.

Accounts Receivable Could Not Be Verified

We could not verify the validity of either the $113 billion of accounts receivable or the $46 billion of collectible accounts receivable that IRS reported on its fiscal year 1995 financial statements. In our audit of IRS' fiscal year 1992 financial statements, after performing a detailed analysis of IRS' receivables as of June 30, 1991, we estimated that only $65 billion of about $105 billion in gross reported receivables that we reviewed was valid for financial reporting purposes and that only $19 billion of the valid receivables was collectible. At the time, IRS had reported that $66 billion of the $105 billion was collectible. In our audit of IRS' fiscal year 1992 financial statements, we recommended that IRS take steps to ensure the accuracy of the balances reported in its financial statements by, in the long term, identifying which assessments currently recorded in the masterfile represent valid receivables and designating new assessments that should be included in the receivables balance as they are recorded. We recommended also that, until these capabilities are implemented, IRS rely on statistical sampling to determine what portion of its assessments represent valid receivables.
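The basic idea behind such a statistical sampling estimate can be sketched as follows. All of the figures, sample sizes, and field names below are hypothetical, used only to illustrate the technique: audit a random sample of assessments, measure what share of the sampled dollars is valid, and project that share onto the total book value.

```python
import random

# Illustrative sketch of estimating valid receivables by statistical
# sampling. Population, amounts, and the 60% validity rate are invented.

random.seed(42)

# A population of 10,000 recorded assessments of $10,000 each; in this
# illustration roughly 60% are "valid" financial receivables.
population = [{"amount": 10_000, "valid": random.random() < 0.6}
              for _ in range(10_000)]

sample = random.sample(population, 500)  # audit a random sample of 500

# Dollar-weighted share of the sample found to be valid receivables.
valid_share = (sum(a["amount"] for a in sample if a["valid"])
               / sum(a["amount"] for a in sample))

book_value = sum(a["amount"] for a in population)
estimated_valid = book_value * valid_share  # projection onto the book value
print(f"estimated valid receivables: ${estimated_valid:,.0f}")
```

The estimate is only as good as the sampling procedure: errors in drawing the sample or in classifying the sampled items, of the kind described in the audits, render the projection unreliable.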
Subsequently, we helped IRS develop a statistical sampling method that, if properly applied, would allow it to reliably estimate and report valid and collectible accounts receivable on its financial statements. We evaluated and tested IRS’ use of the method as part of our succeeding financial audits and found that IRS made errors in carrying out the statistical sampling procedures, which rendered the sampling results unreliable. For the fiscal year 1995 audit, for the first time, IRS tried, also without success, to specifically identify its accounts receivable. Further, IRS’ accounting and reporting for accounts receivable is hampered by the limitations of its financial management system. IRS’ system is not designed to specifically identify and separately track from detailed taxpayer records those owing taxes reportable as accounts receivable. To mitigate this system’s limitation in fiscal year 1995, IRS reported accounts receivable by using the uncollected assessment information from its computer system’s master files, which were automatically sorted into either compliance assessments or financial receivables. In this way, IRS planned to identify the amount specifically related to financial receivables and report it as valid accounts receivable as of September 30, 1995. However, when we tested a sample of the automated sorting results, we found cases in which the financial management system’s data were incorrect, and thus, did not properly segregate compliance assessments from financial receivables. We identified instances in which compliance assessments were classified as financial receivables, and thus, incorrectly included as accounts receivable; and other cases in which financial receivables were classified as compliance assessments, and thus, improperly excluded from accounts receivable. Based on the testing results, we concluded that the process IRS used in 1995 was unreliable for projecting the total inventory of outstanding assessments. 
Consequently, the accounts receivable reported on the fiscal year 1995 financial statements could not be relied on. IRS' plans call for improving accounts receivable reporting in the short term by analyzing, by September 30, 1996, its inventory of uncollected assessments to determine ways to resolve issues concerning the financial management system's underlying data limitations and reliably determining, by January 6, 1997, the estimated amount of accounts receivable that is collectible. Also, IRS needs to review and update current policies and procedures for maintaining documentation supporting accounts receivable, and when necessary, train employees to properly record detailed taxpayer transactions. Currently, IRS is reviewing its policies for retaining documentation supporting accounts receivable. In addition, IRS will be challenged to fully meet the federal accounting standards for accounts receivable, which become effective for fiscal year 1998. IRS will need to design its financial management system to

- analyze all outstanding amounts to properly identify and report valid accounts receivable and the amount expected to be collected;
- track all activity affecting IRS' accounts receivable balance, including collections as a result of enforcement efforts, tax abatements, and aging of receivables; and
- provide dollar information about its compliance assessments.

Accounting for Revenue

Our audit of IRS' fiscal year 1995 financial statements found that the amounts of total revenue (reported to be $1.4 trillion for fiscal year 1995) and tax refunds (reported to be $122 billion for fiscal year 1995) could not be verified or reconciled to accounting records maintained for individual taxpayers in the aggregate and the amounts reported for various types of taxes collected (social security, income, and excise taxes, for example) could not be substantiated.
Our financial audits have found that IRS' financial statement amounts for revenue, in total and by type of tax, were not derived from its revenue general ledger accounting system or its master files of detailed individual taxpayer records. The revenue accounting system does not contain detailed information by type of tax, such as individual income tax or corporate tax, and the master file cannot summarize the taxpayer information needed to support the amounts identified in the system. As a result, IRS relied without much success on alternative sources, such as Treasury schedules, to obtain the summary total by type of tax needed for its financial statement presentation. To substantiate the Treasury figures, our audits attempted to reconcile IRS' master files, the only detailed records available of tax revenue collected, with Treasury records. For fiscal year 1994, for example, we found that IRS' reported total of $1.3 trillion for revenue collections taken from Treasury schedules was $10.4 billion more than what was recorded in IRS' master files. Because IRS was unable to satisfactorily explain, and we could not determine, the reasons for this difference, the full magnitude of the discrepancy remains uncertain. In addition to the difference in total revenues collected, we also found large discrepancies between information in IRS' master files and the Treasury data used for the various types of taxes reported in IRS' financial statements. For fiscal year 1994, for example, some of the larger reported amounts in IRS' financial statements for which IRS had insufficient support were

- $615 billion in individual taxes collected, which was $10.8 billion more than what was recorded in IRS' master files;
- $433 billion in social security insurance taxes collected, which was $5 billion less than what was recorded in IRS' master files; and
- $148 billion in corporate income taxes, which was $6.6 billion more than what was recorded in IRS' master files.
Thus, IRS did not know and we could not determine if the reported amounts were correct. These discrepancies also further reduce our confidence in the accuracy of the amount of total revenues collected.

Causes of IRS' Revenue Accounting Problem

Contributing to these discrepancies is a fundamental problem in the way tax payments are reported to IRS. About 80 percent, or about $1.1 trillion, of total tax payments are made by businesses and typically include (1) taxes withheld from employees' checks for income taxes, (2) Federal Insurance Contributions Act (FICA) collections, and (3) the employer's matching share of FICA. IRS requires business taxpayers to make tax payments using federal tax deposit coupons. The payment coupons identify the type of tax return to which they relate (such as a Form 941, Quarterly Wage and Tax Return) but do not specifically identify either the type of taxes being paid or the individuals whose tax withholdings are being paid. For example, a payment coupon indicating that a deposit relates to a Form 941 return can cover payments for employees' tax withholding, FICA taxes, and an employer's FICA taxes. Because only the total dollars being deposited are indicated on the coupon, IRS knows that the entire amount relates to a Form 941 return but does not know how much of the deposit relates to the different kinds of taxes covered by that type of return. Consequently, at the time tax payments are made, IRS is not provided information on the ultimate recipient of the taxes collected. Furthermore, the type of tax being collected is not distinguished early in the collection stream. This creates a massive reconciliation process involving billions of transactions and subsequent tax return filings. For example, when an individual files a tax return, IRS initially accepts amounts reported as a legitimate record of a taxpayer's income and taxes withheld.
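The information loss on a deposit coupon can be illustrated with a hypothetical record layout (the field names and dollar amounts below are invented for illustration): the coupon carries only the return type and a total, so the split across kinds of taxes must be reconstructed later from the filed return.

```python
# Hypothetical sketch contrasting what an employer withheld with what a
# federal tax deposit coupon actually conveys. Field names and amounts
# are illustrative, not an actual coupon format.

# What the employer actually owes for the quarter, by kind of tax:
payroll_detail = {
    "income_tax_withheld": 12_000,
    "employee_fica": 4_000,
    "employer_fica": 4_000,
}

# What the coupon conveys: only the related return type and a total.
coupon = {
    "return_type": "Form 941",
    "total": sum(payroll_detail.values()),
}

print(coupon)  # {'return_type': 'Form 941', 'total': 20000}

# The per-tax split is gone at the moment of deposit; it can only be
# recovered later by matching the deposit against the quarterly return.
assert "income_tax_withheld" not in coupon
```

This is why the type of tax is not distinguished early in the collection stream: the allocation between income tax withholding and the two FICA shares only becomes knowable when the return is filed and matched.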
For IRS' purposes, these amounts represent taxes paid because they cannot be readily verified against the taxes reported by an individual's employer as having been paid. At the end of each year, IRS receives information on individual taxpayers' earnings from the Social Security Administration. IRS compares the information from the Social Security Administration to the amounts reported by taxpayers with their tax returns. However, this matching process can take 2-1/2 years or more to complete, making IRS' efforts to identify noncompliant taxpayers extremely slow and significantly hindering IRS' ability to collect amounts subsequently identified as owed from false or incorrectly reported amounts. Consistent with this process, IRS' system is designed to identify only total receipts by type of return and not the entity that is to receive the funds collected, such as the General Fund at Treasury for employee income tax withholdings or the Social Security Trust Fund for FICA. Ideally, the system should contain summarized information on detailed taxpayer accounts, and such amounts should be readily and routinely reconciled to the detailed taxpayer records in IRS' master files. Also, IRS has not yet established an adequate procedure to reconcile the revenue data that the system does capture with data recorded and reported by Treasury. Further, documentation describing what IRS' financial management system is programmed to do is neither comprehensive nor up to date, which means that IRS does not yet have a complete picture of the financial system's operations, a prerequisite to fixing the problems. Beginning with our audit of IRS' fiscal year 1992 financial statements, we have made recommendations to correct weaknesses involving IRS' revenue accounting system and processes.
They include

- addressing limitations in the information submitted to IRS with tax payments by requiring that payments identify the type of taxes being collected,
- implementing procedures to complete reconciliations of revenue and refund amounts with amounts reported by the Treasury, and
- documenting IRS' financial management system to identify and correct the limitations and weaknesses that hamper its ability to substantiate the revenue and refund amounts reported on its financial statements.

Short-Term Fixes to Revenue Accounting Problems

The problem of identifying collections by type of tax results from inherent limitations in IRS' present financial system. To correct this problem in the short term, IRS has developed a methodology that uses software programs IRS believes will capture from its revenue financial management system the detailed revenue and refund transactions that would support reported amounts in its future financial statements. In short, this approach is directed at developing reasonable estimates of taxes collected, by type of tax, using the capabilities of IRS' present systems. To reconcile IRS' tax revenue data with Treasury's balances, IRS' plans call for the extracts from these software programs to be available in accordance with the following schedule:

- Data for the first 6 months of fiscal year 1996 will be available by October 1, 1996.
- Data for the entire fiscal year will be available by January 15, 1997.

To provide an allocation of taxes between social security, income, and excise taxes, IRS' plans call for the extracts from these software programs to be available in the following timeframes:

- Allocations for the first three quarters of fiscal year 1996 are due by November 30, 1996.
- An allocation for the final quarter of fiscal year 1996 is due by January 30, 1997.
Also, regarding the issue of reconciling accounting records with individual taxpayer accounts, IRS is trying to better understand the differences between its systems and Treasury's records. To gain this understanding, IRS plans to complete documentation of its revenue financial management system in the near future. This is critical to (1) aid in identifying better interim solutions for reporting revenues and refunds and (2) provide better insights on the longer term system fixes needed to enable IRS to readily and reliably provide the underlying support for its reported revenue and refund amounts.

Fixing the Revenue Accounting Problem Long Term

IRS has not yet put in place the necessary procedures to routinely reconcile activity in its summary accounting records with that maintained in its detailed master file records of taxpayer accounts. This problem is further exacerbated by IRS' financial management system, which was not designed to support financial statement presentation and thus significantly hinders IRS' ability to identify the ultimate recipient of collected taxes. Longer term system fixes are necessary to achieve more reliable reporting of these amounts. In this regard, as part of Tax Systems Modernization, IRS has designed the Electronic Federal Tax Payment System (EFTPS) to electronically receive deposits from businesses. EFTPS is planned to be operational by the end of 1996. If implemented as designed, EFTPS will have the capability to collect actual receipt information for excise and social security taxes. However, not all employers will be required to use EFTPS to make their federal tax deposit payments. According to IRS officials, approximately 20 percent of the employers that make federal tax deposit payments will have the option of remaining with the current system, which provides limited information.
Therefore, even if employers that use EFTPS are required to provide additional information on social security and excise taxes, to the extent that some businesses still make deposits using the current system, IRS will not have the complete information it needs to determine collections from excise and social security taxes. In addition, IRS will have to make changes to meet criteria for determining revenue that are contained in federal accounting standards, which will be effective for fiscal year 1998. This will require IRS to account for the source and disposition of all taxes in a manner that enables accurate reporting of cash collections and accounts receivable and appropriate transfers of revenue to the various trust funds and the general fund. To achieve this, IRS' accounting system will need to capture the flow of all revenue-related transactions from assessment to ultimate collection and disposition. Also, IRS' revenue accounting system does not meet the government's standard general ledger or other financial management systems requirements. According to IRS, these requirements are not being met because the revenue accounting system was designed more than 10 years ago to post transactions to taxpayers' accounts. IRS is in the initial stages of developing a new revenue financial accounting system that is expected to meet the government's standard general ledger and other financial management systems requirements. However, the new system is not expected to be completed until after 1998.

TSM Problems Impact IRS' Financial Information

IRS' capability to develop and make automated systems changes is an area of continuing concern, as we have discussed in our reports and testimonies on IRS' Tax Systems Modernization (TSM). (See attachment I.)
In March 1996, we testified before the Subcommittee on IRS' significant challenges in financial management and systems modernization, which are central to IRS' guardianship of federal revenues and its ability to function efficiently in an increasingly technological environment. In summary, IRS has initiated actions that begin to implement the dozens of recommendations we have previously made to correct management and technical problems in developing TSM. Many of these actions are still incomplete and do not yet respond fully to any of our recommendations. As a result, until IRS makes more progress in correcting its management and technical weaknesses, its ability to develop systems and make changes to correct financial management problems will be hampered.

IRS Touches Financial Reporting Across Government

The CFO Act, as expanded by the Government Management Reform Act of 1994, requires the 24 CFO Act agencies to prepare, and subject to audit, financial statements covering all accounts and associated activities of each office, bureau, and activity of the agency. This requirement begins with agencies' financial statements for fiscal year 1996. Audit reports are to be prepared by March 1, 1997, and each year thereafter. In addition to agencywide financial statements, the expanded CFO Act requires the Secretary of the Treasury to annually prepare consolidated financial statements depicting the Executive Branch's financial status. This requirement begins with financial statements for fiscal year 1997; GAO is to audit them by March 31 of each year, beginning in 1998. IRS' financial information will provide significant input to the preparation and audit of both Treasury's agencywide and the governmentwide financial statements.
For example, with $1.4 trillion in tax revenue, IRS accounts for the vast majority of the government's total reported fiscal year 1995 revenue, and IRS' $113 billion in reported accounts receivable is over two-thirds, or about 68 percent, of the government's total fiscal year 1995 accounts receivable, which Treasury reported to be more than $166 billion. Also, IRS financial reporting affects the financial reports of the government agencies for which IRS collects tax receipts, such as the Social Security Administration for the Social Security Trust Fund and the Department of Labor for the Unemployment Trust Fund. Beginning in fiscal year 1998, to meet federal accounting standards, IRS will have to disclose the reasons for any continuing noncompliance with the laws relating to the disposition of tax revenue to trust funds and the amount of overfunding or underfunding, if reasonably estimable. As a central government financial management leader, the Department of the Treasury must ensure that the problems IRS faces in preparing financial statements on its operations are promptly resolved so that these problems do not delay the preparation, or affect the credibility, of Treasury's agencywide financial statements. Also, unless IRS' financial management problems are dealt with, they will affect the ability to render an opinion on the governmentwide financial statements.

IRS Follow-Through Will Be Critical

In summary, it will be essential for IRS to follow through and ensure that its planned short-term, interim actions are completed on schedule to improve the reliability of IRS' financial statements, and we will continue to work with IRS in doing so. We also will continue to monitor IRS' efforts to complete our recommendations and implement longer term systems improvements. The Subcommittee's continued oversight of IRS' progress in implementing the CFO Act and preparing auditable financial statements will provide important impetus as well. Mr.
Chairman, this concludes my statement. I would be happy to now respond to any questions.

Recent GAO Reports and Testimonies Related to IRS' Financial Management and TSM Problems

Financial Audit Reports

Financial Audit: Examination of IRS' Fiscal Year 1992 Financial Statements (GAO/AIMD-93-2, June 30, 1993)
Financial Audit: Examination of IRS' Fiscal Year 1993 Financial Statements (GAO/AIMD-94-120, June 15, 1994)
Financial Audit: Examination of IRS' Fiscal Year 1994 Financial Statements (GAO/AIMD-95-141, August 4, 1995)
Financial Audit: Examination of IRS' Fiscal Year 1995 Financial Statements (GAO/AIMD-96-101, July 11, 1996)

Reports and Testimonies Related to IRS Financial Audits and TSM

IRS Operations: Significant Challenges in Financial Management and Systems Modernization (GAO/T-AIMD-96-56, March 6, 1996)
Tax Systems Modernization: Management and Technical Weaknesses Must Be Overcome To Achieve Success (GAO/T-AIMD-96-75, March 26, 1996)
Tax Systems Modernization: Progress in Achieving IRS' Business Vision (GAO/T-GGD-96-123, May 9, 1996)
Letter to the Chairman, Committee on Governmental Affairs, U.S. Senate, on security weaknesses at IRS' Cyberfile Data Center (AIMD-96-85R, May 9, 1996)
Financial Audit: Actions Needed to Improve IRS Financial Management (GAO/T-AIMD-96-96, June 6, 1996)
Tax Systems Modernization: Actions Underway But IRS Has Not Yet Corrected Management and Technical Weaknesses (GAO/AIMD-96-106, June 7, 1996)
Tax Systems Modernization: Cyberfile Project Was Poorly Planned and Managed (GAO/AIMD-96-140, August 26, 1996)
Internal Revenue Service: Business Operations Need Continued Improvement (GAO/AIMD/GGD-96-152, September 9, 1996)
Internal Revenue Service: Critical Need to Continue Improving Core Business Practices (GAO/T-AIMD/GGD-96-188, September 10, 1996)
GAO discussed the Internal Revenue Service's (IRS) efforts to prepare reliable financial statements and improve its financial management, focusing on: (1) IRS implementation of GAO recommendations to correct financial management weaknesses; (2) IRS progress in addressing major problems that have prevented GAO from expressing an opinion on its financial statements; (3) IRS problems in developing Tax Systems Modernization (TSM); and (4) how IRS financial management weaknesses affect Department of the Treasury and governmentwide financial statements. GAO noted that: (1) IRS is implementing some short-term interim strategies to resolve financial management problems in time for its fiscal year 1996 financial statement audit; (2) IRS will need to make more sweeping changes and devise long-term solutions to fully address problems in its accounting for administrative operations, reporting accounts receivable, and accounting for revenue; (3) many IRS actions for correcting management and technical problems in developing TSM are incomplete and do not fully respond to the recommendations; (4) IRS financial information provides significant input to and greatly affects the preparation and audit of Treasury and governmentwide financial statements; and (5) it will be essential for IRS to follow through on its short-term and long-term efforts to improve its financial statements and financial management systems.
The Public Service: Veterans' Preference in Hiring and Reductions-in-Force

Mr. Chairman and Members of the Committee: I am pleased to be here today to assist the Committee in its consideration of the proposed Veterans Employment Opportunities Act of 1997, S. 1021, which is to amend title 5 of the United States Code to provide that consideration may not be denied to veterans who are eligible for preference when applying for certain positions in federal service and for other purposes. By law, federal agencies are to provide preferential hiring consideration to veterans and others as a measure of national gratitude and compensation for their military service. As agreed, my comments today are primarily based on our body of work since 1990 on veterans' hiring preference and work we did on reductions-in-force (RIF) at selected military installations and at the U.S. Geological Survey (USGS) that occurred in 1991 and 1995, respectively. As requested, we also are providing certain statistics on the percentage of preference-eligible veterans among new federal career employees and in the existing federal workforce from 1990 through 1997. The information we present today, although relevant to consideration of S. 1021, does not reflect a current, complete analysis of whether veterans' preference requirements are achieving their intended purposes. We are not taking a position on the proposed legislation. In our past work, we found that hiring officials more often returned unused those hiring certificates headed by the names of preference-eligible veterans than those without the names of preference-eligible veterans at the top. In three RIFs conducted at military installations in fiscal year 1991, our review showed that those without veterans' preference were much more likely to have lost their jobs than were preference-eligible veterans. Similarly, during an October 1995 RIF at the USGS, those employees without veterans' preference were much more likely to have lost their jobs than employees with such preference.
Veterans' Representation in the Workforce and Among Total New Career Appointments

From 1990 through 1997, preference-eligible veterans represented a significantly higher percentage of the federal workforce than did veterans overall in the total civilian workforce. As figure 1 illustrates, preference-eligible veterans represent a gradually declining portion of the federal workforce, and veterans also represent a declining portion of the total civilian workforce. However, in each year from 1990 through 1997, the percentage of preference-eligible veterans in the federal workforce was about twice as high as that of veterans in the civilian workforce.

Note 1: Civilian workforce data are as of December 30 of each year; federal workforce data are as of September 30 of each year. Note 2: Civilian workforce data are for men and women age 20 years and over.

Figure 2 shows that preference-eligible veterans increased as a portion of those attaining new career appointments in the federal government between 1990 and 1997. However, their portion of such appointments held virtually steady for fiscal years 1993 through 1997. Veterans' preference is applied when agencies use competitive hiring authority to fill positions, as is generally the case for employees who attain career appointments in the federal government. The increase in the portion of new career appointments over this period who were preference eligible was most significant between fiscal years 1992 and 1993.

Adherence to Veterans' Preference Rules for Hiring

Veterans, like other applicants, must qualify for federal positions on the basis of factors such as education, work experience, and/or the passing of a written examination. The preference points they receive result in veterans being placed higher on federal hiring lists, giving veterans an advantage over other job applicants. In the last several years, we have not done work on agencies' adherence to veterans' preference during the hiring process.
However, in nearly all cases we reviewed in the early 1990s, agencies followed basic veterans' preference requirements leading up to the actual selection decision. At that point, hiring officials returned unused hiring certificates that were headed by the names of preference-eligible veterans more often than they did those headed by the names of individuals lacking such preference.

Preference Procedures During Hiring

When agencies want to fill positions, several hiring alternatives are available. Agencies can

- promote, transfer, or reassign a current federal employee;
- reinstate a former federal employee who has career status;
- make a new appointment from a hiring list, or hiring certificate (i.e., use competitive authority); or
- use noncompetitive appointment authorities.

Of these hiring alternatives, veterans' preference applies only to competitive hiring. When agencies hire individuals from outside the federal government through competitive means, they can use the following methods: certificates from OPM or either of two authorities delegated from OPM (i.e., delegated examining authority or direct hire authority). Under the first method, OPM receives and examines applications for federal employment, which can include reviewing a written test and/or reviewing qualifications, and determines whether an applicant is qualified for a specific occupation or related occupations. If an applicant is rated as qualified, then the applicant is assigned a score that is used to place the individual in rank order on a federal employment register. As part of this ordering process, OPM is responsible for ensuring that veterans receive all preference points that are due them. When OPM receives and examines applications from veterans, it is to (1) verify veterans' preference points that are claimed by applicants, (2) add the preference points to the veterans' scores, and (3) rank all applicants by score.
Qualified veterans with service-related disabilities are to be placed at the top of hiring lists, or certificates. Agency hiring officials can then request a certificate from OPM containing the top-rated candidates from the register. Agencies are generally required to select from among the top three available candidates on a certificate. However, they cannot select a nonveteran if a higher placed preference-eligible veteran is available on the list unless the selecting official obtains approval for passing over the higher ranked veteran. Justifications for passing over a higher ranked preference-eligible veteran must be based on qualifications or suitability. Only OPM is authorized to grant such approvals. Under the second method, rather than use certificates developed by OPM, agencies may receive delegated examining authority from OPM to prepare their own certificates. Under such authority, agencies are to follow the same scoring, ranking, and selection rules that OPM follows. In addition, when shortages of qualified candidates exist, OPM provides agencies with direct hire authority, which permits agencies to directly receive applications, examine applicants, and make selections. Effective January 9, 1992, OPM directed agencies to apply regular scoring and ranking procedures, including application of veterans’ preference, whenever more than three candidates apply for a job or whenever both veterans and nonveterans are available. Until that date, direct hire authority, which accounted for almost one-third of all competitive hiring in fiscal year 1990, did not always provide for qualified veterans to receive preference. Veterans Received Appropriate Preference Points and Placement on Fiscal Year 1990 and 1991 Hiring Lists We Reviewed In the cases we reviewed, veterans received the preference points due them, and the ranking of candidates was also correct. Veterans were properly placed on all but 1 of the 1,136 certificates that we reviewed from OPM and executive agencies. 
Certificates We Reviewed From 1990 and 1991 That Were Headed by Veterans Were Returned Unused More Often Than Those Headed by Nonveterans Although veterans may receive additional points because of their military service and be highly placed on certificates, they are not assured of selection for jobs. Under existing civil service laws and regulations, hiring officials have the option of using a variety of methods to identify and recruit potential candidates for a position. These officials may also leave a position vacant rather than fill it with a candidate who is qualified for the position but with whom they are not satisfied. Therefore, certificates of eligible job candidates may be requested but not used if federal managers are dissatisfied with the choices presented to them. In our 1992 report on federal hiring, we reported that 57 percent of all certificates were returned unused. We found that hiring officials were more likely to return certificates unused to OPM or to the personnel offices of their agencies when the name of a preference-eligible veteran was at the top of the hiring list. Of the 1,136 certificates of eligible job candidates we reviewed, about 71 percent of those headed by a veteran had been returned unused, compared with about 51 percent of those headed by a nonveteran. One explanation for this difference is that managers in executive agencies may have less flexibility in selecting from a certificate if the name of a veteran is at the top. For example, if a certificate lists nonveterans in the top three positions, a manager can select any of the three. However, if a certificate is headed by a preference-eligible veteran and nonveterans are in the next two positions, the manager generally has no choice but to select the veteran or return the certificate unused, unless the manager receives approval from OPM to pass over the veteran. 
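The selection constraint described above can be sketched in code. The following is a hypothetical illustration, not material from the testimony: the function, its name, and the data layout are our assumptions, and it omits the OPM pass-over approval path, which would add exceptions.

```python
# Hypothetical sketch of the "rule of three" with veterans' preference:
# a manager may select any of the top three candidates on a certificate,
# except that a nonveteran may not be selected while a preference-eligible
# veteran is ranked above him or her (absent OPM pass-over approval).

def allowed_choices(top_three):
    """top_three: list of (name, is_veteran) tuples in rank order."""
    allowed = []
    for i, (name, is_veteran) in enumerate(top_three):
        # Selectable only if a veteran, or if no veteran is ranked higher.
        veteran_ranked_above = any(vet for _, vet in top_three[:i])
        if is_veteran or not veteran_ranked_above:
            allowed.append(name)
    return allowed

# Certificate headed by a veteran, nonveterans in the next two slots:
# the manager's only option is the veteran (or returning the certificate).
print(allowed_choices([("A", True), ("B", False), ("C", False)]))   # ['A']

# No veterans among the top three: any of the three may be selected.
print(allowed_choices([("A", False), ("B", False), ("C", False)]))  # ['A', 'B', 'C']
```

The sketch mirrors the point made above: a veteran-headed certificate leaves the manager less flexibility than one headed by nonveterans.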
Although agencies were required to give a reason when returning an unused certificate, OPM did not enforce this requirement. When agencies did provide reasons for not selecting a candidate, OPM did not collect data on or analyze those reasons to determine their legitimacy, the possibility of antiveteran bias, or whether certificates met managers’ needs. In our 1992 report on federal hiring, we recommended that the Director of OPM establish a tracking system to monitor the use of federal hiring certificates. We also recommended that the Director use the data gathered by this system to analyze veteran hiring patterns. By February 1994, according to OPM, OPM had developed a tracking system for evaluating unused certificates and automated portions of it. We have not reviewed this tracking system. Federal Employees With Veterans’ Preference During RIFs When agencies reduce their workforces through RIFs, veterans also have certain retention rights that are derived from the 1944 Veterans’ Preference Act. We have reviewed RIFs at three military installations and at one division within the USGS in sufficient detail to have statistics on how preference-eligible veterans fared in the RIF process. In each of these RIFs, preference-eligible veterans were more likely to have retained their jobs than were employees lacking veterans’ preference. Veterans’ Preference During RIFs Under OPM regulations, RIFs are accomplished in two phases. First, management determines the number and types of positions that are to be abolished and the “competitive areas” affected by the decision. Second, management identifies the employees within a competitive area and their relative status in the competition for retention. Employees’ retention status and assignment rights to other positions are essentially determined by their tenure, veterans’ preference, and length of service, with additional years of service credit based on their performance ratings. 
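The retention competition described above amounts to an ordering by tenure, then veterans' preference, then performance-adjusted service. The small sort below is our own illustration: the field names, tenure codes, and sample data are assumptions, and real RIF rules involve subgroups and assignment rights that the sketch omits.

```python
# Hypothetical sketch of RIF retention ordering: employees compete first
# by tenure group, then by veterans' preference, then by length of
# service (already adjusted upward with performance credit).
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    tenure_group: int          # lower number = stronger tenure (assumed coding)
    veterans_preference: bool
    years_of_service: float    # performance-adjusted seniority

def retention_order(employees):
    """Return employees with the highest retention standing first."""
    return sorted(
        employees,
        key=lambda e: (e.tenure_group, not e.veterans_preference, -e.years_of_service),
    )

staff = [
    Employee("Smith", 1, False, 20.0),
    Employee("Jones", 1, True, 8.0),
    Employee("Lee", 2, True, 15.0),
]

# Within tenure group 1, the preference-eligible veteran outranks the
# nonveteran despite far fewer years of service.
print([e.name for e in retention_order(staff)])  # ['Jones', 'Smith', 'Lee']
```

This shows why, as the testimony reports, separations start at the bottom of the retention order and preference-eligible veterans tend to be reached last.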
When the identified positions are abolished, incumbents of those positions may have assignment rights to other positions that are not being abolished, depending on their retention status and qualifications. Once the initial decisions are made that define the numbers, types, and locations of positions to be abolished, determining the retention status of employees and their exercise of assignment rights is a relatively mechanical process with little flexibility. During a RIF, employees are separated from federal employment starting with those having the lowest retention status and continuing with those having increasingly higher retention status until RIF separation targets are met. How Veterans Fared During Selected RIFs In 1994, we testified on the impact of fiscal year 1991 RIFs at three Department of Defense installations on certain groups covered by equal employment opportunity laws. We testified that the RIFs resulted in separations of minorities in numbers disproportionate to their numbers in the workforce at the three locations reviewed. Women were separated in disproportionate numbers at two of the locations. In some cases, disproportionate numbers of separations occurred largely because minorities and women did not have the retention factors—tenure, veterans’ preference, or performance-adjusted seniority—of nonminorities or men. Our analysis of the retention factors for civilian workers employed by the military services at the end of fiscal year 1991 showed that minorities and women ranked lower than nonminorities and men in all retention factors, including veterans’ preference. For purposes of this hearing, we analyzed the data from our 1994 testimony to determine how well preference-eligible veterans fared during the RIFs at the three installations. 
Overall, we found that those without veterans’ preference were from about two to seven times more likely to have lost their jobs during the RIFs than were those employees with veterans’ preference. At the Alameda, California, Naval Aviation Depot, those without veterans’ preference were seven times more likely to have lost their jobs in the RIF than were those who had the preference. At Kelly Air Force Base in San Antonio, Texas, those without veterans’ preference were about twice as likely to have lost their jobs. And at the Watervliet, New York, Army Arsenal, those without veterans’ preference were six times more likely to have lost their jobs. In 1996, we reported on a RIF conducted at the USGS during October 1995. This RIF took place within the U.S. Geological Survey’s Geologic Division and was somewhat unusual in that the overwhelming majority of Geologic Division employees were each placed in competitive levels that included only one employee. Employees within a single-person competitive level have less opportunity to move into another position during a RIF. Thus, such employees would be more likely to be separated from an agency. We found that employees with veterans’ preference were just as likely to be affected in some manner during the RIF—reassigned, moved to a lower graded position, or laid off—as were employees without veterans’ preference. However, those without veterans’ preference were four times as likely to lose their jobs as were employees who had veterans’ preference. Although you were interested in our updating and adding to these RIF-related retention statistics, we were not able to obtain comparable data on the retention rates for preference-eligible veterans for a wider number of more recent RIFs in the limited time we had to prepare for this hearing. Data are available from OPM’s Central Personnel Data File on the total number of veterans who have lost their jobs during RIFs over the past several years. 
However, these data alone do not indicate whether preference-eligible veterans were separated as a result of a RIF at rates disproportionate to others. Determining whether veterans have been disproportionately affected during RIFs would require data on the full population that was at risk of losing their jobs during a RIF. This is the population of the competitive area that would have been established by agencies for each specific RIF. Such data are not available from any central database that we were able to identify. In summary, Mr. Chairman, preference-eligible veterans remain a larger portion of the federal workforce than veterans overall in the general civilian workforce. In those job applications we reviewed for fiscal years 1990 and 1991, agencies properly followed veterans’ preference procedures in the hiring process in virtually all cases. However, selecting officials were more likely to return certificates unused if they were headed by the names of veterans than they were if veterans did not head those certificates. Finally, in four specific RIFs we reviewed, employees who did not have veterans’ preference were much more likely to lose their jobs during a RIF than were their colleagues who had veterans’ preference. This concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have. 
GAO discussed the work it has done on veterans' hiring preference and on reductions-in-force (RIF) at selected military installations and at the United States Geological Survey (USGS). GAO noted that: (1) preference-eligible veterans represent a larger portion of the federal workforce than veterans do of the civilian workforce; (2) from 1993 through 1997, veterans with preference represented about 21 percent of all new career appointments to federal service; (3) the assignment of veterans' preference and the placement of veterans on federal hiring certificates were properly done in nearly all cases GAO reviewed from July 1990 through June 1991; (4) however, hiring officials more frequently returned unused those certificates headed by the names of preference-eligible veterans than those without the names of preference-eligible veterans at the top; (5) in the three RIFs conducted at military installations in fiscal year 1991, GAO's review showed that those without veterans' preference were much more likely to have lost their jobs than were preference-eligible veterans; (6) similarly, during an October 1995 RIF at the USGS, those employees without veterans' preference were much more likely to have lost their jobs than employees with such preference; (7) GAO found that during USGS's 1995 RIF, as required by law and Office of Personnel Management regulations, employees with veterans' preference were consistently given higher retention standing than competing employees without such preference; and (8) GAO also found that preference-eligible veterans were just as likely to be affected in some manner during the RIF as were employees without veterans' preference.
Background In July 1998, then Vice President Gore and the former Prime Minister of Russia issued a joint statement noting that nuclear disarmament is associated with several socioeconomic factors, including the problem of finding worthwhile civilian-sector employment for Russian personnel formerly employed in the nuclear weapons complex. In September 1998, both countries signed an agreement—the Nuclear Cities Initiative (NCI)—to create jobs for people in the nuclear weapons complex. Russian officials have identified the need to create 30,000 to 50,000 jobs in Russia’s nuclear cities over the next several years. Under the terms of the agreement, the United States will seek to assist in creating new jobs by sharing its experience in downsizing the U.S. nuclear weapons production complex; facilitating the selection of promising commercial projects that will lead to employment opportunities for workers; developing entrepreneurial skills for displaced workers, including training in how to write business plans; facilitating the search for potential investors, market analysis, and marketing for products and services; and facilitating access to existing investment mechanisms, including investment funds. NCI is limited to working in the municipal areas of each city. Beyond these areas are various secret nuclear institutes or technical areas. DOE’s strategy is to encourage investment in commercial enterprises in the municipal areas of the cities, thus shrinking, over time, the size of the restricted areas in accordance with the plans of the Russian government. DOE officials believe that if commercial efforts are successful, not only will those employed in weapons manufacturing remain in the city but so will their relatives and friends, and there will be less reason for weapons scientists, technicians, and engineers to leave the area. Figure 1 shows the location of Russia’s 10 nuclear cities, and appendix I provides additional information about each city. 
The day-to-day management of NCI resides within DOE’s Office of Defense Nuclear Nonproliferation, National Nuclear Security Administration. DOE and its national laboratories have long-standing relationships with Russia’s Ministry of Atomic Energy (MINATOM) and several closed cities as well as experience in the downsizing of the U.S. weapons complex. The NCI program is managed by an office director with a headquarters staff of seven employees who provide technical, budget, and procurement support. DOE headquarters is responsible for, among other things, setting overall program policy, providing oversight and guidance for the national laboratories, and allocating program funds. DOE has tasked the national laboratories to play a major role in the program. DOE, under the same general authority under which it operates the NCI program, also operates the Initiatives for Proliferation Prevention (IPP) program. IPP seeks to employ weapons scientists in several countries of the former Soviet Union, including Russia and some of its nuclear cities. According to DOE, IPP is designed to commercialize technologies that utilize the expertise of the scientists who work at the various nuclear weapons institutes. Although the IPP program focuses on employing nuclear weapons scientists, it also has a component that seeks to employ scientists in the former Soviet Union’s chemical and biological weapons institutes. In our 1999 report, we recommended that the Secretary of Energy take steps to maximize the impact of IPP’s funding and improve oversight of the program. Specifically, we recommended, among other things, that the Secretary (1) reexamine the role and costs of the national laboratories’ involvement with a view toward maximizing the amount of program funds going to the former Soviet Union, and (2) eliminate those IPP projects that did not have commercial potential. DOE subsequently implemented our recommendations. The U.S. 
government has supported other programs that have directed money to scientists working in the closed cities. For example, since 1994, the U.S. Departments of State and Defense have spent over $40 million on scientific research projects in which one or more of the weapons institutes in Sarov, Snezhinsk, or Zheleznogorsk have participated. These projects are administered under the auspices of the State Department’s International Science and Technology Center program. The Center was established by international agreement in November 1992 as a nonproliferation program to provide peaceful research opportunities for weapons scientists and engineers in countries of the former Soviet Union. The scientists working with the Center conduct research and development in a variety of scientific fields, such as environmental remediation and monitoring, nuclear reactor safety, vaccines and other medical treatment, and energy production. The U.S. government has also undertaken efforts in the nuclear cities through the U.S. Civilian Research and Development Foundation. Established by the U.S. government in 1995, the Foundation is a nonprofit charitable organization designed to promote scientific and technical collaboration between the United States and the countries of the former Soviet Union. From October 1996 through December 2000, the Foundation awarded 19 grants totaling about $275,000 to support projects in Sarov and Snezhinsk. The Foundation receives funding from the Department of State, the National Science Foundation, the National Institutes of Health, the Department of Defense, and several private organizations. NCI Program Expenditures From fiscal year 1999 through December 2000, NCI’s expenditures totaled about $15.9 million. 
Of that amount, about $11.2 million (or 70 percent) had been spent in the United States by the national laboratories and DOE’s headquarters, and about $4.7 million (or 30 percent) had been spent for projects and activities in Russia as shown in figure 2. The U.S. national laboratories’ costs to implement the program for such items as overhead, labor, equipment, and travel represented the bulk of the funds spent in the United States. DOE officials told us that these expenditures were significant but were part of the program’s start-up costs. These officials told us that laboratory costs will be reduced and that the laboratories’ role will diminish as commercial investors develop business contacts in the nuclear cities as a result of the program. The expenditures for Russia included contracts with Russian organizations to buy computers and other equipment, a small business bank loan program, and various community development projects. MINATOM officials told us that they were dissatisfied with the amount of program funds that had been spent in their country. In response to direction provided in a conference report on its fiscal year 2001 appropriations, DOE stated in its program guidance that its goal is to spend at least 51 percent of fiscal year 2001 program funds in Russia. U.S. National Laboratories’ Expenditures Comprise Majority of U.S. Program Costs to Date Of the $11.2 million that was spent in the United States for the program, the national laboratories’ expenditures made up $10.7 million, or about 96 percent of that amount. DOE’s headquarters’ expenditures, totaling about $500,000, comprised the remainder of the program funds spent in the United States. DOE’s headquarters’ expenditures covered, among other things, obtaining studies related to Russia’s defense conversion activities and establishing a Website for the program. 
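As a quick arithmetic check of the reported split, the dollar figures above can be recomputed directly. The sketch and its variable names are ours; it uses only amounts stated in this report.

```python
# Recompute the reported NCI spending split (FY 1999 through Dec. 2000).
total_spent = 15.9                       # $ millions spent overall
us_spent = 11.2                          # national laboratories and DOE headquarters
russia_spent = total_spent - us_spent    # projects and activities in Russia

us_share = round(us_spent / total_spent * 100)          # -> 70 (percent)
russia_share = round(russia_spent / total_spent * 100)  # -> 30 (percent)

print(us_share, russia_share)  # prints: 70 30
```

The rounded shares match the 70 percent/30 percent breakout reported in figure 2.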
The laboratories’ expenditures in the United States were incurred primarily to develop and monitor various NCI projects and activities. According to DOE officials, the laboratories’ expenditures represent program start-up costs. They noted that the program has taken longer to start up because of the economic problems facing Russia and the barriers involved in trying to start new businesses and related activities in the nuclear cities. Figure 3 shows a breakout of the national laboratories’ costs in the United States as of December 2000, and appendix II provides more details about the NCI program’s cumulative expenditures. Note 1: Does not include DOE’s headquarters’ expenditures. Note 2: Travel includes travel of U.S. personnel within the United States and Russia. As indicated in figure 3, 75 percent of the funds spent by the laboratories were for overhead and labor costs. Overhead costs comprised the greatest percentage of costs (about 41 percent) and were charged for various activities, such as contract/procurement support and other activities related to the program’s implementation. For example, some laboratories charge an overhead fee for administering travel services for both U.S. and Russian officials. The next highest cost was for labor—34 percent. The laboratories have assigned a principal investigator to manage each NCI project. The principal investigators from the laboratories told us that they spent from 5 to 75 percent of their time on monitoring NCI projects. Additionally, they told us they spent most of this time during the early stages of the project to establish contacts with their Russian counterparts and to help develop contracts with Russian organizations in the nuclear cities. As the figure shows, the remaining 25 percent of the U.S. 
expenditures included travel (airfare and per diem) of laboratory personnel within the United States and to Russia; costs to purchase materials and services for the program, such as U.S.-based consultants; and other miscellaneous costs, such as training, videoconferences, and translation services. DOE officials told us that they were concerned about the amount of funds spent by the laboratories to administer the program—particularly, the overhead costs. However, these officials believe that the laboratories play an important role in the start-up of the NCI program. Some DOE officials, including the program director, stated that laboratory costs would be reduced over time as businesses invest their own capital in the nuclear cities. However, the program director was not sure when the laboratories’ role in the program would be reduced. DOE has taken some steps to reduce laboratory costs, as shown in the following examples: One laboratory official from the Savannah River Site told us that, in general, overhead for contracts at his site is about 37 percent of the total cost of NCI-related contracts. He subsequently negotiated with DOE an 11-percent overhead rate in fiscal year 2000 for Russian-related programs, including NCI-related contracts. He said this was done to increase the amount of funds going to Russia. Some of the NCI projects are being managed directly by DOE’s headquarters in an effort to limit national laboratories’ overhead expenditures. DOE recently took over from a national laboratory responsibility for overseeing a U.S. firm that monitors the day-to-day operations of International Development Centers. NCI program funds were used to pay the laboratory for this supervisory function. According to DOE and laboratory officials, DOE’s headquarters assumed this responsibility to reduce the laboratory’s costs. 
Thirty Percent of NCI Program Funds Spent for Activities in Russia As of December 2000, NCI program expenditures for projects and activities in Russia totaled $4.7 million, or 30 percent of the $15.9 million spent by the NCI program. As figure 4 shows, the largest category of expenditures (about 58 percent) was for contracts. The contracts were used to establish, among other things, the Sarov Open Computing Center. The Center was established in 1999 with NCI funds to help Russian scientists develop commercial skills. According to the Center’s officials, a portion of these funds was used to supplement the salaries of the Russian scientists. In addition, some of these funds were used to (1) finance the European Bank for Reconstruction and Development’s (EBRD) activities to establish a small business bank loan program in the cities and (2) support various community development activities. The materials purchased by DOE and the national laboratories for use in Russia comprised 36 percent of the expenditures and included such things as medical equipment, computers, and payments to Russian consultants/trainers. The remaining expenditures (about 6 percent of the total) were for Russian personnel traveling to the United States. MINATOM officials told us that they were dissatisfied with the amount of NCI funds that had been spent in Russia. The First Deputy Minister of MINATOM told us that Russia should have received about 65 percent of the funds programmed for NCI, as it was his understanding that DOE had planned to spend that percentage of program funds in Russia. He questioned why Russia had not received the amount he had expected and wanted to know what happened to these funds. The First Deputy Minister also noted that Russia needs help in creating about 1,500 jobs per year in the nuclear cities and that DOE’s funding for the program has been insufficient to meet this goal. 
He concluded that when MINATOM officials review NCI’s progress to date, the picture is not optimistic. In his opinion, the lack of progress in the program increases the negative views of the program held by various Russian government officials who allege that the program is a way for the United States to gain access to weapons data in Russia’s nuclear cities. The Congress and DOE have set goals for increasing the amount of NCI program funds spent in Russia. An October 2000 conference report on DOE’s appropriations for fiscal year 2001 stated that the conferees were concerned about the amount of funding for Russian assistance programs that remains in the United States for DOE contractors and laboratories rather than going to the facilities in Russia. The conferees directed that not more than 49 percent of NCI program funding be spent in the United States in fiscal year 2001. The conferees expect DOE to continue to increase the level of funding (beyond 51 percent) for Russia in each subsequent year but did not establish a ceiling for the amount of funds that should ultimately be spent in Russia. DOE’s NCI Program Guidance, issued in January 2001, noted that in order to meet the spending target established by the conference report, U.S. project managers will spend or commit at least 65 percent of the funds for each project in Russia. DOE officials said they expect overall program expenditures to reach the congressional target of 51 percent if 65 percent of each NCI project’s funds are spent in Russia. DOE’s Lack of Standardized Reporting Procedures Affected Its Ability to Monitor NCI’s Expenditures DOE did not have systematic financial management procedures in place for reporting and tracking NCI’s program expenditures. 
DOE’s initial financial guidance for the program, which was issued in May 1999, noted only that an accounting procedure overseen by an experienced budget and fiscal official would include regular monthly reports by the laboratories on individual NCI projects. The guidance was silent on the issue of specific reporting requirements, including how expenditures for U.S. and Russian activities should be identified. Although the national laboratories were generally providing cost information on a monthly basis, a DOE budget official told us that this information lacked consistency and uniformity. As a result, the budget official was not confident that the cost information accurately depicted the breakout of expenditures between U.S. and Russian activities. For example, in May 2000, DOE developed a breakout of the costs and concluded that 65 percent of the funds had been spent in the United States and 35 percent had been spent in Russia. However, the analysis of Russian expenditures included the funds that were obligated as well as actual expenditures. According to one DOE official, this analysis overstated expenditures in Russia. Some national laboratory officials told us that the lack of standardized reporting guidance made it difficult to determine how to account for program expenditures in the United States and Russia or what to include in these cost categories. During the course of our review—and, in part, as a result of our work—DOE established a standardized monthly and quarterly financial report for the NCI program. In January 2001, DOE’s NCI budget official distributed guidance directing all of the national laboratories to report NCI project costs by using a standard format for identifying expenditures. Furthermore, in its January 2001 program guidance, DOE defined how funds were to be categorized. 
Expenditures in Russia include the costs of Russian officials traveling to the United States, contract payments to Russian organizations, payments to Russian consultants and trainers in Russia, and equipment and materials bought in the United States for Russia or equipment and material bought in Russia. Expenditures in the United States include U.S. labor, U.S. travel to Russia, all laboratory overhead, payments to U.S. consultants and trainers in Russia, payments to all interpreters and/or translator services, and equipment and materials bought in the United States for use in the United States. DOE Has Limited Oversight Over Laboratories’ Expenditures According to DOE program officials, the Department has exercised limited oversight over the national laboratories' use of NCI program funds. Initial DOE program guidance for the NCI program, dated May 1999, did not specifically address financial management procedures for funds disbursed by DOE to the national laboratories and instead relied on existing reporting mechanisms between DOE and the laboratories. According to DOE officials, once funds are transferred to a laboratory, they can be redirected by the laboratory from one project to another. One national laboratory redirected approximately $130,000 from two projects dealing with fiber optics and telecommunications to another project. The NCI program director was not made aware of this transfer until the laboratory requested additional funding from DOE to replenish these projects’ funding. On the basis of these experiences, in January 2001, DOE established new guidance stating that the NCI program director must approve the reallocation of funds to other projects. 
DOE Has Not Developed a Cost Estimate or Time Frame for the Program’s Future Scope and Direction DOE has not developed a plan, including projected future costs, to gauge the extent to which NCI is meeting its program goals or to determine when and under what circumstances it would be appropriate to expand the program beyond the three pilot nuclear cities. In 1999, DOE officials believed the total funding level for NCI could reach $600 million over a 5-year period. However, the Director of the NCI program told us that because the program had not received expected funding levels during its first years of operation, he is uncertain about future program costs and time frames. DOE’s former Assistant Deputy Administrator for Arms Control and Nonproliferation told us that each of the pilot cities is expected to receive funding for several years and that the Department needs to develop an “end point” when assistance is completed for each city. NCI is focusing its initial efforts in the three pilot cities (Sarov, Snezhinsk, and Zheleznogorsk) plus the Avangard weapons assembly plant in the city of Sarov. DOE has worked jointly with MINATOM and the nuclear cities to develop strategic plans for each pilot city, which include lists of jointly developed project proposals. However, DOE has not developed performance targets that map out its specific contributions to this downsizing effort over time. DOE has stated that key measurements include the number of civilian jobs created, businesses established or expanded, investment in the closed cities, training for Russians, and percentage of funds spent in Russia. While these performance measures are appropriate in a general sense, DOE has not indicated what it hopes to specifically accomplish in these areas over what period of time. Without such targets, it is difficult to determine whether or not the program is on track to meet its long-term objectives. 
The deputy director of the NCI program told us that DOE is aware of the number of weapons scientists that Russia needs to find jobs for in the nuclear cities, but there is no mutually agreed-upon number of scientists that DOE plans to help find jobs for. The NCI program director said that DOE would be better able to plan and leverage its own resources if it had more information about how MINATOM is budgeting funds for its own specific defense conversion projects. DOE’s NCI Projects Have Had Limited Impact The NCI program has had limited success during its first 2 years. According to DOE, NCI’s projects are employing about 370 people, including many weapons scientists, primarily on a part-time basis through research sponsored by the U.S. national laboratories. One project has helped create commercial space in several buildings previously used for nuclear weapons assembly work in the city of Sarov. About half of the NCI projects are not designed to directly lead to employment opportunities for weapons scientists, and Russian officials have criticized DOE’s funding decisions. The Department has two programs—NCI and the Initiatives for Proliferation Prevention—operating in Russia’s nuclear cities that have a common goal. Having two such programs has caused duplication of effort, such as two sets of project review procedures and several similar types of projects. Most of the Work Created by NCI Projects for Weapons Scientists Is Part-Time Contract Research for National Laboratories According to DOE, NCI’s projects have generated employment for about 370 people, including weapons scientists, in the nuclear cities. About 40 percent of the work has been generated through the Open Computing Center in Sarov. The purpose of the computing center is to help scientists, mathematicians, and software engineers develop self-sustaining civilian activities, including commercial and contract research. 
The computing center’s director told us that the part-time employees were also working at the weapons design institute in Sarov on weapons-related activities and were receiving salaries from the institute. The employees are working on contract research for the Los Alamos National Laboratory. This work includes several areas of research, such as (1) computing and system software development, (2) computer modeling for the oil and gas industry, (3) computer modeling for the strength of materials related to molecular dynamics, and (4) biomolecular modeling. According to a Los Alamos official, while the laboratory has not benefited directly from the research, the work has helped enhance the computer-related skills of the center’s employees, making them more attractive to Western businesses. The center’s director said he hopes that the center will become self-sufficient within 7 years. DOE officials have estimated that, with successful marketing to commercial businesses, the center will be able to employ more than 500 people by 2005. As of December 31, 2000, the NCI program had spent about $1.2 million on computers, site preparation, contracts with the employees of the center, and other expenses. The center has had some success in attracting business investment. For example, an international bank has contracted with the center to develop electronic banking software on a pilot basis. The bank may contract with the center for additional work if the pilot project proves successful. The bank official responsible for this project said he is optimistic that the bank will be able to develop future work for the scientists. The program also introduced programmers at the Open Computing Center to an engineering software company in the United States that was looking for people to help develop software to analyze fluid dynamics in automobile engines and turbines. 
The software company worked with NCI and national laboratory staff on a pilot project to test the skills of programmers from the center. The NCI program allocated $40,000 to pay the salaries of four Russian scientists working on non-defense-related test problems as well as for the national laboratory’s expenses. In early 2001, the software company hosted the scientists in the United States for training. As a result of the training, a commercial contract was signed on March 30, 2001. One NCI Project Has Helped Open Commercial Space at Russian Weapons Facilities According to DOE, one of the most successful projects involves the conversion of weapons assembly buildings at Avangard into production space for commercial ventures, including the proposed establishment of a kidney dialysis manufacturing facility. DOE has helped facilitate the relationship between a Western business and Avangard and has allocated about $1.5 million to support this effort. For example, DOE said it has spent several hundred thousand dollars to make commercial space available to potential Western businesses. In August 2000, the Secretary of Energy traveled to Sarov to dedicate the newly established commercial space as part of a new “technopark.” In addition, the NCI program has continued to help Avangard, MINATOM, and the Western company work together to develop a sustainable commercial relationship. The Western company has been looking for a business partner to help it enter into new promising markets, such as Russia. Avangard has manufactured dialysis machines for several years, and the Western company is hoping to take advantage of those skills while expanding into Russia and parts of Europe. According to DOE, Avangard would devote the majority of its initial efforts to manufacture disposable products that are used for various dialysis treatments. 
The NCI program plans to use the remaining project funding to help prepare the buildings for producing the dialysis components, but those funds have not yet been spent. DOE has also allocated $1.25 million from the Initiatives for Proliferation Prevention program to support production development at the site. In January 2001, an official of the Western company said that he was optimistic about starting production by the end of the year. He expected his company to begin installing manufacturing equipment during the summer of 2001. If the project progresses as planned, the company expects to employ about 150 Avangard weapons assembly employees on a full-time basis. The official said that the number of employees could grow to 1,000 over time. About One-Half of the NCI Projects Are Not Designed to Provide Jobs for Weapons Scientists About one-half of the NCI projects have been established to fund a variety of activities in the nuclear cities. These projects include infrastructure improvements, cooperation with the European Bank for Reconstruction and Development to provide small business loans that are available to city residents, business training, marketing, and feasibility studies. In addition, these projects include community development efforts, such as youth exchange programs and health care services. According to DOE, while these projects may increase the potential for job creation in the closed cities, they are not all designed to directly lead to new jobs for weapons scientists. DOE officials believe that community development projects are needed to improve the economic and social conditions in the cities in order to make them more attractive to commercial investors. However, MINATOM and weapons institute officials have criticized DOE’s decision to fund community development activities and small business loans, claiming that they do not lead directly to employment opportunities or provide sustainable jobs for weapons scientists. 
DOE has allocated about $1 million through December 2000 to a dozen separate activities that fall into the category of community development. The activities include school exchange programs, Sister Cities exchange programs, and health care services. According to DOE, community development activities are needed to bolster the cities’ ability to provide self-sufficient services, develop municipal capabilities and strengthen citizen and entrepreneurial networks, and build political and economic ties. In addition, DOE officials told us that community development activities are needed to help make the cities more attractive to potential Western investors. However, none of the industry officials whom we talked to during the course of our audit indicated that they would be more likely to invest in the nuclear cities because of municipal and social improvements. MINATOM officials have stated in the past that while these activities may be worthwhile, they do not support them as part of the NCI program because they will not create jobs. In the May 2000 Joint Steering Committee meeting, a MINATOM official stated that job creation was the primary goal of the NCI program and the 1998 NCI government-to-government agreement. He noted that MINATOM believed that only activities that create real jobs should be included under the NCI agreement and that community development activities, should they continue, need to be covered by a separate agreement. According to DOE officials, the community development component of NCI was considered by the former DOE Assistant Secretary responsible for the program to be a vital activity. A July 1999 House Appropriations Committee report accompanying the Energy and Water Development Appropriations Bill, 2000, raised concerns about DOE’s expertise in implementing the NCI program. The report stated that DOE should work with other federal agencies that are implementing similar programs in Russia. 
As a result, DOE has attempted to include other agencies in the program’s implementation. For example, DOE’s community development activities have worked in tandem with other U.S. government agencies. The U.S. Agency for International Development has granted about $387,000 to a U.S. nongovernmental organization to carry out community health care projects in Sarov and Snezhinsk. NCI has also given a grant to this organization to implement the community health care project in Snezhinsk. These projects are not intended to directly support work by weapons scientists or engineers but to improve the level of health care service in the cities. One of the NCI program’s other major projects has been to enter into a cooperative arrangement with EBRD to extend the bank’s Russia Small Business Fund to the nuclear cities. DOE believes that the loan programs are important to diversify the economies of the cities, although the loans are not necessarily assisting weapons scientists. The Department awarded $1.5 million to EBRD in February 2000 for the bank to set up the programs. As of December 2000, the bank had spent over $438,000 of the $1.5 million on salaries for its own staff consultants, on training new loan officers in the cities, and on operating expenses. According to the bank, as of February 2001, it had made about 280 loans to businesses in the cities. DOE routinely receives information on the loan program, but that information does not provide details about the background of the loan recipients. However, according to information from EBRD on loans made in Snezhinsk, the recipients are typically not current employees of the weapons institutes and the loans are not necessarily used to start new businesses. Furthermore, the businesses that receive loans are mostly in the retail trading sector, such as clothing and household goods stores. 
Some MINATOM officials told us that they question the value of the loan programs, noting that the loans are not going to the types of businesses that are appropriate for highly educated weapons scientists. Officials from the weapons institute in Sarov told us that they did not request the loan program and objected to DOE’s using NCI funds to start it because it does not play a role in restructuring the workforce. (See app. III for more details about the loan program.) About One-Third of the NCI Projects Are Designed to Develop Sustainable Commercial Ventures Eight, or about one-third, of the NCI projects we reviewed are designed to develop sustainable commercial ventures. To date, only one of these has had success in creating jobs; it involves a small company started in Snezhinsk to market and service bar-code technology and other automated devices that are used to identify and inventory property. The Russian company was formed in February 2000 by six former weapons institute employees. According to a national laboratory official, these employees left the institute to form the company. The NCI program allocated $395,000 to the project in fiscal years 1999 and 2000. According to a national laboratory official, the Russian company has used the funds to pay for office space, equipment, and salaries. It also used NCI funds to enter into one contract to receive training and has entered into agreements to distribute and service bar-code and auto-identification technologies manufactured by three U.S. companies. DOE has canceled several NCI projects that were intended to create jobs for weapons scientists for a variety of reasons. According to DOE, many projects were designed to “jump-start” the program with the expectation that not all would evolve into large-scale jobs creation projects. Furthermore, several of these projects were subsequently determined not to be viable or ran into difficulties, and they have either been canceled or stalled. 
For example, the program funded one project in Zheleznogorsk to expand the capacity for recycling luminescent tubes that contain mercury. DOE allocated $250,000 to this project but spent only $2,000. The national laboratory official responsible for overseeing the project said that MINATOM was not willing to bring the recycling technology out of the restricted part of the city. Because access restrictions prevented DOE from working to expand the recycling capacity within the secure area of the institute, the Department canceled the project. DOE funded another project to determine the viability of producing canola oil in the Zheleznogorsk region. The oil can be used for cooking and animal feed, and industrially to make lubricants, fuels, and soaps. Initial work under the project would have been to determine whether the crop could be successfully grown in the area. According to the national laboratory official responsible for overseeing the project, DOE and officials from the weapons institute in Zheleznogorsk were interested in the idea, but the city’s mayor was not. The national laboratory official told us that the mayor was more interested in promoting the production of barley for livestock that could also be used to make beer and vodka to bring in tax revenues for the city. The national laboratory official was denied access to the city when she tried to promote the project. DOE allocated $302,000 to the project and spent about $114,000 before canceling it. Other NCI projects have been canceled or delayed due to a lack of Russian support and cooperation. For example, in the case of one approved project, Russian officials have not provided DOE with business and marketing plans and other financial information, claiming that the information is proprietary or includes trade secrets. According to DOE officials, NCI projects would be more likely to succeed if Russia demonstrated its support by contributing funds to the projects. 
The most successful commercial effort we observed in the nuclear cities involved a major U.S. computer company that employs former weapons scientists in Sarov. This effort, which began about 7 years ago, has been undertaken without U.S. government assistance and now employs about 100 scientists. This commercial venture is discussed in more detail in appendix IV. NCI Program Faces Numerous Impediments to Success In addition to the lack of Russian support for some projects, there are numerous other reasons for the limited initial success of the NCI program. These include poor economic conditions in Russia, the remote location and restricted status of the nuclear cities, the lack of an entrepreneurial culture among weapons scientists, and the inadequacy of the NCI program’s project selection process. As we reported in November 2000, international aid efforts have had difficulty in promoting economic growth in Russia. The country appears to be a long way from having a competitive market economy, and its transition over the past decade has been more difficult than expected. DOE faces even greater problems in trying to promote economic development in the nuclear cities. The cities are geographically and economically remote. Although the cities have a skilled and well-educated workforce, those residents have depended upon government support for their livelihood and do not generally have experience in business or entrepreneurial ventures. According to DOE and industry officials, access to the nuclear cities has been a major impediment. The Russian government requires that all visitors apply for an access permit at least 45 days before arriving but does not always grant those requests. DOE provided us with a list of 25 instances since 1999 in which the Russian government denied requests from DOE headquarters staff, national laboratory staff, U.S. embassy personnel, and Members of Congress for access to one or more of the three cities. (See app. V for more detail.) 
Complications over a request for access even led to the cancellation of a scheduled Joint Steering Committee meeting in November 2000, which the NCI program director considered a major setback to the program. A MINATOM official told us that the access problem is greatly exaggerated, further noting that “hundreds” of officials have visited Russia on behalf of the NCI program. The MINATOM official also told us that access would be even better as more NCI funds reach the nuclear cities. Notwithstanding the views of MINATOM officials, industry officials told us that the difficulties in obtaining access were a detriment to doing business in the nuclear cities. Several industry representatives told us that the 45-day waiting period would cause serious problems for their commercial ventures in the cities. The EBRD official responsible for managing the loan programs also told us that access problems are an impediment to doing business. Because of access problems, EBRD consultants have had to bring people outside of the cities for training. The official also told us that difficulties with access would make it harder to oversee the loans. NCI’s Projects Were Not Adequately Screened The success of NCI projects has also been limited by the program’s failure to rigorously screen projects before approving them. In May 1999, DOE issued a program plan that included a project selection and approval process. NCI program staff were to screen project proposals to determine their suitability with respect to the program’s objectives by using a list of criteria developed by the Joint Steering Committee. The criteria included such factors as the number, cost, and sustainability of created jobs, the involvement of industry, and whether the project could enhance Russian weapons technology. The process then called for proposals to be reviewed by (1) one or more of three types of working groups; (2) a technical committee comprising government and nongovernment officials; and (3) other U.S. 
government agencies and offices within DOE with an interest in aid to Russia. DOE and national laboratory officials have told us, however, that the implementation of the project approval process to date has been inconsistent and “ad-hoc.” DOE officials told us that the program did not have documentation to show how approved projects had moved through the review process. According to the NCI program director, projects were approved for funding without a comprehensive review process in order to implement the program quickly and engage the Russians. In addition, although projects are reviewed by DOE and MINATOM through the workings of the Joint Steering Committee, MINATOM officials have not supported several of the major NCI projects, including the EBRD small business loan programs and the community development projects, because they did not directly lead to sustainable jobs for weapons scientists. According to DOE officials, DOE and MINATOM have differing views about what the NCI program should be funding. MINATOM believes that only projects that lead directly to jobs creation should be funded, while DOE has asserted that many different activities—in addition to jobs creation—need to be addressed as part of the program. In the National Defense Authorization Act for Fiscal Year 2001, the Congress directed that DOE establish and implement project review procedures for the NCI program before DOE would be allowed to obligate or expend all of its fiscal year 2001 appropriation. The act specified that the procedures shall ensure that any scientific, technical, or commercial NCI project (1) will not enhance Russia’s military or weapons of mass destruction capabilities; (2) will not result in the inadvertent transfer or utilization of products or activities under such project for military purposes; (3) will be commercially viable within 3 years; and (4) will be carried out in conjunction with an appropriate commercial, industrial, or nonprofit entity as partner. 
In response, in January 2001, DOE issued new guidance for the NCI program that includes more detail on the project selection and approval process. For example, the guidelines spell out the process by which DOE will review projects—internally and with interagency assistance—for any military application. The review process is also supposed to confirm that scientific, technical, and commercial projects will have a partner and that they are commercially viable. It is too early to tell how closely DOE will adhere to this project-approval process. In addition, the new guidance states that DOE will give preference, to the extent possible, to those projects with the strongest prospects for early commercial viability and those in which start-up costs are shared with other U.S. government agencies, Russian partners, and/or private entities. Duplication Has Occurred in the Operation of DOE’s Two Programs in Russia’s Nuclear Cities The Nuclear Cities Initiative and the IPP program share a common underlying goal—to employ Russia’s weapons scientists in nonmilitary work. Unlike the IPP program, NCI has a community development component that is designed to create conditions necessary for attracting investment in the nuclear cities. The operation of these two similar programs in Russia’s nuclear cities has led to some duplication of effort, such as two sets of project review procedures and several similar types of projects. Both the IPP program and NCI operate in and provide funds to Russia’s nuclear cities. Since 1994, DOE has spent over $13 million on about 100 IPP projects in five nuclear cities, including the three nuclear cities participating in the NCI program—Sarov, Snezhinsk, and Zheleznogorsk. According to IPP’s Deputy Director, several of the projects have funded the development of promising technologies, such as prosthetic devices and medical implants, nuclear waste cleanup technology, and portable monitoring devices to detect nuclear material. 
He told us that these projects might be commercialized in the next few years. One U.S. national laboratory official told us that there was not a clear distinction between the two programs, and other laboratory officials noted that some projects have been proposed for funding under both programs, shifted from one program to another, or received funding from both programs. For example, in the case of the kidney dialysis equipment project, NCI has funded infrastructure improvements, and IPP has funded a small planning effort and also plans to fund some activities related to the manufacture of disposable products. Both the NCI and IPP programs reside within DOE’s Office of Defense Nuclear Nonproliferation, National Nuclear Security Administration. In addition, the programs have adjoining offices and share staff to perform budget, travel, and secretarial functions. The Directors of the NCI and IPP programs told us that, in their opinion, there was nothing wrong with some overlap in projects or in sharing administrative functions. The IPP program director told us that although he did not believe that the two programs were duplicative, there is potential for duplication to occur because both have a common approach for creating jobs in the nuclear cities. Some of the failures of the NCI commercial development projects might have been avoided if DOE had had a common project approval process and had incorporated some of the elements of the IPP project selection process from the outset of the program. In 1999, we recommended that DOE eliminate those IPP projects that did not have commercial potential. Subsequently, DOE implemented our recommendation and strengthened its project selection process. IPP requires that all proposed projects have an industry partner to help ensure the commercial viability of each project. The IPP program has also relied on the U.S. Industry Coalition to help evaluate and develop commercial projects. 
In contrast, the NCI program has not established a similar relationship with the Coalition or any other industry group nor has it required an industry partner for its projects. On March 21, 2001, DOE solicited the Coalition’s support in disseminating information among its members about the Nuclear Cities Initiative. Most of NCI’s initial commercial development projects would not likely have been approved under the IPP program’s more rigorous approval process. This is because unlike the IPP program, the NCI program did not require that projects have industry partners or demonstrate commercial viability until January 2001, when program guidance on the subject was issued. In addition, the program has only recently begun to develop a more systematic process, as IPP has, for obtaining the views of business or industry experts on the commercial viability of projects. According to the Deputy Director of the NCI program, DOE is now developing a contract with a consulting firm that will review proposed projects for commercial viability. In addition, the NCI program has recently adopted practices established under the IPP program regarding the funding of projects. In January 2001, the NCI program required that 65 percent of all project funds be spent in Russia. The guidance is similar to a congressional restriction on the IPP program, which mandates that no more than 35 percent of IPP funds may be obligated or spent by the national laboratories to carry out or provide oversight of any program activities. Moreover, the IPP program has allocated funds to the national laboratories accompanied by approval letters that specify the exact amount of funding to be allocated (and spent) at the laboratories and in Russia. A similar approval letter procedure has only recently been adopted for the NCI program. 
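The 65-percent requirement and the parallel 35-percent cap described above amount to a simple arithmetic rule on where project dollars are booked. The following is a minimal sketch of how such a split might be checked; the function names and dollar figures are hypothetical illustrations, not part of any actual DOE accounting system.

```python
# Hypothetical sketch of the funding-split rule described above: at least
# 65 percent of a project's funds must be spent in Russia (equivalently,
# at most 35 percent in the United States). All names and figures below
# are invented for illustration.

RUSSIA_SHARE_MINIMUM = 0.65

def russia_share(expenditures):
    """Return the fraction of total spending booked to Russia.

    `expenditures` maps a location key ("russia" or "us") to dollars spent.
    """
    total = sum(expenditures.values())
    return expenditures.get("russia", 0) / total if total else 0.0

def meets_split_requirement(expenditures):
    """True if the project satisfies the 65-percent-in-Russia rule."""
    return russia_share(expenditures) >= RUSSIA_SHARE_MINIMUM

# A project booking $700,000 in Russia and $300,000 in the United States
# spends 70 percent in Russia and satisfies the requirement...
compliant = {"russia": 700_000, "us": 300_000}
# ...while one booking only half of its funds in Russia does not.
noncompliant = {"russia": 500_000, "us": 500_000}

print(meets_split_requirement(compliant))     # True
print(meets_split_requirement(noncompliant))  # False
```

The same check, with the threshold raised or lowered, would apply to the 49-percent cap on U.S. spending that the conference report later imposed; the substance of DOE's oversight problem is ensuring the expenditure data fed into such a check are accurate, not the arithmetic itself.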
Although the programs have many similarities, the level of access to the nuclear cities granted to DOE officials is strikingly different, depending on which program they are representing. For example, officials of the nuclear city of Snezhinsk do not allow DOE and national laboratory officials access to the restricted weapons institutes under NCI. This restriction has impeded the implementation of a few NCI projects. For example, a U.S. national laboratory official told us that he was not granted access to visit a weapons institute in Snezhinsk to observe the equipment being considered for use in an NCI project related to the development of fiber optics. As a result, this project has been canceled. However, the same U.S. official was allowed access to observe this same equipment 2 years earlier when he visited the site under an IPP-sponsored visit. European Nuclear Cities Initiative Focuses on Employing Scientists in Russia’s Nuclear Cities The European Nuclear Cities Initiative, a proposed program that is being supported by the Italian Ministry of Foreign Affairs, is designed to create jobs in Russia’s nuclear cities. This proposal is expected to be smaller in scope than DOE’s NCI, but officials responsible for the effort told us that ENCI should complement and support the U.S. program. We found some significant differences between the two programs. For example, ENCI is expected to (1) target older weapons scientists who are considered to pose a greater proliferation risk than younger scientists who could be more easily assimilated into the Russian economy; (2) start in two nuclear cities; and (3) emphasize environmental and energy-efficiency projects. Furthermore, officials responsible for ENCI told us that it will not emphasize establishing sustainable commercial ventures in the cities. 
Instead, ENCI proposes to fund projects that utilize Russian weapons scientists’ skills to help develop environmental and energy-related technologies that can be used by European companies. The ENCI proposal is expected to complement DOE’s program. It has been developed and promoted primarily by an Italian nongovernmental organization known as the Landau Network-Centro Volta and by the Italian National Agency for New Technology, Energy and Environment. It has received support from the Italian Ministry of Foreign Affairs. According to a Landau Network-Centro Volta official, ENCI shares the same basic nonproliferation objectives as DOE’s program but will be significantly smaller in scope and size. Furthermore, the European proposal has developed an overall approach and set of proposed activities that differ from the DOE program in several ways. For example, ENCI plans to focus on environmental cleanup and energy-efficiency technology projects that Landau officials believe tap into the strengths of the weapons scientists in the two nuclear cities. Italian officials do not believe that the cities possess sufficient commercial potential to develop sustainable business enterprises in the foreseeable future. As a result, they believe that it makes more sense to develop projects that employ nuclear city weapons scientists as contractors to provide technical assistance to help solve environmental and energy problems in Europe. They also believe that over time, it might be possible to attract Western business partners to enter into commercial relationships with the city if the initial projects prove successful. Program Funding Levels Are Uncertain According to officials from Italy and the European Commission, ENCI will start in two cities—Sarov and Snezhinsk. However, funding for ENCI is uncertain. 
Italian officials estimated that $50 million from various donors, including individual countries as well as the European Commission, will be needed to implement the program over the next 5 years. An Italian Ministry of Foreign Affairs official told us that Italy is considering funding one project in 2001 at a cost of between $500,000 and $800,000. A European Commission official told us that funding levels would probably be modest because some member states do not perceive that unemployed Russian weapons scientists pose a serious proliferation threat. He noted that many European countries were more concerned about the threat posed by nuclear materials in Russia and are more inclined to fund programs that would ensure greater accountability and control over these materials. Furthermore, this official said that member states of the European Commission want more details about the ENCI proposal before they are willing to make a decision about funding for the program. In December 2000, the Italian Ministry of Foreign Affairs—in collaboration with the Landau Network-Centro Volta and the Italian National Agency for New Technology, Energy and the Environment—prepared a list of 34 projects proposed by representatives from Sarov and Snezhinsk. These projects are focused on innovative technologies and energy and environmental issues. Some of these proposed projects are designed to develop environmental centers in Sarov and Snezhinsk, develop renewable energy sources, investigate advanced technological components for fuel cells, and create energy-efficiency centers in Sarov and Snezhinsk. The projects are expected to last from 1 to 3 years, with costs ranging from about $69,000 to over $1.8 million. Each proposed project assumes that Russia will fund part of the project. Job creation estimates are included in each project proposal and range from 20 to 50 jobs per project. 
These projects will be submitted to European Commission members for review and are expected to be discussed at an April 2001 ENCI working group meeting. Italian officials told us that they hope that the Commission would provide funding for some of these projects after the meeting takes place. DOE and Russian Officials Express Support for ENCI DOE officials believe that ENCI will support the goals of the Nuclear Cities Initiative. DOE’s NCI program director said that it is important to increase other countries’ participation in this effort and believes that both programs can work together in the nuclear cities. Although the director noted that the programs have different strategies for creating jobs for weapons scientists, he believes that both are complementary. The U.S. government and the European Commission have started to coordinate their assistance efforts in the nuclear cities. In June 2000, the State Department and DOE jointly sent a letter to the Commission encouraging initiatives that (1) complement efforts to promote nuclear nonproliferation, (2) help downsize Russia’s nuclear weapons complex, and (3) enhance scientific and technical cooperation with scientists in the closed nuclear cities. The Departments noted that in December 1999, several U.S. government representatives participated in an international forum to discuss ENCI. ENCI was viewed as potentially augmenting ongoing U.S. and other international activities, including the Initiatives for Proliferation Prevention program and the International Science and Technology Center’s activities focused on the nuclear cities. MINATOM officials told us they would welcome assistance through ENCI. They stated that the effort to employ weapons scientists in the nuclear cities is a great challenge and believe that ENCI can contribute to accelerating the pace of Russia’s downsizing effort. 
In a July 2000 letter addressed to the European Commission, MINATOM’s first deputy minister stated that Russia supports the efforts of the Commission to help find jobs for weapons scientists. He noted that Russia was ready to begin taking steps to pave the way so that ENCI could begin working in the nuclear cities. Conclusions DOE’s effort to help Russia create sustainable commercial jobs for its weapons scientists and help downsize its nuclear weapons complex is clearly in our national security interests. It also poses a daunting challenge. The nuclear cities are geographically and economically isolated, access is restricted for security reasons, and weapons scientists are not accustomed to working for commercial businesses. Thus, Western businesses are reluctant to invest in the nuclear cities. However, the successful collaboration of a major U.S. computer firm in the Russian nuclear city of Sarov, without U.S. government assistance, is an example of what can be accomplished over time if the skills of Russia’s weapons scientists are properly matched with the needs of business. Although DOE has had some modest successes with helping Russia create jobs for its weapons scientists and downsize its nuclear weapons complex, we believe that DOE needs to rethink its strategy. A disproportionate percentage of program funds is being spent in the United States—about 70 percent—most of which are going to the U.S. national laboratories instead of to Russia. This is also a major irritant to Russian officials who told us that if DOE is serious about creating jobs in the nuclear cities, a larger percentage of program funds should be spent in Russia. A conference report on DOE’s fiscal year 2001 appropriations has directed that no more than 49 percent of Nuclear Cities Initiative funds be spent in the United States and DOE has incorporated this goal into its program guidance. DOE will have to more effectively monitor and control program spending to meet this goal. 
We are encouraged that one U.S. national laboratory has negotiated lower overhead rates in order to put more resources in Russia and that DOE has taken steps, as a result of our review, to systematically track U.S. and Russian program expenditures. However, DOE has not developed the quantifiable program goals and milestones that are needed to track progress and make decisions about future program expansion to other nuclear cities and the level of resources needed to continue the program. About one-half of the NCI projects are not designed to create businesses or lead to sustainable employment but rather focus on infrastructure, community development, and other activities. In our view, DOE needs to concentrate its limited program funding on those activities that will most realistically lead to sustainable employment for weapons scientists. Attempting to change the social fabric of the nuclear cities through community development projects, thereby making the cities more attractive to potential investors, may not be a realistic or affordable goal. Furthermore, industry representatives told us that the outcome of these types of projects would have little impact on a company’s decision to invest in the nuclear cities. Indeed, MINATOM and weapons institute officials from Sarov have questioned the value of community development projects because they do not create sustainable jobs in the nuclear cities. While we believe that the above changes are necessary to improve the implementation of NCI, in our view, a more fundamental question needs to be addressed by DOE. Does the Department need two separate programs operating in Russia’s nuclear cities with the same underlying goals and, in some cases, the same types of projects? The IPP program and NCI share a common goal—the employment of Russian weapons scientists in alternative, nonmilitary scientific or commercial activities. 
Combining the two programs could alleviate many of the concerns we have with the implementation of NCI. For example, the IPP program already has established limits on the amount of funds to be spent in the United States and Russia as well as a strengthened project review and selection process that focuses on the commercialization of projects and job creation. Furthermore, efficiencies might be gained by combining the administrative structures of both programs, particularly given that the overhead rates at most national laboratories are relatively high. While we are encouraged that DOE has already taken some steps to reduce laboratory costs, there may be additional opportunities for cost savings in this area. Ultimately, the success of DOE’s efforts to create jobs for Russia’s weapons scientists depends on industry’s willingness to invest in the nuclear cities and elsewhere throughout Russia. We believe that there is a limit to what U.S. government assistance can do in this regard. It is instructive to note that the proposed ENCI limits and targets its assistance because of the difficulty involved in creating sustainable commercial businesses in the nuclear cities. We also believe that this is an appropriate time for the Department to take a closer look at the operations of both its programs and determine how they could work more efficiently and effectively as part of a more consolidated effort. This determination should include an analysis of what changes in both programs’ authorizing legislation would be required. 
Recommendations for Executive Action We recommend that the Administrator, National Nuclear Security Administration, improve efforts targeted at the nuclear cities by

- evaluating all of the ongoing NCI projects, particularly those that focus on community development activities, and eliminating those that do not support DOE’s stated objectives of creating jobs in the nuclear cities and downsizing the Russian nuclear weapons complex;
- establishing quantifiable goals and milestones for job creation and downsizing of the weapons complex that will more clearly gauge progress in the nuclear cities, and using this information to help assess future program expansion plans and potential costs; and
- strengthening efforts to reduce the national laboratories’ costs to implement the program in an effort to place more NCI funds in Russia.

In addition, the Nuclear Cities Initiative and the Initiatives for Proliferation Prevention program share a common goal and, in many cases, are implementing similar types of projects. In order to maximize limited program resources, we also recommend that the Administrator determine whether the two programs should be consolidated into one effort—including a determination of what changes in authorizing legislation would be necessary—with a view toward achieving potential cost savings and other programmatic and administrative efficiencies. Agency Comments and Our Evaluation We provided the Department of Energy with copies of a draft of this report for review and comment. DOE’s written comments are presented in appendix VII. DOE concurred with our recommendations and provided technical comments that were incorporated in the report as appropriate. DOE provided additional comments on the following issues: (1) job creation and complex downsizing, (2) economic diversification, (3) the similarities between NCI and the IPP program, and (4) program metrics and project review. 
DOE noted that our report focused on job creation as the primary measure of NCI success or as the metric for individual activities. In DOE’s view, this reflects an inadequate appreciation of the goals of the program. The program’s goal is not simply funding the employment of weapons scientists but also downsizing Russia’s weapons complex through economic diversification. The outcome of this approach, DOE contends, is sustainable alternative nonweapons jobs that ultimately move scientists out of the weapons facilities. We recognize that Congress has identified the objectives of the NCI program as being both job creation and downsizing Russia’s nuclear weapons complex. Although this report focuses more on job creation, we have identified, where appropriate, the downsizing of Russia’s weapons complex as another objective of the program. We have focused on the job creation objective for a number of reasons. First, it is highlighted in the government-to-government agreement between the United States and Russia which states that the purpose of the NCI program is to create a framework for cooperation in facilitating civilian production that will provide new jobs for displaced workers in the nuclear cities. Second, the Russian officials we met with told us that they are judging the NCI program by one standard—the creation of sustainable jobs. These Russian officials have criticized community development projects because these projects do not lead directly to employment opportunities or provide sustainable jobs for weapons scientists. In addition, the industry representatives we talked to said that the outcomes of the community development projects would have little impact on their company’s decision to invest in the nuclear cities. We continue to believe that DOE needs to concentrate its limited program funding on those projects that will most realistically lead to sustainable employment for weapons scientists. 
Regarding economic diversification, DOE stated that MINATOM would prefer that funding be provided directly for major projects through a top-down approach that reflects central planning. According to DOE, successful economic diversification efforts in the United States have occurred based on active partnerships among government, industry, and the community, which support entrepreneurship and “growth from below”—a goal endorsed by the NCI program. In our view, DOE’s premise that economic diversification approaches in Russia can be modeled after U.S. experiences may be misleading. The economies and social and political structures of the two countries are not comparable. As we noted in our report, (1) international aid efforts have had difficulty promoting economic growth in Russia, (2) the country appears to be a long way from having a competitive market economy, and (3) Russia’s transition experience over the past decade has been more difficult than expected. Regardless of the approach that is taken to stimulate economic development in the nuclear cities, we continue to believe that DOE faces a daunting challenge in meeting the ambitious goals of the NCI program. We also continue to question, as we did in our 1999 report, whether DOE possesses the expertise needed to develop market-based economies in a formerly closed society. DOE also noted that our discussion of duplication between NCI and IPP reflects an incomplete understanding of the differing, but complementary, goals of the program. DOE noted that IPP is an older program that focuses on the commercialization of technology inside the weapons institutes of the nuclear cities, while NCI focuses only in the municipal areas of the nuclear cities. In DOE’s view, it is not surprising that program managers at the national laboratories might seek funding for the same proposed activity from NCI and IPP. 
According to DOE, scientists all over the world try to maximize their chances of receiving grants by applying to multiple sources, and such activity does not make NCI and IPP duplicative or automatic candidates for administrative consolidation. While we recognize that differences exist in the implementation of both programs, both programs share a common underlying goal—the employment of Russian weapons scientists in sustainable, alternative, nonmilitary scientific or commercial activities. Therefore, we continue to question whether DOE needs two separate programs with two sets of similar project review procedures funding numerous similar types of projects in the nuclear cities. As noted in the report, we found that some NCI projects have (1) been proposed for funding under both programs, (2) shifted from one program to another, or (3) received funding from both programs. Combining the two programs could also alleviate many of the concerns we have with NCI’s implementation such as strengthening the project selection and review process. Furthermore, we continue to believe that efficiencies might be gained by combining both programs. Finally, DOE noted that the Nuclear Cities Initiative is less than 2-1/2 years old and that project review processes and program metrics need time to mature and be fully implemented. DOE stated that new project review procedures have been instituted to ensure effective coordination and that the program’s performance is being measured. While we recognize in the report that new procedures have recently been put into place, it is unclear to us why it took DOE over 2 years to develop and implement these procedures when similar procedures already existed under the IPP program. As noted in the report, some of the failures of the NCI commercial development projects might have been avoided if DOE had a common project approval process and had incorporated some of the elements of the IPP project selection process from the outset of the program. 
Concerning NCI’s program metrics, we recognize in the report that DOE has performance measures, but we continue to believe that these measures require greater specificity. For example, without specific targets, such as the number of scientists that DOE plans to help find jobs for, it is difficult to determine whether the program is on track to meet its long-term objectives. DOE has concurred with our recommendation to establish quantifiable milestones that will more clearly gauge the NCI program’s progress in the nuclear cities. Scope and Methodology To determine the amount of NCI program funds spent in the United States and Russia, we obtained data from DOE’s headquarters and the U.S. national laboratories. Our task was complicated because DOE and the national laboratories were not systematically tracking these types of data. As a result, we developed, in cooperation with DOE’s Nuclear Cities Initiative budget officer, a standardized format and agreed-upon definitions for capturing this information for each laboratory by various cost components, such as salary and benefits, overhead, and travel. The format also was used to help identify program expenditures in the United States and Russia. We reviewed the data submissions from the laboratories to ensure that the program expenditures were grouped by the appropriate expenditure categories. We had numerous discussions with DOE and several national laboratories’ financial officers to ensure that the data were consistent and conformed with agreed-upon definitions of what comprised U.S. and Russian costs. In cooperation with the NCI program office, we reviewed all of the cost data submitted by the national laboratories to ensure that expenditures were consistently categorized. In several instances, we worked directly with national laboratory program and finance officials to clarify or supplement cost data they had provided. 
To assess the NCI projects and their impact, we reviewed all of the projects that had been implemented by DOE. We developed a list of projects from information provided by DOE and the U.S. national laboratories. We made some judgments in order to arrive at a final list of projects to review. For example, we excluded activities involving the development of strategic plans, workshops, and other support activities because, while these efforts support the program, we did not consider them to be projects in their own right. In addition, we decided to consider all of the community development activities as one project because those activities involved relatively small expenditures of funds. The NCI program staff concurred with these and other judgments we made about the projects. (See app. VI for a list of projects reviewed.) To assess the impact of the NCI projects, we used, whenever possible, the information contained in DOE’s NCI database to determine the extent to which each project focused on critical nonproliferation objectives, such as the number of weapons scientists engaged in the project and its potential commercialization benefits. However, we found that the database did not always contain current information. We also met or spoke with the principal investigator for each project or a representative who was familiar with the project. We discussed how projects were meeting these objectives and what role the investigator played in meeting these objectives. We met or spoke with officials from the following national laboratories to discuss NCI projects: Argonne National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, Sandia National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, National Energy Technology Laboratory, Westinghouse Savannah River Company, and the Kansas City Plant. We also met with representatives from DOE to discuss those projects that were being managed by DOE’s headquarters. 
During the course of our work, we also met with or had discussions with officials from the Department of Commerce, the Department of State, the U.S. Agency for International Development, the U.S. Industry Coalition, Inc., the U.S. Civilian Research and Development Foundation, and the European Bank for Reconstruction and Development. In several instances, we contacted industry officials to follow up on the status of commercialization activities and obtain their views about trying to start businesses in the nuclear cities. For example, we discussed selected projects and related commercial activities with officials from ADAPCO, Fresenius Medical Care, Credit Suisse First Boston (Europe), Motorola, Oracle, Intel Corporation, and Delphi Automotive Systems. We toured the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) Software Technology Laboratory in Sarov, which is the company that a Western firm contracts with for software development. We visited Russia in September 2000 to meet with MINATOM officials in Moscow, including the first deputy minister. We traveled to Sarov to meet with representatives from VNIIEF and Avangard, the weapons assembly facility that is located in Sarov. During our visit to Sarov, we asked to visit the Avangard facility, but our request was denied. While in Sarov, we visited the Open Computing Center and met with numerous weapons scientists who were working there. We also visited the Analytical Center for Nonproliferation (one of the projects) and VNIIEF Conversia, the organization that seeks to develop commercial ventures in the city. We also met with the deputy mayor of Sarov to learn more about the economic and social conditions in that city. We also met with representatives from the nuclear city of Snezhinsk during our visit to Moscow. To obtain information about the status of the European Nuclear Cities Initiative, we visited Rome, Italy, and Brussels, Belgium, in January 2001. 
While in Rome, we met with officials from Italy’s Ministry of Foreign Affairs, the Landau Network-Centro Volta, and the Italian National Agency for New Technology, Energy and the Environment. In Brussels, we met with representatives from the European Commission’s Security Policy and External Relations Directorate. We conducted our work from August 2000 through April 2001 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Honorable Spencer Abraham, Secretary of Energy; John A. Gordon, Administrator, National Nuclear Security Administration; the Honorable Mitchell E. Daniels, Director, Office of Management and Budget; and interested congressional committees. We will make copies available to others upon request. Appendix I: Role of Russia’s Nuclear Cities in Weapons Design and Development This appendix provides information on Russia’s nuclear cities and their role in developing nuclear weapons. Appendix II: NCI’s Cumulative Expenditures as of December 2000 This appendix presents detailed information about the cumulative costs incurred, as of December 2000, by the national laboratories and the Department of Energy’s headquarters, to implement the Nuclear Cities Initiative program. Appendix III: DOE’s Small Business Loan Program in Russia’s Nuclear Cities In February 2000, DOE granted $1.5 million to the European Bank for Reconstruction and Development (EBRD) to establish small-loan programs in the three nuclear cities. EBRD is using local branches of Sberbank, which is the largest commercial bank in Russia, to implement the loan program in the cities. As of the end of December 2000, EBRD had spent about $440,000 of the $1.5 million. 
About 74 percent of those expenditures paid for the salaries of the EBRD employees who set up the loan programs and act as consultants. The remaining expenditures were used to train and employ 10 new loan officers hired from within the cities, train other potential loan officers, and cover standard operating expenses, such as office rent, communications, and travel. EBRD requested NCI funds to cover the administrative costs of the loan programs for the first 18 months of operation. Thereafter, the expectation is that the programs will be self-sustaining on the basis of the proceeds from loan repayments. According to the EBRD representative responsible for overseeing the loan programs, the bank is likely to request an extension from DOE if it has not spent the $1.5 million by the end of the 18-month period. The new loan departments in the Sberbank branches may borrow from EBRD’s existing $300 million Russian Small Business Fund. While EBRD has not set aside loan capital specifically for the three cities, business owners in Sarov, Snezhinsk, and Zheleznogorsk are now able to work with local loan officers to compete with other Russian businesses for micro loans (up to $30,000) and small loans (up to $125,000) from EBRD. Applicants can receive both a micro and small loan at the same time. As of the end of February 2001, EBRD had issued 279 loans totaling over $1,080,000. Nearly all of the loans were micro loans, and the average size was $3,879. EBRD reported that none of the loans were in arrears more than 30 days. The EBRD representative responsible for the program has projected that the level of loan activity will increase from about 30 loans per month in late 2000 to 130 per month by June 2002. If that level of activity is reached, the bank estimates that it will have issued over 1,600 loans totaling about $9 million by June 2002. 
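The loan totals and the bank's growth projection reported above can be roughly checked with back-of-the-envelope arithmetic. The short sketch below reproduces the cited figures; the straight-line ramp from about 30 to 130 loans per month is our illustrative assumption for how the projection might be computed, not EBRD's actual method.

```python
# Rough check of the EBRD loan figures cited in the report.
# Inputs come from the report text; the linear ramp-up is an
# illustrative assumption, not EBRD's projection method.

loans_issued = 279          # loans issued as of end of February 2001
total_lent = 1_080_000      # dollars ("over $1,080,000")

# Average loan size: the report cites $3,879; dividing the rounded
# total by the loan count lands within a few dollars of that figure.
avg_loan = total_lent / loans_issued
print(f"average loan size: ${avg_loan:,.0f}")

# Projected growth: about 30 loans/month in late 2000 rising to
# 130/month by June 2002, assumed here to ramp linearly over
# roughly 18 months.
months = 18
start_rate, end_rate = 30, 130
projected_new = sum(start_rate + (end_rate - start_rate) * m / (months - 1)
                    for m in range(months))
projected_total = loans_issued + round(projected_new)
print(f"projected cumulative loans by mid-2002: about {projected_total:,}")
```

Under these assumptions the cumulative count comes out slightly above 1,700 loans, consistent with the bank's estimate of "over 1,600 loans" by June 2002.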
The representative also told us in February 2001 that she expected a total of 18 loan officers to be employed in the cities in the near future. DOE does not have good information on whether loan recipients were former weapons institute employees. What the Department has learned about the loan recipients in Snezhinsk—which it believes is representative of the three cities—suggests that most of the loans have gone to small retail and wholesale businesses, including food and household goods merchants. Information supplied by EBRD for loans in Snezhinsk through July 2000 showed that about one-third of the recipients were former institute engineers, physicists, or computer specialists, including some who left the institute in the early 1990s. According to the EBRD representative, the bank does not target loans to specific types of businesses, nor is EBRD concerned about placing limits on who is employed in the businesses that receive loans. The bank is interested in helping to create a sound economy in the cities that will include businesses that might employ spouses or children of weapons scientists and not just weapons scientists themselves. As EBRD has sufficient loan funds, it does not see any reason to ration these funds to a specific group while denying access to others, given that any economic activity in the cities is a benefit. The representative also said that EBRD probably would not have gone into Sarov, Snezhinsk, or Zheleznogorsk without NCI support. A former NCI staff person who was responsible for overseeing the grant to EBRD wrote that because virtually all inhabitants of the cities are employees of the institutes or dependents of employees, loans to small retail businesses are helping to foster entrepreneurial skills among institute employees or their dependents. In addition, the loan programs are helping to diversify the economy of the cities. Russian officials were critical of the loan program. 
According to a Deputy Director at VNIIEF, there was no coordination with the institute on the decision for NCI to support the loan program. He also said that the EBRD loans do not play a role in restructuring the VNIIEF workforce. The First Deputy Director of MINATOM told us that in his view, the EBRD loan program is inefficient. He noted that the loans are small and the interest rates high (about 38 percent). The bank loans result in a very fast turnover of capital and do not result in production facilities that create self-sustaining enterprises. In his view, butcher shops and flower shops are good, but they do not resolve the fundamental problem of promoting self-sufficiency for weapons scientists. Appendix IV: Successful Commercial Venture Established in Sarov Without U.S. Government Assistance During the course of our review, we found that a major U.S. computer company employs former weapons scientists in Sarov and has done so without U.S. government assistance. According to the company official responsible for the work in Sarov, in the early 1990s, a Russian-speaking employee of the company who was familiar with the skills available in the nuclear cities pursued the idea of starting an operation in Russia. A representative of the U.S. company met with officials from Sarov and determined that the company could benefit by taking advantage of the scientists’ skills in mathematics and attractive salary scale. Over the past 7 years, the number of former weapons scientists under contract to the U.S. company has grown from less than 10 to about 100. Although the software operation in Sarov is partly owned by the weapons institute in that city—the All-Russian Scientific Research Institute of Experimental Physics—the scientists are no longer employed by the weapons institute. 
When we visited the software operation in September 2000, we were told that the employees work full time and that their salaries are up to three times what they had been paid at the weapons institute. The official who oversees the work in Sarov also told us that other technology firms have expressed an interest in working in the closed cities but have not made the commitment. He said that, while his company has been very pleased with the productivity of the operation in Sarov, it is difficult for Western companies to work in Russia because of language problems, restricted access, and the lack of a relationship with the Russian government. For example, gaining access to Sarov on a regular basis has been difficult for his company, although it has become easier. He believes that the NCI program can help Western businesses overcome these obstacles by, among other things, keeping channels of communication open with MINATOM and nuclear city officials. At the same time, he suggested that the program should concentrate its efforts on projects that will play to the strengths of the Russians. For example, he believes that projects that attempt to link the research and analytical skills of the scientists with the needs of Western companies will be more likely to succeed than projects that attempt to start new commercial ventures in the closed cities. Appendix V: Denials of Access Requests to Three of Russia’s Nuclear Cities This appendix presents information on 25 instances since 1999 in which the Russian government denied requests for access to nuclear cities made by DOE staff and others. According to DOE officials, some requests were denied more than once, while a significant number of requests were approved at a later date. Appendix VI: NCI Projects Reviewed by GAO Appendix VII: Comments From the Department of Energy
The United States and Russia began an ambitious nonproliferation program, the Nuclear Cities Initiative (NCI), in 1998 to create sustainable job opportunities for weapons scientists in Russia's closed nuclear cities and to help Russia accelerate the downsizing of its nuclear weapons complex. The program, however, poses a daunting challenge. The nuclear cities are geographically and economically isolated, access is restricted for security reasons, and weapons scientists are not accustomed to working for commercial businesses. Thus, Western businesses are reluctant to invest in the nuclear cities. This report reviews (1) the costs to implement NCI, including the amount of program funds spent in the United States and Russia, as well as planned expenditures; (2) the impact of NCI projects; and (3) the status of the European Nuclear Cities Initiative. GAO summarized this report in testimony before Congress; see: Nuclear Nonproliferation: DOE's Efforts to Secure Nuclear Material and Employ Weapons Scientists in Russia, by Gary L. Jones, Director, Natural Resources and Environment, before the Subcommittee on Emerging Threats and Capabilities, Senate Committee on Armed Services. GAO-01-726T, May 15 (10 pages).
Background The Arms Export Control Act, as amended, is the primary statute governing exports of U.S. defense articles and services, including advanced weapons and technologies, to eligible countries through the government-to-government Foreign Military Sales program and sales made directly by U.S. companies. The act also includes a statement of conventional arms transfer policy, which provides that sales of defense items be consistent with U.S. national security and foreign policy interests. The Conventional Arms Transfer Policy, a Presidential Decision Directive last updated in 1995, provides policy for weapons transfers. In addition to stipulating that transfer decisions be made on a case-by-case basis, the policy has several key goals that must be considered when transferring weapons:

- Ensure U.S. military forces maintain technological advantage over their adversaries.
- Help allies and friends deter or defend against aggression, while promoting interoperability with U.S. forces when combined operations are required.
- Promote stability in regions critical to U.S. interests, while preventing the proliferation of weapons of mass destruction and their missile delivery systems.
- Promote peaceful conflict resolution and arms control, human rights, democratization, and other U.S. foreign policy objectives.
- Enhance the ability of the U.S. defense industrial base to meet U.S. defense requirements and maintain long-term military technological superiority at lower costs.

While the Conventional Arms Transfer Policy generally covers all arms transfers, NDP specifically governs the releasability of classified military information, including classified weapons and military technologies. NDP establishes a framework for policy decisions on proposed transfers to foreign recipients and is key in governing the release of an advanced weapon or technology. These decisions are made before weapons or technologies are approved for transfer. 
As implemented by DOD Directive, this policy specifies that releasability decisions must satisfy five criteria. For example, the proposed transfer must be consistent with U.S. military and security objectives and be protected by the foreign recipient in substantially the same manner as it would be by the United States. The DOD Directive also requires department officials to enter NDP case data, including releasability decisions, into a centralized database to facilitate the coordination and review of potential transfers of weapons. In November 2002, the White House announced that it had begun a comprehensive assessment of the effectiveness of U.S. defense trade policies to identify changes needed to ensure that these policies continue to support U.S. national security and foreign policy goals. It also aims to assess how U.S. technological advantage can be maintained. The assessment is expected to cover such topics as the Arms Export Control Act and the military departments’ technology release policy, as well as a determination of the effectiveness of the Defense Trade Security Initiatives. The assessment is also expected to cover issues related to the NDP process. Process to Determine the Releasability of Advanced Weapons and Technologies Is Inherently Complex The process governing the release of advanced weapons and technologies is inherently complex because it involves multiple, multilevel reviews by various U.S. government entities and individuals with varying perspectives. A country’s request for an advanced system initially is sent to the military department that is responsible for (or “owns”) the weapon or technology, which then coordinates with various functional units to arrive at a decision on whether to fulfill the request. Depending on the circumstances of the request and the outcome of this initial review, the request may be submitted to an interagency committee and other special committees for additional review. 
Further, because the reviewers represent different agencies, they bring varying perspectives to the process and must reconcile differences to reach a unanimous decision on each request. Finally, the guidance governing the process is broad and applied on a case-by-case basis, allowing decision makers to use judgment and interpretation when considering each foreign country’s request for the release of an advanced weapon or technology. Multiple Reviews Are Conducted A foreign government’s request for the transfer of an advanced weapon or technology is directed to the military department that is responsible for the particular weapon or technology. Each military department has its own review process for determining whether the weapon or technology should be released (see fig. 1). To develop a position, the military department receiving the request coordinates with and obtains input from military experts in various offices and divisions within those offices. For example, we were told that the Air Force coordinates a proposed transfer of an Air Force fighter aircraft to a foreign government with subject matter experts in functional offices, such as acquisition, plans, operations, and weapons systems division. These experts, in turn, may consult with other experts within their divisions. For instance, the weapons systems division may coordinate with its electronic warfare staff, its radar staff, or both to obtain input. Military department reviews can result in one of three outcomes: concurrence, concurrence with limitations and conditions, or nonconcurrence. If a consensus to approve a request cannot be reached, the request is elevated within the military department for a final decision. 
If the requested item (1) is not covered in NDP, (2) exceeds the NDP classification level specified for a particular foreign country, or (3) does not comply with NDP criteria, the military department may seek an exception to NDP from the National Disclosure Policy Committee, an interagency review forum. Timelines for military departments’ reviews of requests can vary. For example, Army officials stated that some cases can be handled quickly while others may require a major investment of time and resources. When NDPC receives a request for an exception, the Executive Secretariat distributes the request to committee members and seeks a unanimous vote within 10 days (see fig. 2). Each committee member coordinates a position with various experts. For example, the Joint Staff sends the request to the Combatant Command that has responsibility for the country requesting the advanced weapon or technology. The Combatant Command, in turn, coordinates the request with various units within the Command, which may include the Scientific Advisor, plans and operations division, weapons systems division, and intelligence division to provide input on such issues as the impact of the transfer on the region. These units may further coordinate with other offices within their units. A final coordinated Command position is then provided to the Joint Staff NDPC member. If any NDPC member votes not to approve a request for an exception, there is a negotiation period of no more than 20 days. During this time, the member that has requested the exception may propose or accept placing different or additional conditions on the request to gain unanimity. If agreement cannot be reached, the request is elevated to the Chairman for a decision. Members have 10 days to appeal the Chairman’s decision or it is accepted. If a committee member appeals the decision, the request is elevated to the Deputy Secretary or Secretary of Defense. 
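The exception-vote timeline described above can be modeled as a simple decision path. The sketch below is purely illustrative — the stage names and day limits come from the report, but the function itself is a hypothetical model, not an official DOD system.

```python
# Illustrative model of the NDPC exception-vote timeline: a 10-day vote,
# a negotiation period of up to 20 days if any member votes no, elevation
# to the Chairman, and a 10-day appeal window. Hypothetical sketch only.

VOTE_DAYS = 10        # members vote within 10 days
NEGOTIATE_DAYS = 20   # negotiation period if any member votes no
APPEAL_DAYS = 10      # window to appeal the Chairman's decision

def exception_path(unanimous, agreement_in_negotiation=False, appealed=False):
    """Return the sequence of stages a request for an exception passes through."""
    stages = ["member vote (%d days)" % VOTE_DAYS]
    if unanimous:
        stages.append("approved")
        return stages
    stages.append("negotiation (%d days max)" % NEGOTIATE_DAYS)
    if agreement_in_negotiation:
        stages.append("approved with conditions")
        return stages
    stages.append("elevated to Chairman")
    if appealed:
        stages.append("elevated to Deputy Secretary or Secretary of Defense")
    else:
        stages.append("Chairman's decision stands after %d days" % APPEAL_DAYS)
    return stages
```

As the report notes, the appeal branch is rare in practice: of 330 exceptions reviewed over 4 years, only 1 was appealed.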
However, of the 330 exceptions reviewed over the last 4 years, only 1 had been appealed and 2 denied. The appeal and denials covered requests for weapons and technologies and intelligence information. In addition, 5 requests for exceptions related to weapons and technologies were withdrawn before a decision was reached. According to DOD officials, most exceptions are approved with limitations and conditions. In addition to the military departments’ reviews and the NDPC exception process, special committee processes are set up to review requests for sensitive technologies that may be included in a proposed transfer. For example, if a proposed transfer includes a stealth component, the military department submits the case to the Director of Special Programs within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics who manages low observable/counter low observable (LO/CLO) issues (see fig. 3). Precedent decisions, which are contained in a database, are used to determine the releasability of the technology. Based on the level of sensitivity of the technology involved, the case may be elevated to the Tri-Service Committee for further review. Some controversial or extraordinarily complex cases may require exceptions to precedent LO/CLO policy and further elevation to the LO/CLO Executive Committee for final decision. If needed, the Tri-Service Committee can charter a “Red Team,” which, according to DOD officials, is composed of subject matter experts, including those from industry, academia, and the military department laboratories. The Red Team is convened to assess the risks associated with the proposed transfer. The Tri-Service Committee and the Executive Committee make their decision based on their assessment of the information provided by the military department that is responsible for the technology and the pros and cons presented by the Red Team, if convened. 
Varying Perspectives and Broad Guidance Governing Potential Transfers Add to the Complexity of the Review Process The multilayered reviews involved in the process for determining the releasability of an advanced weapon or technology can be particularly complex because individual entities and decision makers have varying perspectives. For example, the combatant commanders’ position may concentrate on such issues as the effects the proposed transfer could have on coalition warfare, political-military relations in a region, and their plans and operations. The State Department, concerned with U.S. foreign policy goals, tends to focus on issues such as the proposed transfer’s potential effect on the stability of the region of the requesting country. Others may deliberate the benefits and risks of the proposed transfer. In addition, we were told that resource issues, including turnover of officials involved in the releasability process, can affect the reviews. As we previously reported, military personnel rotate, on average, every 2 years. The guidance governing releasability adds further complexity to the review process because it is broad and implemented on a case-by-case basis, allowing for judgment and interpretation of the unique circumstances surrounding each transfer. Specifically, decisions on the release of advanced weapons or technologies must satisfy five broad NDP criteria that are subject to interpretation. (See app. II for a discussion of all five criteria and examples of information to be considered for each.) For example, one criterion decision makers must consider is whether the proposed transfer is consistent with U.S. military and security objectives. In examining this criterion, decision makers must address multiple factors, including how technological advantage would be protected if the weapon or technology were sold or transferred. 
According to NDPC members, the broad criteria allow for a certain level of flexibility that is needed in determining whether an advanced weapon should be released to a foreign country. Some NDPC members further pointed out that this flexibility is especially critical in the current foreign policy environment in which many different countries are working with the United States in the war on terrorism. Technological Advantage and Various Safeguards Are Considered When Determining Releasability One criterion NDPC must consider when determining the releasability of advanced weapons and technologies is that the transfer must be consistent with U.S. military and security objectives. In satisfying this criterion, military experts involved in the NDP coordination and review process told us they consider the effect the transfer could have on U.S. technological advantage, along with various safeguards—both case-specific and general—to protect this advantage. The effectiveness of individual safeguards may be limited; however, a variety of safeguards may be considered. In considering technological advantage, military experts said that they first review relevant military department documents and policies to determine if the requested weapon or technology exceeds the technology thresholds specified for the country making the request. If the requested weapon or technology exceeds this threshold, the experts may consult and coordinate with military engineers, the contractor that manufactures the weapon or technology, the system program office, and other operational experts to incorporate appropriate safeguards—typically in the form of case-specific limitations and conditions—to protect U.S. technological advantage. 
These include (1) sanitized or export variants, where the released weapon or technology has a lower operational capability or less advanced technology than what the United States has in its inventory; (2) anti-tamper measures, where features such as code encryption and protective coatings on internal weapon components are built into the weapon to prevent reverse engineering; (3) time-phased release, where the advanced weapon or technology is not released until the United States has fielded a better capability; and (4) withheld information and data, where the transfer does not include information such as software source codes. Military experts said that program offices, in some cases, conduct verification tests and the Defense Contract Management Agency works with contractors to ensure that limitations and conditions are implemented before the weapon is transferred. Military department officials told us that in addition to case-specific limitations and conditions, they also consider other general safeguards to preserve U.S. military superiority. These include (1) superior U.S. tactics and training, where military tactics for maneuvers and operations may not be shared with other nations; (2) control of system spare parts, where the United States can stop providing spare parts to former allies; and (3) countermeasure awareness, where the United States has the ability to develop measures to defeat the released system because of its knowledge of how the system functions. However, the effectiveness of certain individual safeguards used to protect technological advantage may be limited for various reasons. For example, a time-phased release may not be effective if the fielding of a more capable weapon or technology is delayed and does not coincide with the contractual delivery date of the weapon to be released to the foreign government. As we reported in January 2003, schedule delays have been pervasive in certain major acquisition programs. 
The Air Force’s F/A-22, the next generation fighter aircraft, for example, was initially expected to be fielded in September 1995. As development proceeded, the estimated fielding date was pushed out 8 years to September 2003. According to a current estimate, the F/A-22 projected fielding date has slipped another 2 years to December 2005. In addition, factors outside of U.S. control can diminish the effectiveness of certain individual safeguards. For example, the United States may stop providing spare parts to former allies, but these countries may obtain needed parts through other means, such as “cannibalizing” parts from other weapons or obtaining parts from other countries at a higher cost through the “grey market.” Some DOD officials told us that while certain individual safeguards may not be as effective as desired, they consider various safeguards for each proposed transfer to ensure technological advantage is maintained. More Complete, Current, or Available Information Would Better Support Determinations of Releasability In addition to considering technological advantage when making releasability decisions, NDPC considers other criteria such as a foreign government’s capability to protect U.S. classified military information, including weapons or technologies. Information such as Central Intelligence Agency (CIA) risk assessments and NDPC security surveys can be used to validate a country’s capability to provide such protection. DOD’s centralized database contains some of this information, as well as historical case data; however, it is not always complete, up-to-date, or easy to access. In addition, some information such as end-use monitoring reports, which may identify countries that have not protected U.S. military information, is not provided to NDPC. 
DOD’s Centralized NDP Database Was Not Complete and the Effectiveness of the Upgrade Is Unknown DOD requires that NDP exception cases be recorded in a centralized automated system to assist committee members in reviewing, coordinating, and reaching decisions on proposals to release classified military information. This centralized system contains several databases, including the National Disclosure Policy System, which tracks and assigns exception cases, records releasability decisions, and contains historical data on exceptions. Historical data are important for identifying weapons or technologies that have been released to the requesting country, as well as its neighboring countries. However, the National Disclosure Policy System that was used to make decisions during the last 4 years contained data only for decisions made during that time period. It did not contain data on exceptions that were decided in prior years. In addition, it did not allow users to conduct full text searches or to search for specific data elements, such as exceptions by country, weapon system, or date. Because of limited historical data in the National Disclosure Policy System, NDPC members told us that they could not always use it to analyze precedent cases. To obtain historical data and other information, the military departments have relied on their own separate databases containing information on their departments’ prior requests for transfers. Unlike the military departments, other NDPC members do not have their own databases. For example, the State Department has relied on manual reviews of paper files and discussions with country experts or other officials with knowledge of prior cases—assuming the files still exist and the experts and officials still work at the State Department. 
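The search capabilities the report says the legacy system lacked — full-text search and queries by country, weapon system, or date — amount to simple indexed queries over case records. The sketch below illustrates the idea with a hypothetical schema and made-up case data; the actual fields and contents of the National Disclosure Policy System are not public.

```python
import sqlite3

# Hypothetical exception-case schema illustrating the kinds of precedent
# queries the legacy National Disclosure Policy System could not run.
# Case IDs, countries, and summaries below are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE exceptions (
    case_id TEXT, country TEXT, weapon_system TEXT,
    decided TEXT, decision TEXT, summary TEXT)""")
conn.executemany(
    "INSERT INTO exceptions VALUES (?, ?, ?, ?, ?, ?)",
    [("C-001", "Country A", "Fighter radar", "1999-04-01",
      "approved with conditions", "export variant; source code withheld"),
     ("C-002", "Country B", "Fighter radar", "2001-08-15",
      "denied", "security survey outdated")])

# Precedent search: all prior decisions on the same weapon system, in order.
rows = conn.execute(
    "SELECT case_id, country, decision FROM exceptions "
    "WHERE weapon_system = ? ORDER BY decided",
    ("Fighter radar",)).fetchall()

# Full-text-style search over case summaries.
withheld = conn.execute(
    "SELECT case_id FROM exceptions WHERE summary LIKE ?",
    ("%source code%",)).fetchall()
```

With an index of this kind, a committee member or the Executive Secretariat could pull precedent cases by country, system, or date rather than relying on manual file reviews.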
Because of limitations in the National Disclosure Policy System, the NDPC Executive Secretariat has also relied on manual file reviews to identify information necessary for preparing its annual NDP report to the National Security Council. To add more capability to the National Disclosure Policy System, DOD’s Policy Automation Directorate developed an upgrade that is expected to provide historical data from 1960 to the present and enhance data query ability. According to NDPC officials, the upgrade has taken over 3 years to develop because of other priorities, technical issues, and limited input requested from users on the requirements and improvements for the upgraded database. In addition, deployment of the upgraded system was delayed several months because the upgrade had been experiencing technical problems. For example, NDP exception cases have been mislabeled as “current” when they were 2 years old, some cases were missing from the system, and certain queries did not always provide accurate results. While the upgrade has recently been deployed, the NDPC Executive Secretariat stated that it may take about 3 to 4 months to assess its effectiveness. CIA Risk Assessments and NDPC Security Surveys Are Often Outdated As part of the NDP process, the DOD Directive requires decision makers to determine whether foreign recipients of classified military information are capable of providing substantially the same degree of security protection given to it by the United States. In addition to historical precedence, decision makers can rely on CIA risk assessments and NDPC security surveys to make these determinations. The National Disclosure Policy System includes information such as security surveys, but it does not include CIA risk assessments. CIA risk assessments provide counterintelligence risk information, including the assessment of risks involved in releasing classified material to a foreign government. 
NDPC security surveys consist of reviews of the foreign government’s security laws, regulations, and procedures for protecting classified information. These reviews include making certain that recipients (1) have procedures to provide clearances to personnel, restrict access to properly cleared individuals, and report promptly and fully to the United States any known or suspected compromises and (2) agree not to reveal to a third party any U.S. classified military information without prior consent of the U.S. government. Approximately 70 percent of the countries covered by NDP had exceptions approved for advanced weapons and technologies between 1997 and 2002; our analysis shows that most of these countries have outdated or no CIA risk assessments. Specifically, 66 percent of the assessments were conducted more than 5 years ago and 12 percent have not been completed (see fig. 4). And while 22 percent of CIA risk assessments are currently up-to-date, our analysis shows that an overwhelming majority of these risk assessments will be out of date by the end of 2003. According to the NDPC Executive Secretariat, CIA officials have been unable to respond to some requests to update risk assessments because of resource reductions and other agency priorities. Responding to a CIA request, the Secretariat prioritized the top four or five assessments that were needed in 1999. However, NDPC would like to have all assessments updated every 2 years. In addition, while NDPC has set a goal to perform security surveys every 5 years, some of them are outdated while others were not conducted. Specifically, 23 percent of these surveys are 5 years or older and 7 percent have not been completed for countries that had exceptions approved for advanced weapons and technologies between 1997 and 2002 (see fig. 5). And while 70 percent of security surveys are currently up-to-date, our analysis shows that over half of these surveys will be out of date by the end of 2003. 
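The staleness analysis above reduces to bucketing each country's assessment date against the 5-year review window. A minimal sketch, using invented dates rather than actual NDPC data:

```python
from datetime import date

FIVE_YEARS_DAYS = 5 * 365  # the report's threshold for "outdated"

def bucket(assessment_date, today):
    """Classify a CIA risk assessment (or NDPC security survey) by age.
    None means no assessment was ever completed for the country."""
    if assessment_date is None:
        return "not completed"
    if (today - assessment_date).days > FIVE_YEARS_DAYS:
        return "outdated"
    return "up-to-date"

# Hypothetical assessment dates for four countries, not actual NDPC data.
today = date(2003, 6, 1)
assessments = [date(1996, 3, 1), date(2002, 9, 15), None, date(1991, 1, 10)]
counts = {}
for d in assessments:
    b = bucket(d, today)
    counts[b] = counts.get(b, 0) + 1
```

The same bucketing, run against real assessment dates, would produce the kinds of percentages reported in figures 4 and 5.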
Some NDPC security surveys have not been completed in a timely manner because of lack of foreign government cooperation and other unforeseen circumstances, such as country unrest and limited resources. According to NDPC officials, the scheduling of NDPC security surveys is a time-consuming effort performed by one staff member who has other responsibilities. In addition, security surveys are performed as a collateral duty by the Executive Secretariat. Depending on their availability, committee members also volunteer to assist the Executive Secretariat in conducting the surveys. NDPC officials also noted that in some cases, assessments and surveys may not be needed because the system or technology requested is not significant and the country makes infrequent requests. For example, a country may request one weapon requiring an exception in a 20-year time frame, negating the need for expending resources to regularly update or conduct a CIA risk assessment or security survey for that country. However, NDPC members told us that the CIA risk assessments and the NDPC security surveys provide different information that is often important for making NDPC decisions. CIA risk assessments are particularly important for exception cases because they provide an evaluation of a country’s security forces and the risk environment of a country that will potentially receive U.S. advanced weapon systems. However, because the assessments are outdated, they likely do not reflect the current conditions of the countries and therefore cannot be relied on for deciding exception cases. Further, the upgraded National Disclosure Policy System does not include CIA risk assessments—which NDPC members have said would be useful to have in the new upgraded system. According to some NDPC members, having outdated or no NDPC security surveys may hamper efforts to determine whether a country could protect advanced weapons and technologies from compromise. 
Specifically, without these surveys, NDPC members may not be able to identify weaknesses in the country’s current systems or areas that need improvement. In addition, the NDPC Executive Secretariat said, in some cases, when security surveys were not prepared, decisions were made to grant exceptions because benefits were deemed to outweigh risks. Some Intelligence and Other Information Not Currently Provided to NDP Decision Makers Once weapons have been transferred to other countries, the State Department and the intelligence community track information on their use and disposition. For example, a State Department-chaired committee collects intelligence information on the illegal transfers of weapons to third parties and transfers of non-U.S. weapons among foreign countries. However, according to some NDPC members, this information is used by the State Department primarily for nonproliferation purposes and is not provided to NDPC. This information could assist NDPC members in determining the releasability of a weapon or technology to a foreign country because it indicates how well the country has protected previously transferred advanced weapons and technologies. Further, this information can provide a more accurate assessment of the types of weapons the country receiving the illegal transfers has in its arsenal. In addition, information from DOD’s recently initiated end-use monitoring program could also be useful in making releasability decisions. The program will include monitoring of sensitive defense articles, services, and technologies that have special conditions placed on them when transferred through the Foreign Military Sales program. However, DOD has not yet determined the resources needed to conduct the end-use monitoring requirements outlined in the program’s policy. The end-use monitoring program manager is expected to provide reports on end-use violations to NDPC. 
Committee officials said that this information would be useful because it would indicate how well a country is protecting the weapons and technologies that have been transferred through the Foreign Military Sales program. Finally, the intelligence community sometimes obtains derogatory information on countries that may be of interest to NDPC in making determinations of releasability. For example, NDPC officials said that in a recent instance an intelligence agency discovered that a country requesting the release of an advanced weapon system did not have the security capabilities to protect U.S. classified military information, but did not provide this information to NDPC during the review process. These officials stated that while such cases are not typical, this type of information would have been useful in evaluating whether the country provided the same degree of protection that would be provided by the United States—a key criterion governing NDP decisions. Conclusions The U.S. government has invested hundreds of billions of dollars in the research and development of advanced weapons and technologies. To protect this investment, it is important for decision makers to be fully informed of the benefits and risks associated with the release of such weaponry. The process for determining the releasability of advanced weapons and technologies is necessarily complex because the integrity of the process relies on multiple layers of decision makers who consider numerous factors in assessing the risks involved if a weapon is compromised or ends up in unfriendly hands. To minimize the risks, it is critical that the decision makers have ready access to reliable and complete information on such factors as the recipient country’s ability to protect the advanced weapon or technology. Yet the process does not always include a systematic sharing of up-to-date information with NDPC members. 
Given the turnover of military officials involved in the NDPC process, it is especially critical that complete and readily accessible data from the National Disclosure Policy System database, up-to-date CIA assessments and NDPC security surveys, and relevant intelligence information from other agencies are available to make fully informed decisions. Recommendations for Executive Actions To ensure that NDPC members have complete and accurate information in a centralized database that facilitates coordination and decision making on the potential release of advanced weapons and technologies, we are recommending that the Secretary of Defense direct the NDPC Executive Secretariat to

- evaluate the accuracy and effectiveness of the upgraded National Disclosure Policy System;
- determine with NDPC members the additional capabilities, such as inclusion of CIA risk assessments, needed for the upgraded National Disclosure Policy System; and
- work with the DOD Policy Automation Directorate to address user comments and technical problems related to the upgraded system as they arise. 
To ensure that useful and timely information is available for making informed release decisions, we are recommending that the Secretary of Defense direct the NDPC Executive Secretariat to

- work with CIA to prioritize risk assessments that need to be updated, establish a schedule for performing these assessments, and systematically distribute the assessments to NDPC members through the automated system or other means;
- develop a plan to be used as a business case for determining the appropriate level of resources required to conduct needed security surveys or, if a survey cannot be conducted, ensure that an alternative analysis of or information on the foreign government’s security capability is made available to NDPC members; and
- identify what additional information, such as end-use monitoring reports, would be useful to NDPC members, establish a mechanism for requesting this information from appropriate sources, and systematically distribute it to NDPC members.

Agency Comments and Our Evaluation In written comments on a draft of this report, DOD agreed with a number of our findings and recommendations but did not agree with others. Specifically, DOD concurred with our recommendations to evaluate the upgraded National Disclosure Policy System, prioritize CIA risk assessments that need to be updated or conducted, and identify additional information needed to facilitate decision making. DOD did not concur with our recommendations to investigate further the capabilities of the upgraded National Disclosure Policy System or establish a firm schedule for addressing technical problems with the upgrade. DOD also did not concur with our recommendation to develop a plan for NDPC security surveys. Further, DOD stated that our depiction of the NDP process appears to mislead the reader about the information available to committee members when making decisions. 
At the time of our review, DOD had taken 3 years to develop an upgraded system primarily because of limited input requested from users, which resulted in a major redesign of the system. In addition, deployment of the upgrade was delayed a number of times because of technical problems. This system was deployed after our review was completed, and we have since modified our recommendations to reflect the current situation. In commenting on our original recommendations, DOD stated that improvements to the upgrade cannot be identified at this time. However, in our discussions with NDPC members, they have already identified capabilities they would like to have in the upgrade, such as inclusion of CIA risk assessments. Additionally, DOD stated that NDPC personnel will identify problems with the system and bring them to the attention of the software developers. We believe all users of the system, including committee members and not just NDPC personnel, should participate in the identification of technical problems to ensure that the system is meeting user needs. Further, DOD said that developers have quickly fixed minor software problems. We, therefore, are no longer recommending that a firm schedule be established but rather that technical problems be addressed as they arise. With regard to our recommendation on a plan for NDPC security surveys, DOD stated that it already develops a schedule for completion of such surveys. Implementation of the schedule is largely dependent on committee members volunteering to conduct the surveys. However, the plan we envisioned in our recommendation would include not only a schedule but also information such as the reason each security survey is needed and the level of resources necessary to schedule and conduct the survey. 
We believe the plan would provide an opportunity to develop a business case to determine if dedicated resources are needed to complete security surveys on a prioritized basis, instead of largely relying on committee volunteers. We have modified our recommendation to clarify its intent. We disagree that our report misleads the reader about the sufficiency of the information available to make decisions. Committee members we spoke with stated that information, such as more timely CIA risk assessments and security surveys, would allow them to make more informed decisions. Our recommendations are intended to enhance the information needed for the decision making process. DOD’s letter and our detailed evaluation of its comments are reprinted in appendix III. The Department of State did not provide formal written comments; however, a senior State official said that the report was informative. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days after its issuance. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the House Committees on Government Reform, on International Relations, and on Armed Services and Senate Committees on Governmental Affairs, on Foreign Relations, and on Armed Services. We will also send copies to the Secretaries of Defense and State and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or Anne-Marie Lasowski at (202) 512-4146 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. 
Appendix I: Scope and Methodology

To ascertain the process for determining the releasability of an advanced weapon or technology, we conducted a literature search, reviewed the related law and regulations, and analyzed policy, directives, and guidance governing the process. We interviewed officials in the Departments of Defense and State, the military departments, the Joint Chiefs of Staff, three Combatant Commands, and the intelligence community to understand how the interagency committee process works for reviewing exceptions to the National Disclosure Policy (NDP). We also obtained briefings on special committee processes such as the Low Observable/Counter Low Observable Executive Committee process. We analyzed military department policies and procedures for reviewing requests for the transfer of weapons and technologies and discussed the review and coordination processes with pertinent military officials. To determine if U.S. technological advantage is considered and protected in the review process, we reviewed selected weapons transfer records, including pertinent initial country requests; military department, Joint Staff, and other National Disclosure Policy Committee (NDPC) members’ input and positions on the requests; and limitations and conditions included in the final committee positions. We analyzed the types of limitations and conditions used to protect technological advantage and discussed these and their effectiveness with military department experts, as well as Joint Staff officials. Through discussions with these officials, we also identified other safeguards that committee members consider to preserve U.S. military advantage. We reviewed GAO and Department of Defense (DOD) reports related to these various safeguards and specific limitations and conditions. To identify and assess the types of information used in the process, we reviewed the NDP and DOD’s and the military departments’ releasability regulations. 
We interviewed officials in the Executive Secretariat for the NDPC, the military departments, Joint Staff, and State Department to obtain their perspectives on information required for NDP exception decisions. We also obtained a briefing and demonstration on DOD’s centralized National Disclosure Policy System database and its upgrade and discussed the capability of this system with various users. We analyzed data on Central Intelligence Agency risk assessments and NDPC security surveys performed over the last 25 years. We determined the number of assessments and surveys that were performed more than 5 years ago or were not completed for countries that had received exceptions to NDP for potential weapons transfers during 1997 through 2002. We identified additional information that may be useful to the National Disclosure Policy Committee and discussed this with committee members. We performed our review from June 2002 through May 2003 in accordance with generally accepted government auditing standards.

Appendix II: National Disclosure Policy Criteria

The National Disclosure Policy Committee (NDPC) considers five criteria when determining the releasability of classified military information, including weapons and technologies. These criteria are broad and are implemented on a case-by-case basis. Table 1 provides the criteria and the types of information that decision makers consider when assessing each criterion.

Appendix III: Comments from the Department of Defense

The following are GAO’s comments on the Department of Defense’s (DOD) letter dated June 24, 2003.

GAO Comments

1. We disagree with DOD’s statement that our depiction of the National Disclosure Policy (NDP) process appears to mislead the reader about the sufficiency of the information available to make decisions. We accurately describe the process, but found that the information supporting the decisions was not always complete, up-to-date, or easy to access. 
We further acknowledge in the report that each request is reviewed on a case-by-case basis. While DOD states that supporting information must be furnished to each member of the Committee for review, committee members we spoke with stated that information such as more timely National Disclosure Policy Committee (NDPC) security surveys and Central Intelligence Agency (CIA) risk assessments would facilitate the process, thus allowing members to make more informed decisions.

2. The National Disclosure Policy System that was used to make decisions during the last 4 years contained data only for decisions made during that time period. DOD indicated that this system was a follow-on to another database containing historical data. However, some committee members and officials told us that this older database is not easy to use and contains only summary information. In addition, one committee member does not have access to this database. The report clearly states that an upgraded system has been developed. DOD asserted that glitches and technical problems are to be expected for a system in development. We understand that such technical problems can occur with an upgrade. However, at the time of our review, the system had taken over 3 years to develop, and deployment was delayed a number of times because of technical problems and limited input requested from users. As DOD has acknowledged, the effectiveness of the upgrade is yet to be determined.

3. We believe that it is too early for DOD to assert that the upgraded system has proven to be reliable and efficient, given that it will not formally assess the effectiveness of the system until September 2003. DOD’s response acknowledges that improvements are expected but cannot be identified at this time. However, NDPC members told us about capabilities they wanted included in the upgrade, such as inclusion of CIA risk assessments. 
We believe that DOD should be proactive in seeking input from users about such additional capabilities needed for the upgraded system. We have clarified our recommendation to indicate that this information should be obtained from members after they have had an opportunity to use the system and can assess the need for improvements.

4. DOD acknowledged that as NDPC personnel identify problems with the upgraded system, they will bring these problems to the attention of the software developers. However, our recommendation was directed toward obtaining input from all NDPC members who are users of the system, not just NDPC personnel, to ensure that user needs are met. In addition, DOD said that developers have quickly fixed minor software problems. We, therefore, are no longer recommending that a firm schedule be established but rather that technical problems be addressed as they arise.

5. While DOD indicates that it already develops a schedule for completion of NDPC security surveys, our recommendation is intended to include not only a schedule but also additional information. Specifically, we believe that a plan should also identify surveys to be conducted and the reasons each survey is needed; establish time frames for completing these surveys; and estimate the resources needed to schedule and conduct these surveys. Based on this information, DOD can develop a business case to determine if dedicated resources, instead of committee volunteers, are needed to ensure that surveys are completed on a prioritized basis. We have modified our recommendation to clarify this point. Finally, DOD states that no known alternative analysis currently exists that would provide information comparable to that provided through the security surveys. However, the department has acknowledged that the CIA risk assessments may be used as the basis for decisions when a security survey cannot be conducted. This is the type of alternative analysis that we are referring to in our recommendation. 
Appendix IV: Staff Acknowledgments

Anne-Marie Lasowski, Marion Gatling, John Ting, Ella Mann, Shelby S. Oakley, Karen Sloan, Marie Ahearn, and Stan Kostyla made key contributions to this report.

Related GAO Products

Export Controls: Processes for Determining Proper Control of Defense-Related Items Need Improvement. GAO-02-996. Washington, D.C.: September 20, 2002.

Export Controls: Department of Commerce Controls over Transfers of Technology to Foreign Nationals Need Improvement. GAO-02-972. Washington, D.C.: September 6, 2002.

Export Controls: More Thorough Analysis Needed to Justify Changes in High Performance Computer Controls. GAO-02-892. Washington, D.C.: August 2, 2002.

Defense Trade: Lessons to be Learned From the Country Export Exemption. GAO-02-63. Washington, D.C.: March 29, 2002.

Export Controls: Issues to Consider in Authorizing a New Export Administration Act. GAO-02-468T. Washington, D.C.: February 28, 2002.

Export Controls: Reengineering Business Processes Can Improve Efficiency of State Department License Reviews. GAO-02-203. Washington, D.C.: December 31, 2001.

Export Controls: Clarification of Jurisdiction for Missile Technology Items Needed. GAO-02-120. Washington, D.C.: October 9, 2001.

Defense Trade: Information on U.S. Weapons Deliveries to the Middle East. GAO-01-1078. Washington, D.C.: September 21, 2001.

Export Controls: State and Commerce Department License Review Times Are Similar. GAO-01-528. Washington, D.C.: June 1, 2001.

Export Controls: Regulatory Change Needed to Comply with Missile Technology Licensing Requirements. GAO-01-530. Washington, D.C.: May 31, 2001.

Defense Trade: Analysis of Support for Recent Initiatives. GAO/NSIAD-00-191. Washington, D.C.: August 31, 2000.

Foreign Military Sales: Changes Needed to Correct Weaknesses in End-Use Monitoring Program. GAO/NSIAD-00-208. Washington, D.C.: August 24, 2000.

Defense Trade: Status of the Department of Defense’s Initiatives on Defense Cooperation. 
GAO/NSIAD-00-190R. Washington, D.C.: July 19, 2000.

Conventional Arms Transfers: U.S. Efforts to Control the Availability of Small Arms and Light Weapons. GAO/NSIAD-00-141. Washington, D.C.: July 18, 2000.

Foreign Military Sales: Efforts to Improve Administration Hampered by Insufficient Information. GAO/NSIAD-00-37. Washington, D.C.: November 22, 1999.

Foreign Military Sales: Review Process for Controlled Missile Technology Needs Improvement. GAO/NSIAD-99-231. Washington, D.C.: September 29, 1999.

Export Controls: 1998 Legislative Mandate for High Performance Computers. GAO/NSIAD-99-208. Washington, D.C.: September 24, 1999.

Export Controls: Better Interagency Coordination Needed on Satellite Exports. GAO/NSIAD-99-182. Washington, D.C.: September 17, 1999.

Defense Trade: Department of Defense Savings From Export Sales Are Difficult to Capture. GAO/NSIAD-99-191. Washington, D.C.: September 17, 1999.

Export Controls: Issues Related to Commercial Communications Satellites. T-NSIAD-98-208. Washington, D.C.: June 10, 1998.

Export Controls: Change in Export Licensing Jurisdiction for Two Sensitive Dual-Use Items. GAO/NSIAD-97-24. Washington, D.C.: January 14, 1997.

Export Controls: Sensitive Machine Tool Exports to China. GAO/NSIAD-97-4. Washington, D.C.: November 19, 1996.

Export Controls: Sale of Telecommunications Equipment to China. GAO/NSIAD-97-5. Washington, D.C.: November 13, 1996.

Export Controls: Some Controls Over Missile-Related Technology Exports to China Are Weak. GAO/NSIAD-95-82. Washington, D.C.: April 17, 1995.
The heightened visibility of advanced U.S. weapons in military conflicts has prompted foreign countries to seek to purchase such weaponry. In 2001, transfers of U.S. weapons and technologies to foreign governments totaled over $12 billion. The potential loss of U.S. technological advantage has been raised as an issue in recently approved transfers of advanced military weapons and technologies--such as military aircraft that were reported in the media to contain radar and avionics superior to those in the Department of Defense's (DOD) inventory. GAO looked at how releasability of advanced weapons is determined, how U.S. technological advantage is considered and protected, and what information is needed to make informed decisions on the potential release of advanced weapons.

Before transfers are approved, the U.S. government must first determine if classified weapons or technologies are releasable to the requesting country according to the National Disclosure Policy (NDP). The process for determining releasability is complex. A foreign government's request is first reviewed by the military department that owns the requested weapon or technology. In cases where the request exceeds NDP's approved classification level, the military department forwards the request to the National Disclosure Policy Committee for its review. For some sensitive technologies, such as stealth, the case is also forwarded to a special committee for review. The process requires coordination among different U.S. government entities--including DOD, the military departments, the State Department, and the intelligence community--which have varying perspectives. Adding to this complexity, determinations of releasability are governed by broad guidance, which allows latitude in interpreting the unique circumstances of each proposed transfer. In determining the releasability of advanced weapons and technologies, a number of factors are considered, including how U.S. 
technological advantage would be affected. To protect U.S. technological advantage, safeguards--such as lowering the capability of a transferred weapon and withholding sensitive information on how the system operates--are considered for proposed transfers. However, the effectiveness of some individual safeguards may be limited. For example, one safeguard--the ability of the United States to deny spare parts to former allies--may not be effective if these countries are able to obtain spare parts through other means. While certain individual safeguards may not be as effective as desired, DOD officials said they consider various safeguards to ensure technological advantage is maintained. Information needed to assess releasability is not always complete, up-to-date, or available. For example, DOD's centralized National Disclosure Policy System database that was used to make decisions during the last 4 years only contained information for that time period. DOD has recently deployed an upgrade to the system, but has not yet determined its effectiveness. Other information, such as Central Intelligence Agency risk assessments--which provide counterintelligence information and risks involved in releasing advanced weapons to a foreign country--is often outdated or nonexistent. Finally, some intelligence information that could have a direct bearing on whether an advanced weapon or technology should be released is prepared for other purposes and is not provided to decision makers involved in releasability determinations.
Public Television Is Structured around Local Ownership and Control of Stations, with Assistance from National-Level Organizations

Many programs shown on public television stations carry the logo of PBS, which can create the misperception that public television is a single, national-level enterprise. However, public television is not a single, national entity, nor is it identical with PBS. Public television evolved from a handful of noncommercial educational television stations in the early 1950s to 349 stations today that reach virtually every household in the United States. The stations were built and continue to operate as independent, nonprofit, community-based entities offering a mix of broadcast programming and outreach activities to their local communities. The late 1960s saw the creation of national-level organizations to support and interconnect the stations: CPB and PBS. With producers and distributors supplying a wide variety of educational, cultural, entertainment, and public affairs programs, public television today remains a locally based enterprise with a national reach that serves the particular needs and interests of the communities within the range of each station.

Public Television Stations Are Independent, Locally Based Entities That Serve Their Communities

Public television began as, and continues to be, a largely decentralized enterprise, with ownership and control of the stations maintained at the state or local level. The basis for this localism was established by FCC’s initial decision in 1952 to reserve 242 channel assignments for educational television stations in various markets across the country. These reserved channels were to serve “the educational and cultural broadcast needs of the entire community to which they are assigned.” It was left to the local communities to construct and operate television stations to use these reserved channels, since neither FCC nor the Congress provided funds for this purpose. 
The growth of public television’s station infrastructure has been the work of decades, as civic leaders, universities, and state and local governments have marshaled funding and operational support from public and private sources to establish and operate noncommercial educational television stations to serve their communities. Today, there are 349 such stations, owned and operated by 173 licensees, which reach at least 98 percent of households that have a television. Figure 1 illustrates the pace of station growth since 1953, when KUHT (Houston, Texas) became the first noncommercial educational television licensee. Most public television stations broadcast under the terms of noncommercial educational television licenses granted to them by FCC. Under FCC rules, licensees of public television stations must be one of the following (see fig. 2):

- A nonprofit educational organization, such as a university or local school board. For example, WKYU (Bowling Green, Kentucky) is a university licensee.
- A governmental entity other than a school, such as a state agency. For example, Mississippi Public Broadcasting (Jackson, Mississippi) is a state licensee and WNYE (New York) is a local licensee.
- Another type of nonprofit educational entity, such as a “community organization.” For example, North Texas Public Broadcasting, Inc., operates KERA (Dallas, Texas).

Public television stations’ most visible activity is broadcasting programs to serve the educational and cultural needs of their communities. Each station’s management decides what programs to air to meet the particular needs and tastes of their communities. In addition, stations are typically involved in a variety of nonbroadcast activities that extend their educational and cultural mission and support their local communities. As noncommercial educational licensees, the stations must support themselves financially without reliance on the airing of commercial advertising. 
Both the stations’ activities and the various funding streams that support them are discussed in more detail in later sections of this report. The stations’ overall operational expenses vary greatly depending on a station’s size and specific activities. In fiscal year 2005, these expenses ranged from $881,106 for WVUT (Vincennes, Indiana) to $174,474,123 for WGBH (Boston). Stations incur expenses associated with construction and maintenance of broadcast towers and transmission utilities associated with signal transmission; office and studio facilities; master control equipment to manage the station’s broadcast traffic (see fig. 3); production equipment, such as television cameras; program production and acquisition fees; nonbroadcast community outreach activities; and salaries for station personnel. Many stations have formed affinity groups, such as the Organization of State Broadcasting Executives, the Small Station Association, and the University Licensee Association, to deal with common concerns. Stations may also be members of the Association of Public Television Stations, a nonprofit organization established in 1980 to advocate for public television interests at the national level.

The Corporation for Public Broadcasting Provides Federal Support to Stations and Other Public Television Entities

Funding has been a continual concern for public television. As noted earlier, the channels reserved for noncommercial educational television in 1952 did not come with any federal funding to get the stations up and running. The first decade of public television saw slow growth in the number of stations. By 1960, 49 stations were broadcasting. To spur the construction of stations, the Educational Television Facilities Act of 1962 was enacted to provide the first direct federal funding for station infrastructure. The Educational Television Facilities Act authorized a $32 million, 5-year program of federal matching grants to licensees for facilities. 
The program, however, did not cover stations’ operational expenses. In 1965, the Carnegie Corporation sponsored a commission to study educational television’s financial needs. As recommended in the Carnegie Commission’s 1967 report, President Lyndon Johnson proposed, and the Congress enacted, the Public Broadcasting Act, which amended the Communications Act to reauthorize funding for facilities and equipment grants under the Educational Television Facilities Act and to authorize additional federal funding for public television through a new entity—CPB. CPB was authorized under the Public Broadcasting Act to be established as a nonprofit corporation to facilitate the growth and development of both public television and public radio, along with the use of these media for instructional, educational, and cultural programming. This private corporation structure was to afford “maximum protection from extraneous interference and control.” CPB operates under the provisions of the Communications Act, and is governed by a board of directors consisting of nine members appointed by the President and confirmed by the Senate. The Communications Act includes a congressional “Declaration of Policy” stating, among other things, that it is in the public interest to encourage the growth of public radio and television, as well as the development of programming that involves creative risks and serves the needs of unserved and underserved audiences, particularly children and minorities. The declaration also states that public telecommunications services (including public television and radio) constitute a valuable local community resource for addressing national concerns and local problems, and that it is in the interest of the federal government to ensure that all citizens have access to these services. CPB’s main responsibility is distributing congressionally appropriated funds to benefit public broadcasting (both public television and public radio). 
CPB allocates its appropriated funds (which constitute virtually its entire budget) in accordance with the provisions of the statute. The statute directs CPB to allocate 6 percent of its appropriated funds each year for “system support” (largely royalty fees, station interconnection costs, and projects and activities to enhance public broadcasting) and not more than 5 percent for CPB’s administrative expenses. Of the remaining funds (about 89 percent), 25 percent is to be allocated for public radio and 75 percent for public television. There is a further division of the funds for public television: 75 percent is to be made available for distribution to station licensees and 25 percent for national programming. Because the distribution formula is defined by statute, changes in CPB’s yearly appropriation affect both public television and public radio licensees. The principal mechanism by which CPB distributes federal funding to public television licensees is the Community Service Grant program. CPB currently administers the program by providing each licensee that operates an on-air public television station with a “basic” grant of $10,000. In addition to the basic grant, eligible licensees receive two component grants in their Community Service Grant—a “base” grant and an “incentive” grant. Base grants are determined by the statutory allocations noted above, CPB’s annual appropriation, the number of licensees eligible for grants, and a fixed grant funding level set by CPB’s board of directors. Incentive grants are designed to encourage stations to maintain and stimulate new sources of nonfederal funding support. Accordingly, the size of an incentive grant depends on the amount of revenues that an individual licensee raises from nonfederal sources. (See app. II for detailed information on the components of Community Service Grants.) 
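The statutory split described above amounts to simple percentage arithmetic. The sketch below illustrates it under stated assumptions: the $400 million appropriation is hypothetical, the function name is ours, and the 5 percent administrative share (a statutory ceiling) is treated as fully used.

```python
def cpb_allocation(appropriation):
    """Illustrative breakdown of a CPB appropriation under the statutory
    formula described above. Assumes the 5 percent administrative ceiling
    is fully used; actual amounts depend on the yearly appropriation."""
    system_support = 0.06 * appropriation   # 6% for "system support"
    administration = 0.05 * appropriation   # "not more than 5 percent"
    remaining = appropriation - system_support - administration  # about 89%
    public_radio = 0.25 * remaining         # 25% of the remainder to radio
    television = 0.75 * remaining           # 75% to public television
    return {
        "system_support": system_support,
        "administration": administration,
        "public_radio": public_radio,
        # Television funds are divided again: 75% distributed to station
        # licensees, 25% reserved for national programming.
        "tv_station_licensees": 0.75 * television,
        "tv_national_programming": 0.25 * television,
    }

# Hypothetical $400 million appropriation, for illustration only.
breakdown = cpb_allocation(400_000_000)
```

Because every category is a fixed fraction of the appropriation, any change in CPB’s yearly appropriation flows proportionally through all of them, which is why the statutory formula ties the fortunes of both public television and public radio licensees to the appropriation level.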
The Public Broadcasting Service Is a Nonprofit Organization That Provides Technical and Programming Support to Stations

One of the goals of the Public Broadcasting Act was to establish a system to interconnect the individual public television stations for the distribution of programming. The Communications Act, as amended by the Public Broadcasting Act, authorizes CPB to assist in the establishment and development of one or more interconnection systems, but in keeping with the concept of local control, CPB is expressly prohibited from owning the interconnection systems or from producing, scheduling, or disseminating programs. To fill these needs, CPB worked with the stations and other stakeholders to create PBS in 1969 as an entity for managing an interconnection system and acquiring and distributing programs. PBS was established as a private, nonprofit organization made up of licensees of noncommercial television stations. Today, nearly all public television licensees have chosen to be members paying assessments for access to PBS national programming. PBS is governed by a board with a majority of members representing stations. PBS’s activities and services include the following:

- acquiring and promoting the programs for children’s and prime-time broadcast that make up PBS’s “National Programming Service”;
- operating a satellite-based interconnection system for distributing programming to member stations for broadcast to their local communities;
- providing educational services, such as its Web-based TeacherSource; and
- assisting member stations with fund-raising and development support, as well as a variety of engineering and technology development issues, such as the digital transition.

As a result of agreements with the stations at the time of its creation, PBS was authorized to coordinate the development of a national program schedule, but not to produce broadcast programming of its own. 
Programming comes from individual public television stations, outside production companies, and independent producers. PBS selects programs to be included in its National Programming Service and distributes them to stations via the interconnection system. Stations exercise substantial discretion over programming decisions and are free to choose which of these programs to broadcast.

A Variety of Entities Produce and Provide Public Television Programs

Television production involves developing and funding an individual program or series from an initial concept to a finished product. Producers of programming for public television are both internal to public television, such as producing stations, and external, such as outside production companies and independent producers.

Producing Stations. A small number of the larger public television stations regularly produce and coproduce programs and series designed for national audiences that are included in PBS’s National Programming Service. Examples include WGBH (Boston): NOVA, Mystery!, Frontline, and Masterpiece Theatre; WNET (New York): American Masters, Great Performances, and Nature; WETA (Arlington, Virginia): The NewsHour with Jim Lehrer and Washington Week; and OPB (Portland, Oregon): History Detectives and The New Heroes. Other public television stations may, from time to time, produce a show that is chosen by PBS for national broadcast or by individual stations for local broadcast.

Local Production. Aside from broadcasting programs developed for a national audience, stations produce and broadcast their own local programs that are designed to meet the special needs and interests of their individual communities. Because program production can be expensive, the amount of a station’s local production is closely tied to its budgetary resources and underwriting support from the business community. 
Examples of such locally produced programs include WVPT’s (Harrisonburg, Virginia) farm report, Rural Virginia, and WTTW’s (Chicago) showcase of local events and people, Chicago Tonight.

Outside Producers. These producers are not public television entities but are independent production companies and individual producers who create programming that is acquired by PBS or individual stations for their broadcast schedules. One such production company is Sesame Workshop, the producer of Sesame Street. Although this long-running program has become strongly identified with public television and PBS, Sesame Workshop is a nonprofit educational organization. (See app. IV for a description of Sesame Workshop.) An example of a for-profit production company is HIT Entertainment, the producer of shows such as Barney & Friends and Bob the Builder. There are also independent producers of public television programming, who are generally not affiliated with a studio, a television station, or a major production company. Ken Burns, for example, has produced some of public television’s best-known series, such as The Civil War, Baseball, and Jazz, as well as profiles of notable Americans, such as Mark Twain and Frank Lloyd Wright. International producers are another source of programming. British productions, in particular, have been a regular feature of public television for decades.

The Independent Television Service (ITVS). In 1988, the Congress directed CPB to provide adequate funds to an independent television production service. Pursuant to this mandate, CPB provides annual funding to ITVS. ITVS funds, distributes, and promotes new programs developed by independent producers primarily for public television. ITVS looks for proposals that increase diversity on public television and present a range of subjects and viewpoints that complement and provide alternatives to existing public television offerings. One example of an ITVS program is And Thou Shalt Honor. . 
., which explores the increasing role of caregiving for elderly Americans.

Non-PBS Distributors of Programming. Although PBS is the principal distributor of children’s and prime-time shows for its member stations, other distributors also provide stations with programs. One is American Public Television (APT), which distributes shows such as Lidia’s Italian-American Kitchen and Rick Steves’ Europe. Another is the National Educational Telecommunications Association (NETA), which distributes shows such as This is America with Dennis Wholey. Stations can also acquire broadcast rights from international distributors.

Public Television Stations Provide a Variety of National and Local Programs and Services

Public television stations broadcast a mix of national and local programs. PBS prime-time and children’s programming constitute a majority of broadcast hours for most public television stations. However, stations supplement these programs with both locally produced and instructional programming to meet the needs of their communities. In addition to programming, public television stations provide a variety of nonbroadcast services. Stations provide educational services, including programs to help promote literacy and facilitate teacher training. Some stations also provide civic engagement and health outreach services. Finally, many stations provide emergency-alert services to facilitate communication among public safety officials and between public safety officials and the public.

Public Television Stations Provide a Variety of National and Local Programming

Public television stations produce, acquire, and broadcast programs from a variety of sources. According to the Communications Act, public television programming should, among other things, (1) serve educational, cultural, and instructional purposes; (2) address the needs of unserved and underserved audiences, particularly children and minorities; and (3) serve local and national interests. (See app. 
III for the demographic characteristics of public television viewers.) As we mentioned earlier, each station decides what programs to broadcast to meet the needs and tastes of its communities. Figure 4 illustrates the percentage of broadcast time filled from various program sources. On average, public television stations use PBS programs for 67 percent of all broadcast hours. To a far lesser extent, stations rely on APT and NETA for nationally distributed programming. Finally, stations dedicate about 4 percent of broadcast hours to local programs. The DTV transition expands the programming opportunities for public television stations through multicasting. For example, in the Washington, D.C., television market, WETA (channel 26) broadcasts 26.1, 26.2, 26.3, and 26.4, or four separate digital video signals in addition to its analog signal, expanding the amount of programming that WETA can broadcast. Among the stations we contacted that are broadcasting a digital signal, most are simulcasting (or repeating) their analog signal on one of these digital signals. Most stations we contacted that broadcast in digital also provide additional programming streams such as “PBS HD,” PBS’s high-definition programming service; “World,” an aggregation of PBS and other nonfiction programs; and “Create,” lifestyle and how-to programs. In addition, some stations offer instructional or regional programming. For example, KET (Lexington, Kentucky) offers two instructional channels for Kentucky schools and KAMU (College Station, Texas) offers “The Research Channel.” KTCA (St. Paul, Minnesota) offers “Minnesota Channel,” which features a variety of programming that is from or about Minnesota and its close neighbors, and WFSU (Tallahassee, Florida) provides “The Florida Channel,” a C-SPAN-type channel focusing on Florida. 
Some station officials with whom we spoke indicated that their future multicasting plans include providing a broader range of programming that is more tailored to the needs of their communities. For example, some stations indicated that they plan to offer additional programming from packaged programming streams, such as "V-me," a channel planned for launch in early 2007 that will feature Spanish-language programs on a variety of topics, or "MHz WORLDVIEW," which offers international programming. In addition, some stations plan to create their own programming streams tailored to local audiences. For example, WYES in New Orleans, Louisiana, is collaborating with local organizations to develop a tourist-oriented channel, and South Dakota Public Broadcasting (Vermillion, South Dakota) is working with local and state organizations to create a channel that would focus on instructional programming for classroom use during the day and children's programs in the evening. We identified several types, or streams, of programming broadcast by public television stations. These streams of programs include PBS non-children's, children's, local, and instructional programming. While most public television stations share some common programming, such as PBS Primetime and PBS Kids, additional programming choices, such as local and instructional programming, vary from station to station. PBS Programming. Almost all public television stations carry PBS programming. PBS programming represents about half of the broadcast hours provided by public television stations and is a cost-effective way to acquire and broadcast programming. Most PBS programming is provided via PBS's National Programming Service, which features a variety of educational and cultural topics. PBS takes a multimedia approach to expand the reach of its programming, including Web sites, teachers' guides, and lesson plans for many programs.
Figure 5 illustrates the major program themes included in PBS's National Programming Service, excluding children's programming, which is addressed in the next section. These themes include the following: Public affairs and news programs, such as The NewsHour with Jim Lehrer, long-form coverage and analysis of national news; Nightly Business Report, business and economic news; and Frontline, long-form public-affairs documentaries. Science and nature programming, such as Nova; Nature; and Scientific American Frontiers, covering new technologies and discoveries in science and medicine. Arts and drama programming, such as American Masters, specials on American cultural artists; Masterpiece Theatre, a drama series featuring works by classic and contemporary writers; and Great Performances, broadcasts of music, theater, and dance performances. History programming, such as American Experience, Ken Burns' American Stories, and History Detectives. Life, cultural, and other programming, such as Religion & Ethics Newsweekly, news on and analysis of religion and ethics; Independent Lens, documentaries and dramas featuring diverse stories; and Wide Angle, international current affairs documentaries. Children's Programming. Children's programming constitutes an important portion of broadcast time, and many station officials told us that it is one of the strengths of public television. We found that children's programming accounts for 16 percent of all program hours broadcast by public television stations. Children's programming represents over 40 percent of the weekday programming schedule for many stations. Many stations broadcast about 8 to 10 hours of children's programming per weekday, often beginning before 8:00 a.m. and ending between 5:00 p.m. and 6:00 p.m. Stations often design their weekday schedule to include programming oriented toward the prekindergarten age group during the school day and toward school-age children after school.
PBS Kids features nonviolent, curriculum-based content that promotes skills such as literacy, math, problem solving, and social skills. Several prominent examples of children's programming include Sesame Street, which encourages the development of preschool level skills, such as those needed for reading, writing, math, and science; Between the Lions, which fosters literacy skills among 4 to 7 year olds; Maya & Miguel, which encourages children to appreciate other cultures and builds understanding of English among 6 to 11 year olds; and Cyberchase, which promotes math problem-solving skills among 8 to 12 year olds. PBS and member stations leverage the concepts taught in these and other children's programs via Web resources, including lesson plans, activities, parent guides, book suggestions, and links to other resources related to the skills promoted in specific programs. Local Programming. Most public television stations produce and broadcast some local programming in order to meet specific needs of their audiences. On average, local programming represents about 4 percent of total broadcast hours for public television stations. Some stations we contacted indicated that they would like to provide more local programming, but that local production is expensive. Although local programming does not constitute a large percentage of the programming provided by public television, some stations we contacted emphasized the unique nature of public television's local programming or the importance of local programming to their communities. Some stations mentioned that they are the only source in their community of local programming unrelated to news or sports. Stations we contacted cited many examples of local programming, such as the following: Many stations provide programming on local and state history and public affairs.
For example, KET (Lexington, Kentucky) covers the Kentucky state legislature live; many stations provide state election coverage; and, of the stations we contacted, the majority provide at least one public affairs program, such as KAID’s (Boise, Idaho) Dialogue. Some stations produce local programming to enhance access to arts and cultural amenities. WBRA (Roanoke, Virginia) produces a virtual excursion show introducing viewers to local sites, a weekly open microphone show featuring soloists and small groups, and a weekly concert series showcasing old-time and bluegrass music from the region. Some stations broadcast local events and residents that are not covered by national networks. For example, KNCT (Killeen, Texas) broadcast the arrivals of and ceremonies for the 1st Cavalry Division and the 4th Infantry Division after their return from Iraq; KOOD (Bunker Hill, Kansas) broadcasts some local high school sporting events; and KNME (Albuquerque, New Mexico) produces documentaries on the art, culture, history, and cultural diversity of New Mexico. Some stations also provide programming that gives underserved viewers access to services and information they might otherwise have difficulty obtaining. For example, several stations broadcast call-in shows, such as Doctors on Call, Lawyers on the Line, and Homework Hotline, during which viewers can ask questions of health-care professionals, lawyers, and teachers, respectively. Other similar programs include Healthy Minds, a WLIW (Plainview, New York) program about mental illness; specials on methamphetamine, such as Meth in Wisconsin from WPTV (Madison, Wisconsin); and topics such as affordable housing on KCWC’s (Riverton, Wyoming) Wyoming Perspectives. Instructional Programming. Many public television stations provide formal instructional programming to meet local educational needs. Instructional programming constitutes about 4 percent of total broadcast hours. 
The amount and type of instructional programming offered varies from station to station. KET (Lexington, Kentucky) provides instructional programming for students in grades K through 12 and adults. For grades K through 12, KET produces AP courses, virtual field trips, and a news program; for adults, KET provides programming for adult basic education, GED preparation, workplace essential skills, and childcare certification training. WETP (Knoxville, Tennessee) offers 6 hours of instructional programming per day, 175 days per year, for grades K through 12 and teachers. In addition, WETP provides "in-service" professional growth programming for teachers and administrators, including programs such as Reading Rockets: Launching Young Readers; Managing Your Classroom: Supporting Students at Risk; and Principals & Leaders: Set High Expectations & Standards. WYCC (Chicago), licensed to the City Colleges of Chicago (CCC), offers 5 ½ hours of instructional programming each weekday. In 2 ½ years, students can fulfill virtually all requirements for an associate's degree from CCC by participating in WYCC's telecourses. WCTE (Cookeville, Tennessee) produces over 200 hours of local programming annually, including Upper Cumberland Business Profiles, featuring area business leaders; Kaleidoscope, a magazine-type program highlighting community events; Tennessee Sportsman, featuring hunting and fishing; Heart and Soul, a concert series; Road Trips, highlighting regional travel destinations; a book review series; coverage of the Smithville Fiddlers' Jamboree, the Putnam County Fair, and the Cookeville Christmas Parade; local high school and college sports; High School Academic Bowl, covering 30 competitions annually; House Call, a medical call-in program; and an education series on sexual abuse. Public television stations provide a variety of nonbroadcast services to meet local and national needs.
As set forth in the Communications Act, public television stations constitute local community resources for using electronic media to address national concerns and solve local problems through outreach and other community programs. Some public television services are federally funded and centrally facilitated, but involve some local implementation. These services include Ready To Learn, TeacherLine, and the Digital Emergency Alert System, which address both national and local needs, such as literacy, teacher training, and emergency response. Other services are developed and administered at the local level to meet needs of the station's communities. We identified four primary types of nonbroadcast services: educational, civic engagement, health, and emergency services. Educational Services. Educational services extend the value of public television's electronic resources, especially broadcast programming and Web resources, to help fulfill a variety of local and national educational needs. These services are rooted in the historical education mission of public television and are the most common type of services provided by the stations we interviewed. For the most part, public television's educational services are designed to align with local and national standards. KLVX (Las Vegas, Nevada) provides instructional programming and services for Clark County via multiple media and multiple delivery mechanisms, including a media center with over 15,000 items available for daily delivery to local schools; a free-loan, captioned media library available to deaf or hard-of-hearing students and adults, their parents, and their caregivers; free video streaming of instructional programming aligned with state standards; and a "Virtual High School" with courses for students throughout the state, accessible via broadcast, DVD, video, and the Internet. Public television's centrally facilitated educational services help prepare children for school, train teachers, and provide teaching resources; these services often rely on federal funding and involve some local implementation.
The Department of Education's Ready To Learn initiative was a joint effort of PBS and 149 public television licensees and included educational programming, workshops, books and magazines, Web sites, and classroom resources. Until recently, almost all public television licensees provided local outreach in association with Ready To Learn, including workshops for over 140,000 caregivers and teachers annually, focusing on linking concepts presented in programs to skill-building activities. Many aspects of the program are being continued or modified under the new Ready To Learn and Ready to Lead in Literacy initiatives, with less emphasis on local-level workshops and greater emphasis on educational programming and more geographically limited, need-targeted outreach. Another initiative featuring PBS and station involvement is TeacherLine, which is funded through the Department of Education's Ready to Teach program. TeacherLine provides pedagogical and content training for teachers, consistent with national and state standards. Over 22,000 teachers in all 50 states and the District of Columbia enrolled in TeacherLine courses from 2000 through 2005. While PBS provides access to the online courses, several stations customize or supplement course modules for teachers in their region, and many higher education institutions provide graduate credit for TeacherLine courses. In addition to these initiatives, PBS offers TeacherSource, a Web site that provides at least 3,000 free lesson plans, designed to be consistent with individual state education standards, for teachers of grades pre-K through 12. At the local level, stations initiate a variety of other educational services. Station officials whom we spoke with cited many examples of educational services, including the following: Stations increasingly offer instructional programming and other instructional resources via multiple platforms, especially the Internet.
Some station officials said that they offer instructional resources, such as advanced placement courses, in order to provide underserved regions with more equitable access to instructional resources. Many stations conduct the "Reading Rainbow Writers and Illustrators Contest" in their viewing areas. In addition, some stations organize and broadcast regional high school knowledge bowls. KLCS (Los Angeles) organizes an awards program that honors teachers and students who create videos that advance the California State Content Standards. Many stations, especially university licensees, provide internship and employment opportunities for students. Civic Engagement and Community Building. Many of public television's nonbroadcast services foster civic engagement and community building. For example, stations we contacted mentioned the following services: SDPB (Vermillion, South Dakota) provides video streams of all state legislature committee meetings and audio streams of Public Utilities Commission meetings on its Web site. KYUK (Bethel, Alaska) documents the history, culture, and lifestyle of the Yup'ik people of Western Alaska. The station is transferring its large archive of documentaries and raw footage—including oral histories, traditional dances and ceremonies, meetings, and other materials—to digital media in order to preserve these resources and make them available. WKYU (Bowling Green, Kentucky) organized a "Living Will" symposium that attracted 500 people who created living wills with the assistance of an attorney at no charge. Public Eye News, a WNMU (Marquette, Michigan) news program, provides news for the Upper Great Lakes Region and training opportunities for students. Northern Michigan University students are responsible for all aspects of the newscast. The program features local news stories, sports highlights, weather reports, and national news. Several other public television stations provide student-run news programs. Health Outreach.
Many stations provide educational programming on health issues combined with outreach programs to expand the reach of the messages. Two examples follow: WTTW (Chicago) is one of many public television stations that offered outreach in association with the broadcast of A Lion in the House, a documentary addressing childhood cancer. WTTW partnered with the Chicago Pediatric Cancer Care Coalition to offer referral support and answer inquiries about childhood cancer services. Numerous public television stations provided outreach in association with the program The Forgetting, a documentary about Alzheimer’s disease. WNET (New York) provided a range of services, including screenings and panel discussions for the general public and for community service and health-care professionals, Web materials, and print materials for outreach events and partner organizations. Emergency Services. Many public television stations have integrated or will soon integrate emergency services into the public services they provide. At least 26 public television stations in 17 states recently participated in the pilot of a Digital Emergency Alert System (DEAS) that is being created by the Department of Homeland Security in coordination with other federal departments and agencies via a cooperative agreement with the Association of Public Television Stations. The new system will improve the ability of emergency managers and public safety officials to rapidly broadcast emergency information to first responders and the general public. The technology will enable officials to pinpoint to whom the information is sent and can be relayed over a variety of media, such as television, radio, cellular telephones, computers, and personal data accessories. 
The next phase of the DEAS program includes the extension of the system so that all public television stations can transmit information to local first responders and the public, potentially enabling near universal service throughout the United States once the program is complete. Many stations have developed other emergency services, often in partnership with local organizations, such as the following: To improve community preparedness in the case of flooding of the Red River, Prairie Public Television (Fargo, North Dakota) hosts a “Riverwatch” Web site featuring information provided by government agencies and commercial entities. Some stations, such as MAINE (Lewiston, Maine), provide AMBER Alerts, emergency messages broadcast when a law enforcement agency determines that a child has been abducted and is in imminent danger. Individuals, Businesses, and the Federal and State Governments Provide the Majority of Funds for Public Television Public television receives funding from many sources, the most important of which are individuals, businesses, and the federal and state governments. In 2005, public television licensees reported annual revenues of $1.8 billion, of which 15 percent came from federal sources. However, the relative sources of funds differ significantly from licensee to licensee; licensees with less operating revenue (small licensees) and licensees that provide service in small television markets receive a larger percentage of revenues from federal sources than do licensees with more operating revenue (large licensees) and licensees that provide service in large television markets. In addition to basic support provided through CPB, the Congress provides funds for public television to help licensees complete the DTV transition. Licensees consider federal funding important for their operations, and many suggested that its elimination would lead to staff reductions and less local programming and services. 
Finally, federal funds help support PBS and the production of national programming. Public Television Licensees Receive Funding from Many Sources; However, Small Licensees and Licensees in Small Television Markets Exhibit Greater Dependence on Federal Funds Public television licensees receive the majority of their revenues from four sources: individuals, businesses, and the federal and state governments. In 2005, the 177 public television licensees reported revenues of $1.8 billion. Of the $1.8 billion, contributions from individuals account for 25 percent, and business support, state support, and federal support each account for 15 percent. The remaining sources make up about 30 percent of licensees’ total revenues. Figure 6 illustrates the sources of revenues for public television licensees in 2005. The sources of revenues vary according to the type of licensee— community, local, state, or university. Table 1 lists the sources of revenues for different types of licensees in 2005. Community licensees received a significant percentage of revenues from individuals, businesses, and the federal government through CPB. Local licensees received a large percentage of revenues from local governments, state licensees received a large percentage of revenues from state governments, and university licensees received a large percentage of revenues from universities. The percentage of revenues from the federal government varied modestly across the types of licensees, with the state licensees receiving the lowest percentage. The percentage of revenue received from the federal government through CPB decreases significantly as the size of the licensee increases; in particular, CPB distributes funds through a statutory formula designed to consider the financial needs and requirements of stations and to maintain and stimulate new sources of nonfederal support. Table 2 provides data for 2005 on the sources of revenues for licensees of different sizes. 
For the smallest licensees, those with revenues of less than $3.0 million, federal support through CPB represented 33 percent of the average licensee's revenues. In fact, federal support provides over 40 percent of the revenues for 9 licensees. Alternatively, among the largest licensees, those with revenues exceeding $10.7 million, federal support made up about 10 percent of total revenues for the average licensee. Large licensees received a greater percentage of revenues from individuals, businesses, state governments, and foundations than did small licensees. A similar trend appears when we consider the size of the television market where the licensee operates. In larger television markets, licensees have access to larger numbers of individual donors, businesses, and foundations and thus would generally be less reliant on federal support. Table 3 provides data for 2005 on the sources of revenues for licensees in television markets of different sizes. On average, among licensees in the smallest television markets, those with revenues of less than $46.2 million, federal support through CPB represented about 27 percent of revenues. Conversely, for licensees in the largest television markets, those with revenues exceeding $313.0 million, federal support made up an average of 12 percent of revenues. As anticipated, licensees in large television markets received a larger proportion of revenues from individuals, businesses, and foundations than did licensees in the smallest television markets. Most Licensees Received Federal Support for the DTV Transition For commercial and noncommercial television stations, the DTV transition requires a substantial capital investment. In 2002, we reported that stations would incur capital costs of approximately $3.0 million each for the DTV transition. Stations must overhaul or replace transmitting equipment, perhaps including the antenna, as well as studio equipment.
In addition, during the DTV transition, stations must operate both an analog and digital transmitter, which increases the stations’ operating expenses. To help public television licensees complete the DTV transition, since 1999, the Congress has appropriated nearly $400 million for CPB, NTIA, and the Rural Utilities Service (RUS) of the Department of Agriculture. CPB operates the Digital Distribution Fund, which provides grants for digital transmission equipment necessary to comply with FCC’s regulations. In 2006, CPB offered grants of $500,000 for each transmitter, and stations were required to match 25 percent of the cost of the project. NTIA, through its Public Telecommunications Facilities Program, also provides grants to licensees. In 2006, NTIA required stations to match 25 percent to 50 percent of the cost of the funded project. Finally, RUS operates the Public Television Station Digital Transition Grant Program. This program provides support for rural licensees and does not require matching funds because of the financial burden of the DTV transition for rural licensees. NTIA officials said that the agency coordinates with officials at CPB and RUS to prevent duplication; however, RUS officials noted that a licensee could receive support from more than one agency, as long as the support funded different equipment. Licensees with whom we spoke reported receiving support for the DTV transition from a variety of sources. Forty-two of 54 licensees reported receiving some form of support from the federal government. Among the three grant programs, licensees most frequently cited CPB’s Digital Distribution Fund and NTIA’s Public Telecommunications Facilities Program. In addition to federal support, many licensees reported receiving support from a state government. Licensees also reported receiving funding from universities, licensee capital campaigns, licensee operating funds, and gifts. 
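The matching requirements above determine how a project's cost is split between a station and a granting agency. The sketch below is a minimal illustration only: it assumes the match is a flat percentage of total project cost and that the grant covers the remainder up to a per-transmitter cap. The helper function and the $600,000 project figure are hypothetical, not the agencies' actual award formulas.

```python
def grant_split(project_cost, match_rate, grant_cap):
    """Split a project cost between a station's required match and a grant.

    Illustrative assumption: the match is a flat percentage of total project
    cost, and the grant covers the remainder up to a fixed cap. Any amount
    beyond the cap falls back to the station as a shortfall.
    """
    station_match = project_cost * match_rate
    grant = min(project_cost - station_match, grant_cap)
    shortfall = project_cost - station_match - grant
    return station_match, grant, shortfall

# A hypothetical $600,000 transmitter project under CPB's 2006 terms
# (25 percent match, $500,000 per-transmitter grant ceiling):
match, grant, gap = grant_split(600_000, 0.25, 500_000)
print(match, grant, gap)  # 150000.0 450000.0 0.0
```

Under these assumptions the cap only binds on larger projects; a $1,000,000 project would leave a $250,000 shortfall for the station to fund from other sources.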
Public Television Licensees Consider Federal Funding to Be Important Twenty-three of the 54 licensees with whom we spoke said that federal funding was important for their operations. In particular, federal funding has several positive attributes for licensees. First, licensees have generally broad discretion with federal funds and therefore can use these funds for general station operations. Funding from other sources, especially foundations, is generally restricted to specific projects or programs, potentially limiting the licensee’s ability to respond to changing needs. Second, licensees incur relatively minimal costs to secure federal funding, compared with funding from other sources. Finally, some licensees noted, federal funds are a vehicle to attract other funds. For example, WVPB (Charleston, West Virginia) said that the state government considers federal funding a source of matching support, and the state government is willing to appropriate state funds because the licensee will also receive federal funding. If federal funding were reduced or eliminated, some licensees would need to reduce their level of service. In a report prepared for CPB, McKinsey and Company projected that in response to a 15-percent reduction in total revenues, licensees would need to reduce staff by 26 percent and reduce local programming by 40 percent. Twelve licensees with whom we spoke noted that another source of funds does not exist that could fill the void that would be left if federal funding were reduced or eliminated. Eleven licensees said that the station would discontinue operations if federal funding were eliminated; these were generally smaller licensees in smaller television markets. However, a larger number (30) said that they would need to reduce staff, local programming, or services. Some licensees noted that they must continue to purchase PBS programming, because this programming attracts viewers and therefore membership and underwriting support. 
Thus, some licensees would likely reduce local programming, which is more costly to produce. Furthermore, three licensees said that they would need to reduce or eliminate television service to more rural areas of their service territory. Several licensees with whom we spoke had incurred funding reductions in the past and responded with reductions in staff, local programming, and services. For example, according to three licensees, the state of Tennessee reduced state funding for public television licensees by 9 percent. In response, these licensees undertook the following actions: WETP (Knoxville, Tennessee) eliminated instructional programming, delayed sign-on until 3:00 p.m., and reduced staff benefits. WCTE (Cookeville, Tennessee) reduced staff, staff benefits, and local programming. WNPT (Nashville, Tennessee) reduced staff and local programming. In addition, WNMU (Marquette, Michigan) lost 40 percent of its state support and eliminated 12 staff positions from a total of 36. KLCS (Los Angeles) lost $1.3 million in support from the Los Angeles Unified School District and eliminated 33 staff positions from a total of 76. Federal Funds Also Support PBS Nationwide Programming The three largest revenue sources for PBS are underwriting, member station assessments, and CPB and other federal sources. In 2005, PBS's revenues were $532 million. Of this total, $192 million, or 36 percent, came from underwriting. PBS also received $163 million from member station assessments and $70 million from federal sources, such as funds from CPB. In fiscal years 2000 through 2005, PBS's annual revenues ranged between $489 million and $542 million. During this period, member station assessments typically increased on a yearly basis, while funding from the remaining sources varied from year to year. Among licensees with whom we spoke, eight indicated that a reduction or elimination of federal funding could negatively affect PBS programming.
In 2004, PBS formed the PBS Foundation to increase the long-term stability of the organization. According to PBS staff, the foundation is a 509(a)(3) supporting organization and operates exclusively for the benefit of PBS. The foundation conducts fund-raising activities to support PBS’s needs and PBS controls the foundation through various bylaw requirements. According to PBS staff, the foundation has raised over $17 million, including $2.4 million from the Ford Foundation for the foundation’s operating expenses. Public Television Stations Are Pursuing a Variety of Nonfederal Funding Sources, but Substantial Growth to Offset a Reduction or Elimination of Federal Support Appears Unlikely While contributions from individual members represent a significant source of revenue, this source is not expected to grow significantly in the future. Alternatively, public television officials consider major giving a source of long-term revenue growth, and CPB has initiated a major giving initiative to cultivate major donations. Foundations provide funding to public television, but generally only support capital and other projects, and not station operations. The trend in underwriting support has been mixed, with some licensees experiencing increases and others decreases. While some licensees favor an easing of the statutory and regulatory restrictions on underwriting activities, many licensees do not share this sentiment. Finally, licensees generally receive minimal revenues from ancillary and miscellaneous activities. Basic Membership Revenue Is Not Expected to Grow Significantly in the Future Basic membership, or gifts from individuals of less than $1,000, has been a mainstay of public television for many years. Among the 54 licensees with whom we spoke, several mentioned that their stations began on-air membership campaigns during the late 1960s and early 1970s to increase revenue. 
Almost all licensees receive contributions from individuals, and with the exception of several local licensees, the licensees with whom we spoke conduct membership campaigns. While basic membership serves as an important source of revenue for licensees, recent trends indicate that this source of revenue is decreasing. Both the number of members and the average gift size determine the amount of basic membership revenue that a station receives. According to CPB, the number of public television members has decreased from 4.7 million in 1999 to 3.6 million in 2005. At the same time, the average annual gift has increased from $79 to $97. As a result, annual basic membership revenue has decreased about $24 million, or 6 percent, from $373 million in 1999 to $349 million. Several factors appear to be contributing to the decrease in the number of members and basic membership revenue. According to a study prepared by McKinsey and Company for CPB, increased competition for gifts from a growing number of nonprofit entities, more viewer choices, and less familiarity with public television are expected to contribute to declines in the number of members and basic membership revenue. Additionally, the free-rider problem hinders the ability of licensees to acquire members. The free-rider problem refers to the tendency of individuals not to contribute to a service that they can receive free of charge; in the case of broadcast television, individuals can view the station’s signal without contributing. Furthermore, officials at licensees with whom we spoke said that increasing the number of members and basic membership revenue is difficult for the following reasons: Competition for charitable gifts has increased because more nonprofit entities are seeking gifts. Viewers have many more choices since the advent of cable and satellite television and as a result are less familiar with public television than in the past. 
In some areas, a poor local economy limits the number of viewers that are able to make charitable gifts. (See app. III for the demographic characteristics of public television members.) Several licensees are adopting alternative approaches to increase the number of members and basic membership revenues. Traditionally, licensees purchase a package of programs from PBS—known as the Station Independence Program (SIP)—that the stations broadcast during their on-air membership campaigns. However, several licensees said they do not use the traditional SIP programming. Rather, these officials stated that airing local programming or the programming viewers most enjoy, rather than the SIP programming package, could attract more viewers during on-air membership campaigns and thereby increase the number of members and basic membership revenues. Some station officials added that discovering which programming viewers most enjoy, and airing it, could be important to increasing the number of members and basic membership revenues. In addition, some officials told us that involvement in community activities is more important to attracting members and gifts than are on-air membership campaigns. Major Giving Is Seen as Having Potential for Long-Term Growth To improve the financial sustainability of public television, in 2003, CPB launched a major giving initiative to help stations increase gifts of $1,000 or more. According to CPB officials, public television lags behind most other nonprofit organizations in designing and implementing campaigns to garner major gifts. For example, in 2005, 13 percent of revenues from members came from gifts of $1,000 or more. In contrast, CPB noted that other nonprofit organizations receive a much larger share of revenue from major gifts. Since acquiring major gifts requires an approach much different from traditional membership campaigns, CPB implemented a capacity building program for station staff. 
Acquiring major gifts requires one-on-one contact with current and potential donors, instead of the retail-oriented effort associated with on-air membership campaigns. The major giving initiative also requires that station management and staff alter their traditional roles. For example, the station manager must focus not just internally on station operations, but also externally on fund-raising. The capacity building program consists of four elements: team leadership meetings attended by the station's chief executive officer, board members, and chief development officer to involve top station management; curricula delivered via Web lectures once a month for 6 months, with follow-up teleconferences among various station groups to share experiences; on-site consulting to help the stations implement their specific plans; and a set of Web-based tools, including (1) information about best practices and budgeting, (2) on-air spots for station use, and (3) videos to show at donor gatherings. According to CPB, 110 of 177 licensees are participating in the major giving initiative. Among the 54 licensees with whom we spoke, most are participating, or plan to participate, in the initiative. Several licensees had efforts under way to attract major gifts prior to CPB's initiative; some of these licensees have joined CPB's initiative, while others have chosen to continue with their own efforts. Licensees with whom we spoke that have chosen not to participate in the initiative cited several reasons for their decision, including a small number of individuals in their area who have the financial resources to make a major gift and a lack of staff and budgetary resources to undertake the initiative. According to CPB, early results from the major giving initiative appear encouraging. In 2004, licensees received $49.3 million in major giving revenue. In 2005, the first full year of the major giving initiative, revenue from this source increased by about 3 percent to $50.8 million. 
Furthermore, among the first group of licensees participating in the major giving initiative, major giving revenues increased from $16.2 million in 2004 to $19.2 million in 2005, or 18 percent in 1 year. CPB also cited several examples of major gifts: KCET (Los Angeles) and WNPT (Nashville, Tennessee) both received $1,000,000 gifts, while KWCM (Appleton, Minnesota) received a $100,000 estate-related gift. Among the 54 licensees with whom we spoke, several also mentioned early successes. For example, an official at KNME (Albuquerque, New Mexico) told us that the station increased major giving revenue from $35,000 in 2004 to $1.1 million in 2005. While the major giving initiative has generated some early successes, CPB and licensees noted that realizing the benefits of the initiative requires a long-term effort. Of the 54 licensees we spoke with, 16 said that major giving is a long-term effort. CPB noted that acquiring major gifts requires a lengthy period of courtship and confidence building. As a result, CPB said, it will take several years for the major giving initiative to mature, and CPB will not have definitive quantitative measures until 2009. Furthermore, CPB does not anticipate that increases in major giving revenues will offset decreases in basic membership revenues for several years. Thus, major giving appears to hold promise, but at this early stage, it is difficult to project how much funding the initiative will generate and whether it will benefit all stations, especially those in rural and low-income areas. Foundations Typically Provide Support for Projects and Capital, but Not Station Operations Most licensees receive support from foundations, but the amount varies significantly among licensees. According to our analysis of SABS data, 158 of 177 licensees received foundation revenue in 2005. 
However, the largest 25 percent of licensees received an average of $2.1 million from foundations while the remaining 75 percent of licensees received an average of $153,520. Officials from the Ford Foundation noted that stations in large cities can more easily attract foundation support than stations in smaller cities and rural areas. In general, foundations provide support for specific projects, such as capital expenditures and programming, and not for general station operations. Among licensees with whom we spoke, many said that foundations provide support for specific projects. For example, officials at Prairie Public Broadcasting (Fargo, North Dakota) noted that the station received foundation support to implement the major giving initiative and the DTV transition. Again, officials from the Ford Foundation said that few foundations provide general support for public television, but that some foundations support particular programs or projects. From 1999 through 2004, CPB data show that foundation revenues increased 19 percent, from $97 million to $115 million; however, in 2005, foundation revenues remained at $115 million. Among the licensees we contacted, many said that they do not expect a significant increase in support from foundations. Some licensees do not receive or seek foundation support because there are no, or a very limited number of, foundations in their local area. Other licensees said that foundation support is increasingly difficult to obtain because of greater competition from other nonprofit organizations for foundation support. These officials added that many foundations seek out projects that have a direct and measurable impact on a population and that it is difficult to measure the impact of public television programming. 
Underwriting Revenues Are Generally Flat, and Licensees Express Mixed Opinions about Greater Commercialization of Underwriting The Communications Act and FCC regulations establish parameters for underwriting acknowledgments. Unlike commercial television stations, public television stations are prohibited from airing advertisements. However, public television stations are permitted to acknowledge station support and, without interrupting regular programming, may acknowledge underwriters on air. Such acknowledgments may not promote the underwriters' products, services, or businesses, and may not contain comparative or qualitative descriptions, price information, calls to action, or inducements. Within these statutory and regulatory parameters, individual licensees develop and implement underwriting policies for their stations. For example, in 2004, we reported that equal numbers of licensees aired, and did not air or plan to air, 30-second underwriting acknowledgments. In addition, PBS established guidelines that govern how underwriters of PBS-distributed programs may be identified on air. PBS guidelines specify that the maximum duration for all underwriter acknowledgments for PBS-distributed programs may not exceed 60 seconds, and generally the maximum duration for a single underwriter may not exceed 15 seconds. Virtually all public television licensees receive underwriting support, although the amount varies greatly among licensees. According to our analysis of SABS data, 173 of 177 licensees received underwriting support in 2005. Among licensees with whom we spoke, 11 said that local businesses, such as banks, legal offices, medical facilities, and retail businesses, provided most of their underwriting support. For licensees receiving underwriting support, the average amount of underwriting revenue was $1.6 million in 2005. However, licensees' experiences differ dramatically. 
The largest 25 percent of licensees, in terms of total revenues, received on average $4.6 million of underwriting support. Conversely, the remaining 75 percent of licensees received just $544,245 on average. Licensees with whom we spoke experienced mixed results with underwriting. In a 2003 report for CPB, McKinsey and Company suggested that underwriting represented a potential source of revenue growth. Consistent with this assessment, 11 licensees said that underwriting revenues have increased. Among factors contributing to the increases in underwriting revenues, licensees cited hiring new staff, implementing a packaged strategy through which companies sponsor a single program over an extended period of time, and adding local sports to the programming schedule. However, eight licensees said that underwriting revenues have decreased. These licensees cited increased competition for corporate dollars, a lack of staff or turnover among underwriting staff, and poor economic conditions in the local area as contributing to the decrease in their underwriting revenues. Among the 54 licensees with whom we spoke, some noted that corporate consolidation and an increased advertising focus among corporations have negatively affected underwriting. Twelve licensees said that corporate consolidation hinders underwriting activities. For example, some licensees mentioned that corporate offices and facilities have moved from their service area, thereby eliminating a source of underwriting support. Similarly, some licensees said that distant corporate headquarters limit the discretion of local branch operations in terms of underwriting and other charitable contributions. Twenty-two licensees said that corporations increasingly adopt an advertising approach to underwriting. Some licensees note that corporate marketing departments and national advertising agencies increasingly handle underwriting activities, rather than corporate philanthropy departments. 
With the greater emphasis on advertising, corporations and advertising agencies seek out programming with high ratings and targeted demographics. In response to the changing environment, some licensees favor less restrictive underwriting regulations and policies. In particular, 11 licensees favor greater flexibility for on-air underwriting acknowledgments, including perhaps permitting calls to action and price quotes. The licensees favoring greater underwriting flexibility serve large television markets or an entire state. These licensees said that greater underwriting flexibility would enable the licensee to increase underwriting revenues; would allow corporations to use the same advertisement on commercial and public television, thereby enabling them to avoid the cost of developing multiple advertisements; would not represent a significant change, since underwriting acknowledgments and pledge drives have already become commercialized; and would not threaten the licensee’s mission, because licensees operate as nonprofit entities and therefore would not focus on low-quality, high- ratings programming. In the early 1980s, public television conducted a limited experiment with greater underwriting flexibility. In 1981, the Congress amended the Communications Act and established the Temporary Commission on Alternative Financing for Public Telecommunications to conduct demonstrations of limited advertising. The amendments authorized 10 public television stations to experiment with paid commercials for 18 months. Following the experiment, the commission concluded that potential revenues from advertising were limited in scope and that the avoidance of significant risks to public broadcasting could not be ensured. However, one licensee with whom we spoke that participated in the experiment said that all sources of its revenues increased, including both membership and underwriting revenues. Among licensees with whom we spoke, 19 oppose greater flexibility. 
These licensees said that greater underwriting flexibility would not generate increased underwriting revenues, since corporations and advertisers desire programming with high ratings and a targeted demographic, which some licensees said public television cannot deliver; would upset viewers and contribute to a decline in membership support; could threaten a licensee's ability to receive financial support; and would be inconsistent with the mission of public television and could alter programming decisions. Ancillary Revenues Are a Minor Source of Funding for Many Licensees Ancillary and miscellaneous revenues represent another nonfederal funding source. According to our analysis of SABS data, 151 of 177 licensees received ancillary and other miscellaneous revenue in 2005. Although many licensees receive ancillary and miscellaneous revenues, these are generally not significant sources of funding. On average, these sources contributed $691,648 per licensee in 2005. However, as with underwriting, the amount of funding from these sources varies significantly across licensees. Whereas the largest 25 percent of licensees receive approximately $2.3 million on average in annual ancillary and miscellaneous revenue, the remaining 75 percent of licensees receive $141,936 on average. Among the 54 licensees with whom we spoke, 30 mentioned receiving ancillary and other miscellaneous revenues. Sixteen of these licensees said ancillary and miscellaneous revenues constituted a relatively minor source of revenue. Licensees cited many examples of ancillary and miscellaneous activities, including the following. Tower leasing was the most frequently mentioned source of ancillary revenue. A television station installs its antenna on a tower to facilitate the distribution of the station's video signal. 
If the station owns the tower, the station can lease space to other companies, such as other television stations, cellular telephone companies, and other organizations that use wireless technologies. These leases represent a source of ancillary revenue; however, in one instance, the licensee leases tower space to state government agencies at below-market rates, thereby lowering the possible tower leasing revenue. Licensees sell videos of various programs and events. For example, KLVX in Las Vegas sells Spanish language and parenting skills videos. WKYU (Bowling Green, Kentucky), licensed to Western Kentucky University, sells videos of the university’s commencement. Several licensees also reported receiving revenues from leasing excess office space and providing access to the station’s production facility; for example, a company might pay a licensee to produce a training video at the station’s production facility. WYES in New Orleans operates YES Productions, a for-profit subsidiary. This subsidiary produces most of the sports-oriented programming in the New Orleans metropolitan area, including the National Basketball Association Hornets games, as well as concerts and other entertainment events. According to WYES staff, YES Productions is the largest source of revenues for the licensee. Public Television Is Unlikely to Generate Significant Additional Back-End Revenues Some television programs generate back-end revenues from separate business ventures, such as syndication, the sale of books and videos, and the sale of clothing and toys. In commercial television, broadcast networks and cable channels receive rights to these back-end revenues, and the distribution of these rights depends on the relative amount of up-front investment in the development and production of programming that each participant contributes. In public television, CPB and PBS also negotiate for and receive rights to back-end revenues. 
The extent to which CPB and PBS share in the back-end revenues depends on the relative amount of up-front investment and the importance of PBS as a distribution outlet for producers of programming. While CPB and PBS receive between $7 million and $10 million annually in back-end revenues, a significant increase in this source of revenues appears unlikely. Television Programs Can Generate Back-End Revenues Some television programs generate back-end revenues, which arise from separate business ventures associated with the program. Such business ventures include syndication, sales of books and videos, and sales of clothing and toys. For example, Sesame Street generates back-end revenues from the sale of books, clothing, DVDs, and toys; and Seinfeld, a situation comedy broadcast on NBC from 1990 to 1998, generates back-end revenues from syndication and the sale of DVDs. The Commercial Model for Rights to Back-End Revenues Broadcast networks and cable channels produce some, but not all, of the programming they distribute. Traditionally, studios, such as television divisions of movie studios, produced the vast majority of programming for broadcast networks. Today, broadcast networks and cable channels have several ways to procure programming, including purchasing the programming from an external supplier, such as a studio; entering a joint venture with an external supplier; or producing the programming internally. Broadcast networks typically produce programming for certain parts of the day internally, including morning shows, news and news magazines, and sports; daytime, prime-time, and children's programming are more likely to be externally produced. Among the three cable channels we contacted, one relies primarily on internal production while the other two primarily purchase programming from external suppliers. In commercial television, investment in the up-front development and production of a program influences the relative distribution of back-end revenues. 
We were told that the financing and rights associated with a program are as unique as the program itself, and therefore each financing and rights structure arrangement is unique. However, the extent of up-front investment in the development and production of programming greatly influences the financing and rights structure. Because of the large costs and risks associated with developing and producing television programming, entities providing a significant share of the funding and assuming the financial risk seek and generally receive a greater portion of the rights to back-end revenues. Thus, we were told that the more funding an entity provides, the greater will be its share of back-end revenues. Descriptions follow of the primary approaches to funding commercial television programs and the associated back-end rights. For internally produced programming, the broadcast network or cable channel funds the development and production of the program. The network or cable channel assumes the financial risk associated with the program and retains the back-end rights and associated revenues. For externally produced programming and coproductions, the broadcast network or cable channel funds a lesser portion of the program development and production. For externally produced programming, the network or cable channel pays a license fee for the program, which may cover one-half to two-thirds of the production costs; for coproductions, the network or cable channel provides funding in excess of the typical license fee. However, in either instance, the external supplier must arrange financing to cover the remainder of the development and production costs, referred to as the production deficit. If the network or cable channel pays only the license fee, it may not receive rights to back-end revenues, although it may share in back-end revenues with coproductions. 
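The license-fee arithmetic described above can be illustrated with a short Python sketch; the dollar figures are hypothetical and are not drawn from any actual program budget.

```python
def production_deficit(production_cost, license_fee):
    """Portion of the budget the external supplier must finance itself."""
    return production_cost - license_fee

# Hypothetical figures: a $3 million program whose license fee covers
# two-thirds of production costs (the high end of the range cited above).
cost = 3_000_000
fee = cost * 2 // 3  # $2,000,000 license fee
print(production_deficit(cost, fee))  # 1000000
```

Under a coproduction, the network or cable channel would contribute more than the typical license fee, shrinking the deficit and, typically, entitling it to a share of back-end revenues.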
Public Television Negotiates for and Receives Rights to Back-End Revenues As we mentioned earlier, public television acquires programming from a variety of sources. PBS does not produce programming but rather acquires programming from two primary sources: producing public television stations and independent producers. WETA, WGBH, and WNET are the major producing stations. The producing stations operate as production companies, producing programming internally and also coproducing programming with outside suppliers. Independent producers deliver programming directly to PBS or producing stations. For example, Ken Burns, Scholastic, and Sesame Workshop produce programming for public television. Much like their counterparts in commercial television, CPB and PBS negotiate financing and rights arrangements with producing stations and independent producers. One academic expert with whom we spoke said that two factors influence the rights structure: the size of the up-front investment and the importance of PBS as a distribution outlet for an outside supplier. For public television as for commercial television, a larger up-front investment generally leads to a greater portion of the back-end rights and associated revenues. The importance of PBS as a distribution outlet is such that several producers said they prefer to distribute their programming through public television. For example, two producers of children's programming said they prefer to distribute their programs through public television because of the high-quality, education-based programming distributed by PBS and public television. In these instances, CPB and PBS might receive a more favorable back-end rights arrangement than the extent of their up-front investment would ordinarily warrant because these producers desire PBS distribution for their programs. In response to criticism about its arrangement with the producer of Barney & Friends, CPB revised its revenue-sharing policy in 1997. 
The stated objectives of the revised policy include ensuring the availability of quality programming, reflecting consideration of producers' objectives, and capturing windfall revenues. To fulfill these objectives, CPB created three categories of programming, each with a somewhat different rights structure. Children's Programming. For 15 years, CPB receives a 50/50 share of the net proceeds from the program, after the producer recoups any production deficit. For example, the 50/50 share implies that if CPB provides 25 percent of the project's cost, CPB receives 12.5 percent of the net proceeds. The net proceeds represent the revenues less the expenses associated with producing, marketing, and distributing the ancillary products and uses. Between years 15 and 20, the producer may retain CPB's share of the net proceeds as long as the producer applies those proceeds to future children's programs. Otherwise, CPB receives its share of the net proceeds. Major Events. This category includes programs with a production budget exceeding $500,000 per hour or music, theater, and similar genre programming. CPB receives a 50/50 share of the net proceeds from the program for 20 years; the producer may be allowed to recoup the production deficit before sharing the net proceeds with CPB. Other Programs and Series. This category includes all other programming, which CPB reports accounts for the majority of the programming it funds. For 15 years, CPB receives a 50/50 share of the net proceeds from the program, after a $250,000 threshold. The producer can retain the $250,000 threshold amount as long as the producer uses the proceeds for any public television purpose. Like CPB, PBS negotiates for back-end rights with producing stations and independent producers. PBS staff said that the organization does not take a formulaic approach to rights management. Rather, the rights structure varies from program to program. In general, PBS holds rights to back-end revenues in perpetuity. 
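CPB's 50/50 children's-programming formula described above reduces to simple arithmetic, sketched below in Python. Only the 25 percent example comes from the policy description; the revenue, expense, and deficit figures are hypothetical.

```python
def cpb_share_fraction(cpb_funding_fraction):
    """Under the 50/50 formula, CPB's cut of net proceeds is half of its
    share of the project's cost."""
    return 0.5 * cpb_funding_fraction

def net_proceeds(revenues, expenses, production_deficit):
    """Revenues less producing/marketing/distribution expenses, after the
    producer first recoups any production deficit."""
    return max(revenues - expenses - production_deficit, 0)

# The report's example: CPB funds 25 percent of cost -> 12.5 percent share.
share = cpb_share_fraction(0.25)
print(share)  # 0.125

# Hypothetical figures for the dollar amount CPB would receive.
proceeds = net_proceeds(2_000_000, 800_000, 400_000)
print(share * proceeds)  # 100000.0
```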
However, several factors influence the percentage of back-end revenues that PBS receives. According to PBS staff, these factors include the extent of PBS's investment in the production, the program genre and existence of a production deficit, and obligations to third parties. The program genre is a factor because PBS typically receives a larger percentage of back-end revenues from children's programming than it does from prime-time programming. PBS believes that its distribution adds considerable value to children's programming and that it therefore possesses greater leverage with producers of such programming. This allows PBS to negotiate a more favorable rights structure for children's programming, compared with prime-time programming. With prime-time programming, PBS frequently allows producers to recoup much of the production deficit before PBS begins sharing the back-end revenues. With children's programming, PBS frequently receives a share of back-end revenues proportional to its up-front investment and typically receives these revenues sooner than it would with prime-time programming. Public Television Is Unlikely to Realize Significant Back-End Revenues CPB and PBS both receive back-end revenues. CPB reports receiving between $100,000 and $300,000 annually from back-end sources since 2003. According to PBS staff, since 2000, PBS has received between $7 million and $10 million annually from back-end sources. PBS's back-end revenues exceed CPB's because (1) PBS funds a greater percentage of children's programming, which more frequently generates back-end revenues; and (2) CPB allows PBS to retain and reinvest CPB's share of back-end revenues earned on many programs that CPB funds through PBS. Thus, in aggregate, CPB and PBS receive about $7 million to $10 million annually from back-end sources. Commercial broadcast networks and cable channels also receive back-end revenues. 
According to some networks and cable channels, ancillary revenues from product sales are not a major source of revenue. Cable channels rely on advertising and subscriber fees for revenue and do not depend on ancillary sales for financial sustainability. For example, one cable channel told us that ancillary product sales represent about 1 percent of the channel's total revenues. However, syndication can represent another source of revenue for broadcast networks. Given its statutorily defined mission and limited financial resources, it would likely be difficult for public television to substantially increase back-end revenues. We identified four constraints to public television's realizing significant back-end revenues: (1) relatively few programs are successful, (2) net proceeds are a small percentage of gross retail sales, (3) public television does not generally make significant up-front investments in program development and production, and (4) public television faces competition in the distribution of programming. Few Programs Are Successful. In commercial television, relatively few programs achieve long-term success. A broadcast network might receive 500 to 800 proposals yearly for new programs, and of these, the network might place orders for 12 to 14. Furthermore, only about one-third of new programs return the following year. Thus, we were told that picking a hit is risky. To earn syndication revenue, a program generally must air for 4 years. Regarding ancillary product sales, we were told that a couple of programs might yield most of a cable channel's revenues. Because success is infrequent and uncertain, commercial television production is a portfolio business, and a company must have many programs in the pipeline at any given time to ensure that some are successful. 
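The proposal and renewal figures cited above imply very long odds for any single program, as a short Python calculation shows; treating the one-third renewal rate as a simple multiplier is an assumption made here for illustration.

```python
# Figures cited above: 500-800 proposals a year, orders for 12-14,
# and about one-third of new programs returning the following year.
pickup_low = 12 / 800    # worst case: 12 orders out of 800 proposals
pickup_high = 14 / 500   # best case: 14 orders out of 500 proposals
renewal = 1 / 3

print(f"chance a proposal is ordered: {pickup_low:.1%} to {pickup_high:.1%}")
print(f"chance it is ordered and renewed: "
      f"{pickup_low * renewal:.1%} to {pickup_high * renewal:.1%}")
```

Roughly 1 proposal in 100 to 200 becomes a program that survives its first season, which is why production is described as a portfolio business.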
Officials from CPB and PBS, as well as major producing stations WGBH and WNET, said that their organizations do not base funding or programming decisions on the potential to generate back-end revenues. Rather, these organizations make funding and programming decisions that further the mission of public television. As a result, most public television programs do not generate significant back-end revenues. We were told that children’s programming and Ken Burns’ productions have the greatest likelihood of commercial success. However, these programs are anomalies and are not guaranteed to generate back-end revenues. For example, WGBH staff mentioned that Between the Lions generates little back-end revenue, even though it has been successful in attracting viewers. Similar to the experience of commercial networks and cable channels, PBS staff said that in 2005, 90 percent of their organization’s back-end revenues came from just 23 series. Net Proceeds Are a Small Percentage of Gross Retail Sales. For both commercial and public television, we found that the net proceeds to producers and investors in program-related business ventures are a small fraction of the retail sales prices. For general merchandise associated with a television program, such as toys, the producer enters into an arrangement with one or more manufacturers. The manufacturer produces and distributes the merchandise and pays the program producer a royalty for the sale of merchandise associated with the television program. These royalties are typically 5 to 15 percent of the wholesale price, which is typically 50 percent of the retail price. Thus, for example, on a $20 sale, the royalty will typically be $0.50 to $1.50. The difference represents reductions for manufacturing, distribution, and retail. Figure 7 depicts this relationship. Similar discounts apply to other business ventures associated with television programs. 
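The royalty chain above reduces to a simple calculation. This sketch reproduces the report's $20 example using its typical rates; the 50 percent wholesale fraction is the report's stated typical value, not a fixed industry rule.

```python
def merchandise_royalty(retail_price, royalty_rate, wholesale_fraction=0.5):
    """Producer's royalty: a percentage of the wholesale price, which is
    typically half the retail price."""
    return retail_price * wholesale_fraction * royalty_rate

# The report's worked example: a $20 retail sale at a 5-15 percent royalty
# on wholesale yields $0.50 to $1.50 for the producer.
print(merchandise_royalty(20, 0.05))  # 0.5
print(merchandise_royalty(20, 0.15))  # 1.5
```

The remaining $18.50 to $19.50 of the retail price covers manufacturing, distribution, and retail, which is why even strong merchandise sales translate into modest producer proceeds.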
According to CPB staff, a video distributor generally pays the producer 15 percent of the wholesale price for video products associated with a television program. For books, the producer typically receives between 5 and 10 percent of the retail price. Finally, when a producer syndicates a television program, the producer usually receives 50 to 65 percent of the sales price, and the syndication agent retains the remainder. In some instances, the producer does not own the underlying intellectual property associated with the program. For example, Norman Bridwell created Clifford the Big Red Dog and Marc Brown created Arthur. In these instances, the authors and owners of the intellectual property must be paid from the royalty proceeds. Public Television Does Not Generally Make Significant Up-front Investments. In general, CPB and PBS contribute less than 50 percent of the production budget associated with programming. PBS staff said that the organization generally provides seed money to producers, who must leverage these funds with funds from other organizations. From 2000 through 2005, PBS contributed between 22 and 27 percent of the total production budgets for nationally distributed programs. Producing stations and independent producers confirmed that CPB and PBS contribute relatively modest amounts to programming. PBS provided about 25 percent of WNET’s total production budget over a 3-year period, and PBS’s net contribution to Sesame Workshop is less than 10 percent of the total production costs for Sesame Street. Thus, CPB and PBS appear to contribute less to the total production budget for programming than is typical in commercial television, where the license fee may cover one-half to two-thirds of the production costs. Since CPB and PBS contribute modestly to up-front program development and production, the organizations must share the resulting back-end revenues with other participants. 
As discussed above, rights to back-end revenues are positively correlated with the share of up-front investment. Given their relative contributions to program development and production, it is not surprising that CPB and PBS share in the rights to back-end revenues. Because CPB and PBS provide a modest portion of the up-front program development and production budget, producers must secure the remaining funds from other sources, perhaps requiring the producers to establish relationships with many organizations. For example, WNET said that it cannot fund its productions with just one or two major participants. Producers may also sell some of the rights to back-end revenues in return for up-front funding or in-kind support. Finally, some producers are unable to obtain external funding for an entire program and thus incur production deficits. In these instances, the back-end revenues allow the producer to recoup the production deficit. According to PBS and one producer, most programs are deficit financed. Increasing the proportion of up-front investment in programming appears to be beyond the financial capacity of CPB and PBS and could expose the organizations to significant risks. First, PBS supplies programming for over 170 public television licensees. To accomplish this, CPB and PBS provide some funding to producing stations and independent producers, and rely on these organizations to secure the remainder of the necessary funding. We were told that CPB and PBS do not have sufficient resources to both contribute significant amounts to individual programs and ensure adequate programming for the remainder of the broadcast year. Second, investing in program development and production involves risks. As noted above, relatively few programs are successful, and it is difficult to predict which programs will be successful. Thus, as one broadcast network told us, television production is a portfolio business in which a few winners offset losers. 
Without a significant pool of resources to develop a portfolio of programming, CPB and PBS could be exposed to significant financial risk if the organizations made relatively large investments in a small number of programs. In particular, if the organizations made relatively large investments in programs and those programs did not generate sufficient back-end revenues, the organizations might be unable to adequately supply programming for the remainder of the broadcast year. Public Television Faces Competition in the Distribution of Programming. Even with their modest up-front investments, CPB and PBS could seek greater rights to back-end revenues; however, it is unclear whether the organizations could receive greater rights because of the presence of other distribution outlets. We were told that if CPB and PBS became too aggressive in seeking rights to back-end revenues, producers could distribute their programming through alternative outlets, such as cable channels. For example, Nickelodeon represents an alternative distribution outlet for children’s programming. In fact, Sesame Workshop already distributes two programs—the Upside Down Show and Pinky Dinky Doo—through cable channels. Other producers confirmed that they distribute programming through other outlets besides PBS as well. Agency Comments We provided a draft of this report to CPB; the departments of Agriculture, Commerce, Education, and Homeland Security; FCC; and PBS. CPB and PBS agreed with the report, and their written comments appear in appendixes V and VI, respectively. The Department of Agriculture neither agreed nor disagreed with the report, but it emphasized the extensive burden that the DTV transition imposes on small and rural television stations. The Department of Education, the Department of Homeland Security, and FCC provided technical comments that we incorporated as appropriate. The Department of Commerce had no comments on the report. 
As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees and to the Secretary of Agriculture, the Secretary of Commerce, the President and Chief Executive Officer of the Corporation for Public Broadcasting, the Secretary of Education, the Chairman of the Federal Communications Commission, and the President and Chief Executive Officer of the Public Broadcasting Service. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix VII. Scope and Methodology This report examines the funding and operation of public television throughout the United States. In particular, the report provides information on (1) the organizational structure of public television, (2) the programming and other services that public television provides, (3) the current funding sources for public television, (4) the extent to which public television stations are increasing their nonfederal funding support and developing new sources of nonfederal support, and (5) the extent to which public television benefits financially from business ventures associated with programming and how this compares with commercial broadcasters. 
To respond to the overall objectives of this report, we interviewed officials from the Corporation for Public Broadcasting (CPB), the Federal Communications Commission (FCC), the National Telecommunications and Information Administration of the Department of Commerce, and the Public Broadcasting Service. For the first objective, we reviewed existing literature on the foundation and current structure of public broadcasting and reviewed relevant provisions of the Communications Act of 1934, as amended, and FCC regulations. For the second, third, and fourth objectives, we interviewed officials from 54 of the 173 public television licensees (see table 4). To ensure a diversity of views, we selected licensees according to their type of license, total revenues and percentage of total revenues derived from federal funding, and by the size of the television market where the licensee operates. We also interviewed officials from the Association of Public Television Stations, a membership organization representing public television stations; the Department of Education; the Federal Emergency Management Agency of the Department of Homeland Security; the Ford Foundation; the National Science Foundation; the Rural Utilities Service (RUS) of the Department of Agriculture; and the Urban Institute. Using data from CPB’s Stations Activities Benchmarking Study (SABS), we analyzed 177 licensees’ revenue sources, membership, and programming. (In 2005, the year for which we have SABS data, there were 177 public television licensees; currently, there are 173 licensees.) SABS is a data-gathering mechanism through which licensees provide information annually on their finances and operations; licensees must complete the study to receive their yearly Community Service Grant, which is the mechanism through which CPB distributes federal funding to licensees.
To assess the reliability of SABS data, we reviewed relevant information about the database, including the user manual and a data dictionary, and we interviewed CPB officials and subcontractors for information on data quality assurance procedures. We also performed electronic testing to detect obvious errors in completeness and reasonableness. We concluded that the SABS data were sufficiently reliable for the purposes of this report. For the fifth objective, we interviewed officials from organizations producing programming for public television, including David Grubin Productions, Ken Burns (Florentine Films), HIT Entertainment, Insignia Films, Lumiere Productions, Scholastic, Sesame Workshop, WETA, WGBH, and WNET; the Independent Television Service; commercial broadcast networks and cable channels, including A&E Television Networks, Fox, National Geographic Channel, Nickelodeon, and NBC; and several experts. We also reviewed the relevant media economics literature and materials provided by CBS. We conducted our review from January through November 2006 in accordance with generally accepted government auditing standards. CPB Funding Allocation On the basis of statutory provisions and the receipt of an annual federal appropriation from the Congress, CPB makes an annual Community Service Grant award to each eligible licensee of one or more noncommercial, educational public television station(s). Table 5 summarizes the criteria for awarding funds through each of the three component grants of a Community Service Grant. In addition to the Community Service Grant, CPB provides Criteria Based Grants, including the Local Service Grant and the Distance Service Grant; the latter grant provides additional funds for licensees operating multiple transmitters, which extend television service to outlying areas. Demographics of Public Television Viewers and Members This appendix discusses our analysis of the demographic characteristics of public television viewers and members.
Specifically, we discuss (1) our data sources and methodology, (2) the demographic characteristics of viewers of public television’s prime-time programming, (3) the demographic characteristics of viewers of public television’s children’s programming, and (4) the demographic characteristics of public television members. Data Sources and Methodology We required several data elements to assess the demographic characteristics of public television viewers and members. The following is a list of our primary data sources. We obtained data on a sample of households in the United States from Knowledge Networks/SRI, using Knowledge Networks/SRI’s product The Home Technology Monitor™: Spring 2005 Ownership and Trend Report. From February through April 2005, Knowledge Networks/SRI interviewed a random sample of 1,501 households in the United States. Knowledge Networks/SRI asked participating households a variety of questions about their television viewing, including how many nights per week the household watched various television networks (such as ABC, CBS) and public television. The questions also addressed the household’s demographic characteristics. We used information from the U.S. Census Bureau to obtain demographic information for the U.S. population. The Knowledge Networks/SRI product The Home Technology Monitor™ is a survey of a probability sample of telephone-owning households in the continental United States. To assess the reliability of Knowledge Networks/SRI’s data, we reviewed data documentation on survey methodology and sampling, e-mails with company officials regarding data procedures and weighting, and additional information from a previous reliability assessment. We also performed basic electronic testing to detect obvious errors in completeness and reasonableness. We concluded that these data were sufficiently reliable for the purposes of this report.
To assess the demographic characteristics of public television viewers and members, we conducted t-tests with a Bonferroni adjustment. These tests allowed us, for households responding to Knowledge Networks/SRI’s survey, to compare the demographic characteristics of households that viewed certain public television programming with households that did not view the corresponding programming, and to compare the demographic characteristics of households that are members and former members of public television with households that have never been members of public television. Viewers of Public Television’s Prime- Time Programming We found that households viewing public television’s prime-time programming are more likely to be older, to be African American, and to have children under the age of 18, and are less likely to be Hispanic than are households not viewing this programming. A greater proportion of prime-time viewers are age 50 or older, compared with nonviewers in this age category. However, a greater proportion of prime-time viewers also report having children under the age of 18; 37.0 percent of viewers report having children under the age of 18 compared with 33.5 percent for nonviewers. While 5.3 percent of nonviewers are African American, 9.4 percent of viewers are African American, indicating that African Americans are more likely to watch prime-time public television programming. By contrast, Hispanic households make up 8.8 percent of viewers, compared with 12.3 percent of nonviewers. Prime-time viewers are more likely to have some college education than nonviewers; 73.6 percent of viewers have some college education, compared with 68.7 percent of nonviewers. Finally, we did not find a significant difference in the income level of viewers of public television’s prime-time programming and of nonviewers. 
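A minimal sketch of the comparison method described above, two-sample t-tests with a Bonferroni adjustment, is shown below. The household values are invented for illustration (the actual analysis used Knowledge Networks/SRI survey data), and this standard-library version computes only the pooled t statistic; the Bonferroni step simply divides the significance level by the number of comparisons:

```python
import statistics

def pooled_t_statistic(a, b):
    """Equal-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / (
        pooled_var * (1 / na + 1 / nb)) ** 0.5

# Invented household ages for viewers and nonviewers (illustration only).
viewers = [52, 61, 48, 70, 66, 55, 63, 58]
nonviewers = [45, 50, 41, 47, 53, 44, 49, 46]

# Bonferroni adjustment: when testing m characteristics, judge each
# comparison at alpha / m to control the overall chance of a false positive.
m = 5           # e.g., age, race, ethnicity, children under 18, income
alpha = 0.05
adjusted_alpha = alpha / m  # each test's p-value is compared against 0.01

t = pooled_t_statistic(viewers, nonviewers)
```

A difference is reported as significant only when its p-value falls below the adjusted threshold, which is why the adjustment makes the tests more conservative as the number of demographic characteristics grows.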
Viewers of Public Television Children’s Programming Households that watch public television’s children’s programming are more likely to have low incomes, to be African American and Hispanic, and to have children under the age of 18 than households that do not watch this programming. Of households that watch public television’s children’s programming, 9.5 percent report household income of less than $10,000, compared with 6.1 percent for nonviewers, thereby indicating that low-income households are more likely to view public television’s children’s programming. Households viewing public television’s children’s programming are also more likely than nonviewing households to rely on over-the-air television, rather than cable or satellite television. Both African American and Hispanic households are more likely to watch children’s programming; 13.7 percent of households viewing public television’s children’s programming are African American, compared with 6.2 percent of nonviewers, and 17.4 percent of viewing households are Hispanic, compared with 8.2 percent of nonviewers. Finally, and as expected, households watching public television’s children’s programming are more likely to have children under the age of 18, compared with households not watching this programming. Public Television Members Unlike viewers, current and former public television members are more likely to be older, white, and report higher levels of income. Compared with households that have never been members of public television, a larger percentage of current and former member households are age 50 and older. Furthermore, 80.7 percent of public television members are white, compared with 74.7 percent of nonmembers, indicating that white households are more likely to be current or former members of public television.
Current and former public television members also report higher income levels than nonmembers; 25.6 percent of current and former members report household incomes of $100,000 or more, compared with 11.2 percent for nonmembers, and 43.1 percent of current and former members report household incomes below $50,000, compared with 56.0 percent of nonmembers. Finally, current and former public television members are more likely to have college degrees, compared with nonmembers. Sesame Workshop Sesame Workshop (the Workshop), the producer of Sesame Street and several other children’s programs, is an independent 501(c)(3) nonprofit organization; the Workshop is not affiliated with public television or any government agency. To help ensure its financial self-sufficiency, the Workshop licenses the distribution of products, such as books and videos, associated with its television programs. The revenues derived from these product licensing activities offset some of the production and educational research expenditures associated with the Workshop’s programs. Today, public television pays less than 10 percent of the project costs associated with Sesame Street. Background The Workshop was founded in 1968 as the Children’s Television Workshop. The Carnegie Corporation, CPB, and the Ford Foundation provided the initial start-up funding for the Workshop. At the time, the Workshop was affiliated with National Educational Television (NET) for organizational support. Sesame Street premiered on November 10, 1969. Following the first season, the Workshop severed its ties with NET and organized as a separate entity. Today, the Workshop is an independent 501(c)(3) nonprofit organization and is not affiliated with public television or any government agency. In addition to Sesame Street, the Workshop produces several programs for distribution through public television and domestic cable channels, including Dragon Tales, Pinky Dinky Doo, and the Upside Down Show.
The Workshop also produces programs for international distribution. Product Licensing The Workshop has pursued a course for financial self-sufficiency to fulfill its mission. To this end, the Workshop has partnered with companies, such as Fisher-Price and Random House, for the distribution of products associated with the Workshop’s television programs. These products include books and magazines, clothing, toys, and videos. For the year ending June 30, 2005, the Workshop received approximately $54 million from its product licensing activities. In addition, the Workshop received approximately $21.2 million in direct public support, $11.0 million from government grants, and $20.3 million from program services including government fees and contracts. In total, the Workshop reported revenues of $107.0 million. Figure 8 illustrates the percentage of revenues derived from the various sources for the Workshop. As a nonprofit organization, the Workshop uses its revenues to fund educational research and development of programs and content consistent with its mission. For the year ending June 30, 2005, the Workshop reported expenses of $107.4 million. Nearly three-quarters of the Workshop’s expenses consisted of program production, product licensing, and educational research and marketing. Program production expenses were approximately $47 million, educational research expenses approximately $6 million, and product licensing approximately $15 million. The product licensing expenses include licensing; quality control of general merchandise; and administration, development, and distribution of programs for international television. Figure 9 breaks down the Workshop’s expenses. Relationship with Public Television Since the Workshop and the Public Broadcasting Service (PBS) are separate organizations, PBS negotiates with the Workshop for the broadcast rights to Sesame Street and other Workshop programs. 
According to the Workshop’s Return of Organization Exempt From Income Tax (I.R.S. Form 990), the Workshop incurred direct production expenses of about $13.3 million for Sesame Street. Additionally, the Workshop incurs expenses associated with educational research for the development of program content and with its acquisition of the Sesame Street Muppets characters. According to Workshop officials, PBS pays the Workshop a license fee for Workshop programming. In return, PBS receives (1) exclusive rights to the distribution of the programming for 2 years and (2) a back-end participation in revenues arising from the sale of general merchandise and underwriting. Considering both the license fee and offsetting back-end revenues, Workshop officials noted that PBS’s net contribution to the production of Sesame Street for public television is less than 10 percent of the project’s expenses. Officials from CPB also mentioned that the Workshop bears all the financial risk associated with its production. Thus, the ability of the Workshop to generate revenues from product licensing helps offset the project expenses associated with the programming and outreach provided by the Workshop for public television. Comments from the Corporation for Public Broadcasting Comments from the Public Broadcasting Service GAO Contact and Staff Acknowledgments GAO Contact Mark L. Goldstein, (202) 512-2834 or goldsteinm@gao.gov. Staff Acknowledgments Individuals making key contributions to this report include John Finedore, Assistant Director; Allison Bawden; Michael Clements; H. Brandon Haller; Laura Holliday; Michael Mgebroff; Lisa Mirel; Anna Maria Ortiz; and Mindi Weisenbloom.
How to fund public television has been a concern since the first noncommercial educational station went on the air in 1953. The use of federal funds to help support public television has been a particular point of discussion and debate. This report reviews (1) the organizational structure of public television, (2) the programming and other services that public television provides, (3) the current funding sources for public television, (4) the extent to which public television stations are increasing their nonfederal funding sources and developing new sources of nonfederal support, and (5) the extent to which public television benefits financially from business ventures associated with programming and how this compares with commercial broadcasters. GAO reviewed revenue, membership, and programming data for all public television licensees. GAO also interviewed officials from 54 of public television's 173 licensees, the Corporation for Public Broadcasting, the Public Broadcasting Service, federal agencies, and producers of commercial and public television programming.

Public television is a largely decentralized enterprise of 349 local stations, owned and operated by 173 independent licensees. The stations' operations are funded in part by the Corporation for Public Broadcasting (CPB), a nongovernmental entity that receives federal appropriations. The Public Broadcasting Service (PBS), a nonprofit organization funded by fees paid by member licensees and CPB grants, operates a satellite-based interconnection system to distribute programs to local stations. These programs are created by producers inside public television and by outside production entities. Public television stations broadcast national and local programs and provide a variety of nonbroadcast services to their communities.
PBS prime-time and children's programs account for the majority of broadcast hours, to which stations add instructional and local programs tailored to meet the needs and interests of their communities. Nonbroadcast services include educational, civic engagement, health, and emergency-alert services. In 2005, public television licensees reported annual revenues of $1.8 billion, of which 15 percent came from federal sources and the rest from a variety of nonfederal sources including individuals, businesses, and state and local governments. Federal funds help licensees leverage funds from nonfederal sources. Thirty of 54 licensees GAO interviewed said that cuts in federal funding could lead to a reduction in staff, local programming, or services. In general, smaller licensees receive a higher percentage of revenue from federal sources, and 11 said that cuts in federal support might force the station to shut down. Substantial growth of nonfederal funding appears unlikely. The one area with growth potential is major gifts, which many licensees are pursuing with help from CPB. Program underwriting by businesses and foundations has traditionally been an important source of revenues. A few licensees believe that these revenues could be increased if restrictions on the content of on-air underwriting acknowledgments were relaxed. Many licensees, however, believe that this would go against the noncommercial character of public television and could cause a loss of funding support from other sources. Public television sometimes benefits from business ventures associated with its programs, but these opportunities are infrequent and do not generate significant revenue. Public television does not have the financial resources to invest heavily in the cost of program production to secure a larger share of any resulting back-end revenues.
Moreover, the sale of merchandise associated with a program generally returns only a small percentage of the retail price to the program's producer and investors, as is also true for commercial television programs. GAO provided CPB and PBS with a draft of the report for their review and comment. CPB and PBS agreed with the report.
Strengthening Strategic Human Capital Management An agency’s most important organizational asset is its people—they define the agency’s culture, drive its performance, and embody its knowledge base. Leading public organizations worldwide have found that strategic human capital management must be the centerpiece of any serious change management initiative. However, NASA, like many federal agencies, is facing substantial challenges in attracting and retaining a highly skilled workforce, thus putting the agency’s missions at risk. While NASA is taking comprehensive steps to address this problem across all mission areas, implementing a strategic approach to marshal, manage, and maintain human capital has been a significant challenge. In January 2001, we reported that NASA’s shuttle workforce had declined significantly to the point of reducing NASA’s ability to safely support the shuttle program. Many key areas were not sufficiently staffed by qualified workers, and the remaining workforce showed signs of overwork and fatigue. Recognizing the need to revitalize the shuttle program’s workforce, NASA discontinued its downsizing plans in December 1999 and initiated efforts to hire new staff. In September 2001, we testified that NASA was hiring approximately 200 full-time equivalent staff and that it had focused more attention on human capital in its annual performance plan by outlining an overall strategy to attract and retain skilled workers. However, considerable challenges remain, including the training of new staff and addressing the potential loss of key personnel through retirement. As we reported in January 2003, these challenges have not been mitigated, and work climate indicators, such as forfeited leave and absences from training courses, continue to reflect high levels of job stress. In addition, staffing shortages in many key skill areas of the shuttle program remain a problem, despite the recent hires.
These areas include subsystems engineering, flight software engineering, electrical engineering, environmental control, and shuttle resources management. NASA’s hiring posture for fiscal year 2003 has been to target areas where skill imbalances still exist in the shuttle program. NASA believes that similar workforce problems affect the entire agency and that, as a result, its ability to perform future missions and manage its programs may be at risk. Currently, the average age of NASA’s workforce is over 45, and 15 percent of NASA’s science and engineering employees are eligible to retire; within 5 years, about 25 percent will be retirement eligible. At the same time, the agency is finding it difficult to hire people with science, engineering, and information technology skills—fields critical to NASA’s missions. Within the science and engineering workforce, the over-60 population currently outnumbers the under-30 population nearly 3 to 1. As the pool of scientists and engineers shrinks, competition for these workers intensifies. The agency also faces the loss of significant procurement expertise through 2007, according to NASA’s Inspector General. Coupled with these concerns, NASA has limited capability for personnel tracking and planning, particularly on an agencywide or programwide basis. Furthermore, NASA acknowledges that it needs to complete and submit to the Office of Management and Budget (OMB) a transformation workforce restructuring plan, which it notes, in conjunction with its strategic human capital plan, will be critical to ensuring that skill gaps or deficiencies do not exist in mission-critical occupations. NASA is taking steps to address its workforce challenges. For example: NASA is developing an agencywide integrated workforce planning and analysis system that aims to track the distribution of NASA’s workforce across programs, capture critical competencies and skills, determine management and leadership depth, and facilitate gap analyses.
NASA has completed a pilot of an interim competency management system to facilitate analyses of gaps in skills and competencies. NASA plans to implement the interim system agencywide in 2003 and integrate it with the new comprehensive workforce planning and analysis system in 2005. The new system should foster better management of the existing workforce and enable better strategic decisions about future workforce needs. NASA has developed a strategic human capital plan, which identifies human capital goals, problems, improvement initiatives, and intended outcomes and incorporates strategies and metrics to support the goals. The plan has been approved by OMB and the Office of Personnel Management (OPM). According to NASA, the plan is based on OMB’s scorecard of human capital standards and OPM’s scorecard of supporting human capital dimensions, as well as our own model, which we published in March 2002. NASA has renewed its attention to hiring applicants just out of college and intends to pursue this even more aggressively in coming years. The agency is undertaking a number of initiatives and activities aimed at acquiring and retaining critically needed skills, such as using the new Federal Career Intern Program to hire recent science and engineering graduates, supplementing the workforce with nonpermanent civil servants where it makes sense, and implementing a program to repay student loans to attract and retain employees in critical positions. Finally, NASA has included an objective in its most recently updated strategic plan and fiscal year 2004 performance plan to implement an integrated agencywide approach to human capital management. The plans state that this approach will attract and maintain a workforce that represents America’s diversity and will include the competencies that NASA needs to deliver the sustained levels of high performance that the agency’s challenging mission requires. 
The 108th Congress is currently considering a series of legislative proposals developed by NASA to provide it with further flexibilities and authorities for attracting, retaining, developing, and reshaping a skilled workforce. These include a scholarship-for-service program; a streamlined hiring authority for certain scientific positions; larger and more flexible recruitment, relocation, and retention bonuses; noncompetitive conversions of term employees to permanent status; a more flexible critical pay authority; a more flexible limited-term appointment authority for the senior executive service; and greater flexibility in determining annual leave accrual rates for new hires. We continue to monitor NASA’s progress in resolving its human capital problems, including how well its human capital initiatives and reforms and any new and existing flexibilities and authorities are helping to strategically manage and reshape its workforce. Correcting Weaknesses in Contract Management Much of NASA’s success depends on the success of its contractors—who received more than 85 percent, or $13.3 billion, of NASA’s funds in fiscal year 2002. However, since 1990, we have identified NASA’s contract management function as an area at high risk because of its ineffective systems and processes for overseeing contractor activities. Specifically, NASA has lacked accurate and reliable information on contract spending and has placed little emphasis on end results, product performance, and cost control. NASA has addressed many of these acquisition-related weaknesses, but key tasks remain, including completing the design and implementation of a new integrated financial management system. Since 1990, our reports and testimonies have repeatedly demonstrated just how debilitating these weaknesses in contract management and oversight have been. 
For example, our July 2002 report on the International Space Station found that NASA did not effectively control costs or technical and scheduling risks, provide adequate oversight review, or effectively coordinate efforts with its partners. In other examples, we found that NASA lacked effective systems and processes for overseeing contractor activities and did not emphasize controlling costs. Center-level accounting systems and nonstandard cost-reporting capabilities have weakened NASA’s ability to ensure that contracts are being efficiently and effectively implemented and that budgets are executed as planned. The agency’s financial management environment comprises decentralized, nonintegrated systems with policies, procedures, and practices unique to each of its field centers. For the most part, data formats are not standardized, automated systems are not interfaced, and on-line financial information is not readily available to program managers. NASA’s lack of a fully integrated financial management system also hurts its ability to collect, maintain, and report the full cost of its projects and programs. For example, in March 2002, we testified that NASA was unable to provide us with detailed support for amounts that it reported to the Congress as obligated against space station and related shuttle program cost limits, as required by the National Aeronautics and Space Administration Authorization Act of 2000. In recent years, NASA has made progress in addressing its contract management challenges. For example: In July 1998, we reported that NASA was developing systems to provide oversight and information needed to improve contract management and that it had made progress in evaluating its field centers’ procurement activities on the basis of international quality standards and its own procurement surveys. 
In January 1999, we reported that NASA was implementing its new system for measuring procurement-related activities and had made progress in evaluating procurement functions in its field centers. NASA has also made progress reducing its use of undefinitized contract actions (UCAs)—that is, unnegotiated, or uncosted, contract changes. In 2000, we reported that NASA’s frequent use of undefinitized contract changes could result in contract cost overruns and cost growth in the International Space Station program. In March 2003, NASA’s Office of Inspector General reported that NASA had significantly reduced both the number and dollar amount of undefinitized contract actions since we highlighted UCAs as one reason for designating NASA’s contract management as a major management challenge. NASA has also recognized the urgency of implementing a fully integrated financial management system. We recently reported that NASA has estimated the life-cycle cost of this effort through 2008 to be $861 million. While this is NASA’s third attempt at implementing a new financial management system (NASA’s first two efforts covered 12 years and cost $180 million), this effort is expected to produce an integrated, NASA-wide financial management system through the acquisition and incremental implementation of commercial software packages and related hardware and software components. The core financial management module, which NASA considers to be the backbone of the Integrated Financial Management Program, is currently operating at 6 of NASA’s 10 centers and is expected to be fully operational in June 2003. According to NASA’s business case analysis for the system, the core financial module will provide NASA’s financial and program managers with timely, consistent, and reliable cost and performance information for management decisions. While NASA has made noteworthy progress in strengthening its contract oversight, much work remains. 
As NASA moves ahead in acquiring and implementing its new financial management system, NASA needs to ensure that its systems and processes provide the right data to oversee its programs and contractors—specifically, data to allow comparisons of actual costs to estimates, provide an early warning of cost overruns or other related difficulties, and monitor contract performance and make program requirement trade-off decisions. In addition, NASA must employ proven best practices, including (1) aligning its selection of commercial components of the system with a NASA-wide blueprint, or “enterprise architecture”; (2) analyzing and understanding the dependencies among the commercial components before acquiring and implementing them; (3) following an event-driven system acquisition strategy; (4) employing effective acquisition management processes, such as those governing requirements management, risk management, and test management; (5) ensuring that legacy system data are accurate to avoid loading and perpetuating data errors in the new system; and (6) proactively positioning NASA for the business process changes embedded in the new system, for example, by providing adequate formal and on-the-job training. However, as we reported in April 2003, the core financial module is not being designed to accommodate much of the information needed by program managers and cost estimators. For example, to adequately oversee NASA’s largest contracts, program managers need reliable contract cost data—both budgeted and actual—and the ability to integrate these data with contract schedule information to monitor progress on the contract. However, because program managers were not involved in defining system requirements or reengineering business processes, the core financial module is not being designed to integrate cost and schedule data needed by program managers. 
In addition, because NASA has embedded in the core financial module the same accounting code structure that it uses in its legacy reporting system, the core financial module is not being implemented to capture cost information at the same level of detail that NASA has received from its contractors. Finally, because NASA has done little to reengineer its acquisition management processes to ensure that its contractors consistently provide the cost and performance information needed, the core financial module does not provide cost estimators with the detailed cost data needed to prepare credible cost estimates. Because more work is needed to demonstrate substantial progress in resolving the root causes of NASA’s contract management weaknesses, our 2003 Performance and Accountability Series continued to report contract management as a major management challenge for NASA and a high-risk area. We are continuing to monitor NASA’s progress in addressing contract management weaknesses. In response to a request from the Senate Commerce, Science, and Transportation Committee and the House Science Committee, we continue to assess the extent to which NASA’s financial management system acquisition is in accordance with effective system acquisition practices and is designed to support NASA’s decision-making needs and external reporting requirements. Controlling International Space Station Costs The International Space Station represents an important effort to foster international cooperation in scientific research and space exploration. It is also considered one of the most challenging engineering feats ever attempted. The estimated cost of the space station has mushroomed, and expected completion has been pushed out several years. NASA is taking action to keep costs in check, but its success in this area still faces considerable challenges. 
In the meantime, NASA has had to make substantial cuts in the program, negatively impacting its credibility with the Congress, international partners, and the scientific community. The grounding of the shuttle fleet following the Columbia accident has had a significant impact on the continued assembly and operation of the International Space Station. The shuttle is the primary vehicle for transferring crew and equipment to and from the station and is used to periodically reboost the station into a higher orbit. Although on-orbit assembly of the station has stopped, NASA must continue to address the challenges of developing and sustaining the station and conducting scientific experiments until shuttle flights resume. While controlling cost and schedule and retaining proper workforce levels have been difficult in the past, the shuttle grounding will likely exacerbate these challenges. Because the return-to-flight date for the shuttle fleet is unknown and manifest changes are likely, the final cost and schedule impact on the station cannot yet be determined. NASA has had difficulty predicting and controlling costs and scheduling for the space station since the program’s inception in 1984. In September 1997, we reported that the cost and schedule performance of its prime development contractor, which showed signs of deterioration in 1996, had continued to worsen and that the program’s financial reserves for contingencies had all but evaporated. In our January 2001 Performance and Accountability Series, we reported that the prime contract was initially expected to cost over $5.2 billion and that the assembly of the station was expected to be completed in June 2002. But by October 2000, the prime contractor’s cost had grown to about $9 billion—$986 million of which was for cost overruns—and the current estimate is about $11 billion. 
Because of ongoing negotiations with the international partners and uncertainty associated with the shuttle’s return to flight, the station’s final configuration and assembly date cannot be determined at this time. NASA’s Office of Inspector General also reported cost overruns in a February 2000 audit report, and based on recommendations in that report, NASA agreed to take several actions, including discussing the prime contractor’s cost performance at regularly scheduled meetings and preparing monthly reports to senior management on the overrun status. However, in July 2002, we reported continued cost growth due to an inadequate definition of requirements, changes in program content, schedule delays, and inadequate program oversight. While NASA’s controls should have alerted management to the growing cost problem and the need for action, they were largely ignored because NASA focused on fiscal year budget management rather than on total program cost management. NASA is instituting a number of management and cost-estimating reforms, but significant challenges threaten their successful implementation. First, NASA’s new life-cycle cost estimate for the program—which is based on a three-person crew instead of a seven-person crew, as originally planned— will now have to be revised because of changes to the program’s baseline. The lack of an adequate financial management system for collecting space station cost data only exacerbates this challenge. Second, NASA must still determine how research can be maximized with only a limited crew. Last, NASA has yet to reach agreement with its international partners on an acceptable on-orbit configuration and sharing of research facilities and costs. As a result, the capacity and capabilities of the space station, the scope of research that can be accomplished, and the partners’ share of operating costs are unknown at this time. 
Ongoing cost and schedule weaknesses have profoundly affected the utility of the space station—with substantial cutbacks in construction, the number of crew members, and scientific research. As a part of the space station’s restructuring, further work and funding for the habitation module and crew return vehicle have been deferred, which led to the on-orbit crew being reduced from seven to three members, limiting the crewmember hours that can be devoted to research. Additionally, the number of facilities available for research has been cut from 27 to 20. NASA’s international partners and the scientific community are not satisfied with these and other reductions in capabilities and have raised concerns about the viability of the space station science program. Reducing Space Launch Costs In our earlier work on the costs to build the International Space Station, we identified space shuttle launch costs as a substantial component—almost $50 billion. NASA recognized the need to reduce such costs as it considered alternatives to the space shuttle. Indeed, a key goal of the agency’s earlier effort to develop a reusable launch vehicle was to reduce launch costs from $10,000 per pound on the Space Shuttle to $1,000 through the use of such a vehicle. As we testified in June 2001, NASA’s X-33 program—an attempt to develop and demonstrate advanced technologies needed for future reusable launch vehicles—ended when the agency chose not to fund continued development of the demonstrator vehicle in February 2001. Subsequently, until November 2002, NASA was pursuing its Space Launch Initiative (SLI)—a 5-year, $4.8 billion program to build a new generation of space vehicles to replace its aging space shuttle fleet. SLI was part of NASA’s broader Integrated Space Transportation Plan, which involves operating the space shuttle program through 2020 as successive generations of space transportation vehicles are developed and deployed, beginning around 2011. 
The primary goals for SLI were to reduce the risk of crew loss as well as substantially lower the cost of space transportation so that more funds could be made available for scientific research, technology development, and exploration activities. Currently, NASA spends nearly one-third of its budget on space transportation. In September 2002, we reported that SLI was a considerably complex and challenging endeavor for NASA—from both a technical and business standpoint. For example, SLI would require NASA to develop and advance new technologies for the new vehicle, including (1) new airframe technologies that will include robust, low-cost, low-maintenance structure, tanks, and thermal protection systems, using advanced ceramic and metallic composite materials, and (2) new propulsion technologies, including main propulsion systems, orbital maneuvering systems, main engines, and propellant management. The program would also require NASA to carefully coordinate and communicate with industry and government partners in order to reach agreements on the basic capabilities of the new vehicle, the designs or architectures that should be pursued, the sharing of development costs, and individual partner responsibilities. Last, the SLI project would require careful oversight, especially in view of past difficulties NASA has had in developing the technologies for reusable launch vehicles to replace the space shuttle. These efforts did not achieve their goals primarily because NASA did not develop realistic requirements and, thus, cost estimates, timely acquisition and risk management plans, or adequate and realistic performance goals. 
Most importantly, however, we reported that NASA was incurring a high level of risk in pursuing its plans to select potential designs for the new vehicle without first making other critical decisions, including defining the Department of Defense’s (DOD) role in the program; determining the final configuration of the International Space Station; and identifying the overall direction of NASA’s Integrated Space Transportation Plan. At the time, indications were that NASA and DOD differed on program priorities and requirements; NASA had yet to reach agreement with its international partners on issues that could dramatically impact SLI requirements, such as how many crew members would operate the station. NASA agreed with our findings and, in October 2002, postponed its systems requirements review for SLI so that it could focus on defining DOD’s role, determine the future requirements of the International Space Station, and firm up the agency’s future space transportation needs. In November 2002, the administration submitted to the Congress an amendment to NASA’s fiscal year 2003 budget request to implement a new Integrated Space Transportation Plan. The new plan makes investments to extend the space shuttle’s operational life for continued safe operations and refocuses the SLI program on developing an orbital space plane—which provides a crew transfer capability to and from the space station—and next-generation launch technology. The Integrated Space Transportation Plan is an integral part of our ongoing work assessing NASA’s plans to assure flight safety by modernizing the space shuttle through 2020. As NASA proceeds with its revised plans, it will still be important for NASA to implement management controls that can effectively predict what the total costs of the program will be and minimize risks. These include cost estimates, controls designed to provide early warnings of cost and schedule overruns, and risk mitigation plans. 
With such controls in place, NASA would be better positioned to provide its managers and the Congress with the information needed to ensure that the program is on track and able to meet expectations. Better Mechanisms Needed for Sharing Lessons Learned In addition to taking actions to address its management challenges, NASA uses various mechanisms to communicate lessons garnered from past programs and projects. In 1995, NASA established the Lessons Learned Information System (LLIS), a Web-based lessons database that managers are required to review on an ongoing basis. NASA uses several mechanisms to capture and communicate lessons learned—including training, program reviews, and periodic revisions to agency policies and guidelines—but LLIS is the principal source for sharing lessons agencywide. In January 2002, we reported that NASA had recognized the importance of learning from the past to ensure future mission success and had implemented mechanisms to capture and share lessons learned. However, spacecraft failures persist, and there is no assurance that lessons are being applied toward future mission success. We reported that insufficient risk assessment and planning, poor team communications, inadequate review process, and inadequate system engineering were often cited as major contributors to mishaps. (See table 1.) At that time, we also reported on a survey we conducted of NASA’s program and project managers. The survey revealed that lessons are not routinely identified, collected, or shared by programs and project managers. The survey found that less than one-quarter of the respondents reported that they had submitted lessons to LLIS; almost one-third did not even know whether they had submitted lessons. In addition, most respondents could not identify helpful lessons for their program or project. Furthermore, many respondents indicated that they were dissatisfied with NASA’s lessons learned processes and systems. 
Managers also identified challenges or cultural barriers to the sharing of lessons learned, such as the lack of time to capture or submit lessons and a perception of intolerance for mistakes. They further offered suggestions for areas of improvement, including enhancements to LLIS and implementing mentoring and “storytelling,” or after-action reviews, as additional mechanisms for lessons learning. While NASA’s current knowledge management efforts should lead to some improvement in the sharing of agency lessons and knowledge, they lack ingredients that have been shown to be critical to the success of knowledge management at leading organizations. Cultural resistance to sharing knowledge and the lack of strong support from agency leaders often make it difficult to implement an effective lessons-learning and knowledge-sharing environment. We found that successful industry and government organizations had overcome barriers by making a strong management commitment to knowledge sharing, developing a well-defined business plan for implementing knowledge management, providing incentives to encourage knowledge sharing, and building technology systems to facilitate easier access to information. The application of these principles could increase opportunities for NASA to perform its basic mission of exploring space more effectively. To fulfill its vision, NASA is taking on a major transformation aimed at becoming more integrated and results-oriented, and at reducing risks while working more economically and efficiently. However, to successfully implement its human capital, financial management, and other reforms, NASA will need sustained commitment from senior leaders. Given the high stakes involved, it is critical that NASA’s leadership provide direction, oversight, and sustained attention to ensure that reforms stay on track. 
NASA’s Administrator, who comes to the position with a strong management background and expertise in financial management, has made a personal commitment to change the way NASA does business and has appointed a chief operating officer to provide sustained management attention to strategic planning, organizational alignment, human capital strategy, performance management, and other elements necessary for transformation success. The challenge ahead for NASA will be to achieve the same level of commitment from managers at NASA centers so that NASA can effectively use existing and new authorities to manage its people strategically and quickly implement the tools needed to strengthen management and oversight.
Since its inception, the National Aeronautics and Space Administration (NASA) has undertaken numerous programs that have greatly advanced scientific and technological knowledge. NASA's activities span a broad range of complex and technical endeavors. But the agency is at a critical juncture, and major management improvements are needed. In January of this year, we identified four challenges facing NASA: (1) strengthening strategic human capital management, (2) improving contract management, (3) controlling International Space Station costs, and (4) reducing space launch costs. In summary, these challenges affect NASA's ability to effectively run its largest programs. NASA's ultimate challenge will be in tackling the root problems impeding those programs. This will require (1) instituting a results-oriented culture that fosters knowledge sharing and empowers its workforce to accomplish programmatic goals; (2) ensuring that the agency adheres to management controls to prevent cost overruns and scheduling problems; (3) transforming the financial management organization so it better supports NASA's core mission; and (4) sustaining commitment to change.
Background DOD initiated the SBIRS program to meet all military infrared surveillance requirements through a single, integrated system, and to provide better and timelier data to the Unified Combatant Commanders, U.S. deployed forces, U.S. military strategists, and U.S. allies. SBIRS is to replace the existing infrared system, the Defense Support Program, which has provided early missile warning information since the 1970s. The SBIRS program was originally conceived as having high- and low-orbiting space-based components and a ground segment for mission-data processing and control to improve current capabilities. In 2001, the SBIRS Low component was transferred from the Air Force to the Missile Defense Agency and renamed the Space Tracking and Surveillance System. The Air Force continued developing SBIRS High (herein referred to as “SBIRS”). It, along with its associated ground segment, is one of DOD’s highest priority space programs. The SBIRS program originally consisted of four satellites to operate in geosynchronous earth orbit (GEO), plus one spare, infrared sensors placed on two host satellites in highly elliptical orbit (HEO)—known as “HEO sensors”—and a ground segment for mission-data processing and control. The SBIRS GEO satellite is designed to support two infrared sensors—a scanning sensor and a staring sensor. The first GEO satellite is commonly referred to as GEO 1. Figure 1 shows the GEO satellite that is to operate in space. As a result of past technical and program difficulties experienced during sensor and satellite development, the SBIRS program has encountered cost and schedule increases. These difficulties have led DOD to restructure the program multiple times, including revising program goals in 2002, 2004, and 2005. 
For example, in 2002, the program faced serious problems with software and hardware design progress and, in the Conference Report accompanying the National Defense Authorization Act for Fiscal Year 2002, conferees recommended cutting advance procurement funding due to concerns about program developments and the unclear status of the SBIRS program. At that time, the first satellite launch slipped from 2002 to 2006. In late 2005, SBIRS was restructured for a third time; this restructuring stemmed from a 160 percent increase in estimated unit cost, which triggered a fourth Nunn-McCurdy breach and again postponed the delivery of promised capabilities to the warfighter. Flight Software The flight system software is expected to control the GEO satellite’s mission critical functions and activities. Unlike other software programs that can be deferred and uploaded to the satellite after launch, the flight software cannot be deferred because it is critical to the satellite’s operation and function. The flight software is expected to operate, control, and monitor the GEO satellite’s health, status, and safety. Based on the original design, the flight software was to operate on two of four computer processors onboard the satellite and perform important functions and operations, such as telemetry, thermal control, power management, and fault detection activities. Figure 2 shows a simplified diagram of the original flight software design. Origin and Chronology of Flight Software Events In 1996, development of the flight software began as an independent research and development project by Lockheed Martin—referred to as reusable flight software (RFSW)—to be used for multifunctional “bus” purposes. In 2004, the RFSW was provided to the SBIRS program for development as the flight system software to operate, control, and monitor the GEO satellite’s health, status, and safety. 
At that time, the software needed to address 1,261 requirements in order to satisfy the specific flight software system needs for the GEO satellite. From 2005 to 2006, the Air Force and Lockheed Martin conducted detailed requirements reviews that resulted in the delivery of flight software that was integrated into the satellite’s computers. In January 2007, the flight software underwent testing in a space representative environment called thermal vacuum testing and experienced an unexpectedly high number of unexplained failures. By April 2007, in additional tests, the number of problems escalated well beyond what was expected. At this time, Lockheed Martin notified DOD of the seriousness of the problem. From April 2007 to July 2007, the Air Force and Lockheed Martin analyzed the problems and developed two options: modify the existing software or redesign the software by simplifying the architecture, developing more software, and increasing the robustness of the fault management system. The Air Force chose to redesign the software architecture and began its work with Lockheed Martin on detailed software designs from September 2007 to December 2007. In March 2008, the new design underwent Incremental Design Review Block 1 and was approved by the program review board for the revised cost and schedule baseline. In April 2008, six independent review teams examined the Block 2 design during the Systems Engineering & Incremental Design Review and authorized the Air Force and Lockheed Martin to proceed with formal software coding under the redesign. DOD Is Taking Steps to Mitigate Software Problems, Including Initiatives to Improve Program Oversight To mitigate the software problems, DOD has assessed various alternatives and developed an approach for implementing the software redesign effort and overseeing its development. DOD and the SBIRS contractor are taking steps to address problems, among others, with the original software architecture. 
DOD has redesigned the architecture and is in the midst of developing additional software and testing elements critical to the integration and test of systems. DOD has also undertaken several initiatives to improve its program oversight and to help it better manage the development, including addressing weaknesses in program management responsibility, accountability, and other areas. Steps Have Been Undertaken to Address Poor Software Architecture To address the software’s poor architectural design that ultimately resulted in the unexpected loss of telemetry and commanding for extended periods and unexpected hardware errors, a trade study was conducted by Lockheed Martin to examine options for redesign. Table 1 shows the trade study options considered and the recommendations made. As indicated in table 1, the trade study recommended a simplified architecture that places all the software applications on a single processor, processor “A”, rather than using distributed applications because it represents the best fit with system designs. Lockheed Martin officials stated that the simplified software architecture will address a number of areas that were problematic with the original design, such as the timing of stored programs that failed during thermal vacuum tests. Among other elements, the new design will involve the development of additional software that will also increase the robustness of the fault management system. Major Redesign Approved for Coding Software Approved in April 2008, the new designs have undergone numerous reviews, the last of which was subjected to comprehensive and detailed examination involving six independent review teams. 
Teams composed of personnel from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; Aerospace Corporation, a federally funded research and development center (FFRDC); Lockheed Martin Corporate; the Air Force Space and Missile Systems Center Wing; and the Software Engineering Institute evaluated the technical solutions, development approach, and readiness of the test facilities, among other elements. The objective of the design review was to authorize the start of formal software coding. For the incremental design review, independent review teams were provided detailed information about software issues on the original design, including the severity of the issues and the status of each. Other information included DOD’s approach in managing risk, resolution of critical issues, disposition of deficiency reports, requirements volatility, and integration with ground systems. Technical data included diagrams of the simplified architecture, operating system interface design, and lines of software code that would be impacted from earlier designs. Other information about the software included designs of subsystems, schematics, integration and delivery schedules, and productivity and sizing estimates. Progress Is Being Made to Develop Software and Conduct Tests DOD is making progress to develop needed software and conduct tests of elements that are critical to the first satellite system, called GEO 1. For example, in June 2008, DOD held a design review on software for the fault management system that elicited concurrence from external stakeholders to proceed with coding activities. At the same time, they held a space technical interchange meeting that provided consensus on the methodology and a plan for complete space vehicle testing, including the flight software. 
In July 2008, Lockheed Martin delivered 63,000 of the projected 67,000 source lines of code for the space vehicle and ground software integration effort, including a database that provided data so that development efforts could continue on ground software and testing activities. According to Lockheed Martin, software development efforts followed a disciplined process, except in those cases where waivers were requested and granted by the software engineering process group. Figure 2 shows Lockheed Martin’s process for developing and qualifying flight software. In response to lessons learned from the failed software, which identified the need to add and upgrade simulation and test bed resources, DOD has taken steps to fund critical test bed resources needed to adequately test, model, analyze, and simulate software functions and thereby reduce integration and test risks. For example, an evaluation of the software problems found several contributing factors that prevented the program from identifying the software problems earlier. These included: test beds that had matured in parallel with the flight software and hardware, making it difficult to distinguish between test bed and software issues; oversubscription of test beds and a lack of simulation resources that precluded the program from checking out high-risk areas (timing and stored programs); and insufficient modeling of timing and analysis of stored program implementation, which might have shed light earlier on the lack of robustness. In May 2008, the additional test bed and simulator were brought online and are currently in use. Actions Have Been Undertaken to Address Program Weaknesses and Improve Oversight of GEO Development DOD and Lockheed Martin have undertaken several initiatives to address areas of program risk, such as efforts to improve oversight of GEO 1 and flight software development. 
These include acting on recommendations made in an Independent Program Assessment (IPA) that was conducted to ensure the validity of the technical, cost, and schedule baselines. The IPA study assessed contractor performance, evaluated program risk areas, and made recommendations on where program improvements could be made. In November 2007, officials from the Air Force, Lockheed Martin, and the Aerospace Corporation reported the IPA findings. Table 2 shows the IPA findings, recommendations, and status of implementation efforts. As indicated in table 2, the Air Force and Lockheed Martin have taken actions to address areas of risk. Among other things, these actions included deliberately emphasizing the software development process, where adherence to process disciplines had been lacking, and enhancing the interaction between the cost and schedule functions, since the Air Force organizational structure was found to be flawed because it did not mirror the contractor’s more traditional approach, in which these functions are combined for better program control. To improve the oversight and management of the GEO 1 satellite and software development, the Air Force and Lockheed Martin established a dedicated execution team focused on overseeing the assembly, integration, and test of software and hardware and ensuring delivery of the GEO 1 satellite. The execution team is a joint effort that includes the Air Force, Lockheed Martin, and the Aerospace Corporation. As part of the management approach, the execution team is responsible for conducting daily meetings to review “inch stone” metrics and to resolve issues. The execution team also meets weekly with the Executive Program Management leadership to provide early insight into issues and resolve organizational weaknesses, and it conducts monthly reviews with senior executives to provide consistent communication and allow opportunities for guidance. 
According to DOD officials, the execution team not only improved oversight of software development and management of the GEO 1 effort but also addressed weaknesses identified in the IPA study. These weaknesses included, among others, the need to fix the program’s responsibility, accountability, and authority disconnects. Officials reported that the execution team helped alleviate the strained relationships that had existed between the Air Force and Lockheed Martin, where adversarial relationships and morale problems were evident. DOD’s Plan for Resolving the Software Problem Is Optimistic While DOD has estimated that the SBIRS program will be delayed by 15 months and cost $414 million to resolve the software problems, those estimates appear too optimistic, given the cost and schedule risks involved. For example, SBIRS contractors report low confidence that the software can be produced in time to meet the December 2009 satellite launch goal. Further, DOD and the contractor face significant challenges and risks that could result in more time and money being required to meet program goals, including the bypassing of some disciplined software practices, which adds risk to cost and schedule. Finally, as of August 2008, DOD reported that SBIRS was already behind schedule on some software development efforts, and thousands of activities remain that must be integrated and tested across various systems, with cost and schedule implications if problems or unintended consequences occur. Low Confidence That Software Can Be Produced to Meet Cost and Schedule Goals A major concern is whether the software can feasibly be produced in time to meet the estimated launch goal. For example, technical contractors—Aerospace Corporation, Galorath Inc., and Lockheed Martin—estimated the confidence to be “low” that the software can be developed within the tight time frames. 
These estimates are based on widely accepted models (System Evaluation and Estimation of Resources, Software Estimating Model, and Risk Assessment) that take into account the effective size of the software, staffing of the effort, complexity, volatility of software requirements, integration, and the risk of anticipated rework and failure in system tests. Using DOD’s self-imposed baseline schedule goal, software productivity estimates show very low confidence levels that the schedule goal can be met. Table 3 shows the confidence in meeting the GEO 1 launch goal of December 2009 under the various models used. As indicated in table 3, one estimate shows only a 5 percent confidence that the software can be produced in time to meet the schedule goal, while the other estimate shows a confidence level of less than 10 percent. Lockheed Martin’s own software productivity estimate shows a 50 percent confidence level in meeting the December 2009 launch schedule, but its estimate assumes (1) a higher productivity than has been demonstrated and (2) that the software will require less effort, which has not been the program’s experience. According to DOD’s Cost Analysis Improvement Group, if productivity on software does not materialize, or if problems occur during testing and integration beyond what was marginally planned for, it could cost an additional $400 million for each year of schedule slippage. Major Challenges and Risks to the Redesign and Development Effort Still Exist Based on an April 2008 review of the revised software designs and software development approach, the independent review teams—composed of personnel from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Aerospace Corporation; Lockheed Martin Corporate; the Air Force Space and Missile Systems Center Wing; and the Software Engineering Institute—concluded that the program should proceed with formal software coding but also expressed concern about the ambitious schedule. 
Specifically, the review teams cited the program’s aggressive schedule as a major challenge because it allows “little margin for error” and concluded that the program faces a high risk of not meeting the schedule. Table 4 shows the weaknesses and risks to software development. Although the Air Force and Lockheed Martin are committed to the effort and have built in a 120-day margin to fix unexpected and unforeseeable problems, a computer engineer from the Defense Contract Management Agency who is familiar with the program believes that the margin is insufficient because the planned schedule considers only routine development activities and that additional time will likely be needed to address any unanticipated problems. Bypassing Disciplined Software Practices Adds Risk Further, to meet the cost and schedule goals, the program is using approaches that will increase program risk. These risks stem from waivers requested by Lockheed Martin under provisions of the program’s software development process. In accordance with the SBIRS Software Development Plan, waivers for Flight Software System 1.5 were generated and approved by a software engineering process group so that developers could deviate from the established processes. These deviations allowed the program to shortcut important processes in order to meet the ambitious schedule goal rather than follow a disciplined process to develop the software. For example, a waiver was granted for software design to be done in parallel with the software specification activity. However, according to DOD, the risk is that requirements could be rejected and that rework may be required in coding or design. Another waiver was granted for software unit integration testing to be done in parallel with formal unit testing. 
According to DOD, the risk is that formal unit testing may find problems that were not identified during prior informal (developer) unit testing, thereby necessitating possible rework. Cost and Schedule Goals Are at Risk Because Some Software Elements Are Behind Schedule, and Complex Integration and Other Activities Remain Some of the flight software’s elements are already behind schedule, and a significant amount of work remains to be done, a source of concern to DOD. For example, DOD reported that, as of August 2008, the software qualification test case and script development effort was already a month behind schedule. Also, final delivery of the Block 2 flight software is now forecasted to be at least 2 weeks late. Other problems that could set back SBIRS are the thousands of integration and coordination activities that must take place as the effort ramps up. For example, Lockheed Martin reports that the schedule contains more than 14,500 tasks, beginning in January 2008, across multiple systems. This means that the flight software test activities and integration efforts must all be integrated in a “single flow,” without consequence, across a broad spectrum of systems, such as ground, space, and database systems, among others. Software experts, independent reviewers, and government officials acknowledged that the aggressive schedule, when combined with the significant amount of work that remains, is the biggest challenge facing the program. In addition, external factors could affect the SBIRS schedule goal. For example, DOD reports that the GEO 1 satellite launch could be affected by other satellites scheduled to launch prior to the SBIRS launch. Essentially, these launch activities use the same launch range resources that will be required to launch the GEO 1 satellite, and delays in any of these events could create unintended consequences for the SBIRS GEO 1 launch goal. 
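The cost sensitivity reported by DOD’s Cost Analysis Improvement Group can be expressed as simple arithmetic. The sketch below is illustrative only: the function name and sample values are ours, with the $414 million resolution estimate and the roughly $400 million of added cost per year of schedule slippage drawn from the figures cited in this report.

```python
def resolution_cost_millions(baseline_millions, slip_years, cost_per_slip_year_millions=400):
    """Illustrative arithmetic only (not an official DOD model): DOD's
    estimate to resolve the software problems, plus the Cost Analysis
    Improvement Group's projection of roughly $400 million of additional
    cost for each year of schedule slippage."""
    return baseline_millions + slip_years * cost_per_slip_year_millions

# A one-year slip would roughly double the estimated cost of the fix:
print(resolution_cost_millions(414, 1))  # 814 ($ millions)
```

Under these figures, even a modest slip would dwarf the $45 million in additional funding that Congress approved to mitigate launch delays, which is why the schedule margin matters so much.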
Conclusions Given the technical complexity of the program and SBIRS’ poor program history, it is unwise for DOD to pursue such ambitious goals for resolving the flight software problem. More than 12 years after its inception, the SBIRS program continues to face major hurdles that have proven technically challenging and substantially more costly than originally envisioned. The testing failure of the flight software is further proof that sophisticated technology and the inherent complexities of software continue to be underestimated. To its credit, DOD has instilled greater discipline by involving outside experts, regaining control of development activities, and dealing with the poor relationships that had existed for some time. To ensure that such steps can lead to success, adherence to disciplined software practices should be made a priority over measures taken to compress the schedule for the sake of meeting the self-imposed launch goal. Prioritizing such disciplines will improve efforts to acquire a better product, increase the executability of the program, and reduce program risk. In turn, establishing goals that are synchronized with such priorities will allow DOD to achieve expectations and program deliverables with greater reliability. Essentially, these steps will position the leadership to better direct investments by establishing goals with greater confidence that they can be achieved. Recommendations for Executive Action To better ensure that SBIRS can meet the cost and schedule goals for resolving the flight software problems as well as launch the first satellite on schedule, we recommend that the Secretary of Defense revise the cost and schedule estimates based on more realistic assumptions to increase the confidence of success, and require that the contractor make adherence to disciplined software practices a priority to reduce program risk. Agency Comments and Our Evaluation DOD provided us with written comments on a draft of this report. 
DOD partially concurred with our recommendation to revise the cost and schedule estimates based on more realistic assumptions, and concurred with our recommendation to require the contractor to make adherence to disciplined practices a priority. DOD’s comments appear in appendix II. In its comments, DOD partially concurred with the recommendation that the cost and schedule estimates be revised based on more realistic assumptions to increase the confidence of success. DOD noted that the current goals are executable on the basis of available management reserve and schedule margin. In the event that the program encounters unforeseeable problems that cause further delays, DOD stated, Congress has approved an additional $45 million in funding to mitigate any future launch delays. The department pointed out that OSD is working with the SBIRS program to hold a more specific review of the flight software and stated that it would consider the results of this review in any decision to modify the cost and schedule estimates. DOD expects these assessments to be complete by the end of calendar year 2008. As indicated in our report, SBIRS has been restructured several times because DOD underestimated the technical complexity and inherent challenges associated with software, among other technical elements. Neither the software assessment conducted to determine the confidence of producing the software nor the independent reviewers who examined the redesign approach indicated that the current goals were executable. Rather, as we noted, software experts, independent reviewers, and the government officials we interviewed expressed concern over the aggressive schedule and questionable schedule margin, which the Defense Contract Management Agency believes is insufficient. Moreover, as we previously reported and noted in this report, the expenditure of management reserves has been particularly problematic because these funds were being rapidly spent. 
Further, while OSD’s plan to assess the software and its willingness to revise the cost and schedule goals appear plausible, we believe this approach falls short of what is needed to increase the confidence of success, for the reasons we cited. In light of the program’s risks, poor performance history, and the technical challenges expected during integration, we maintain that establishing goals based on more realistic assumptions would place DOD in a better position to achieve cost and schedule goals with greater confidence. DOD concurred with the second recommendation, stating that adherence to disciplined software development processes improves the quality and predictability of software development while reducing the amount of rework. DOD further stated that the program office and the contractor jointly accepted two process waivers to streamline the process but that these waivers have had no adverse impact on the software development effort. In order to keep the focus on quality software deliveries, DOD noted that the program would disapprove any waivers that might compromise the team’s ability to complete the development. We are encouraged by DOD’s efforts to adhere to disciplined software processes to improve the quality and predictability of development. In this endeavor, DOD stated that it would disapprove any waivers that could compromise the development effort. However, it is unclear exactly what criteria DOD will use to determine whether a waiver will compromise development efforts. Without such criteria, there is no mechanism to ensure that any waivers that are granted will not have a material effect on software development. We also received technical comments from DOD, which have been addressed in the report as appropriate. 
We are sending copies of this report to the Secretary of Defense; the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Secretary of the Air Force; and the Director, Office of Management and Budget. Copies will also be made available to others on request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4589. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix III. Appendix I: Scope and Methodology To identify the Space Based Infrared System’s (SBIRS) approach to mitigating the flight software problems, we reviewed the plans and alternatives the Department of Defense (DOD) put in place to mitigate the software problem. We also interviewed Air Force, Defense Contract Management Agency, and Lockheed Martin officials who were responsible for management and oversight of the software development effort. We also examined technical reports, studies, and analyses about the factors that contributed to the flight software problems, as well as planning documents and alternatives that were considered in fixing the software problem. To assess the cost and schedule risks and challenges of the way forward, we held discussions with both DOD and Lockheed Martin on their efforts to assess the program risks and challenges, including their approach to managing, mitigating, and redesigning the flight software that is to operate, control, and monitor the satellite’s health, status, and safety. We also reviewed schedules, risk reports, analyses, program assessments, and independent review reports pertaining to the flight software’s redesign, and selected assessments by independent sources that were used, in part, as a basis for selecting December 2009 as the launch goal for the GEO 1 satellite. 
We also interviewed Air Force and contractor officials responsible for developing and executing the redesign, including a contractor hired for its expertise in estimating software productivity. We conducted this performance audit at the Office of the Secretary of Defense, Washington, D.C.; the Space and Missile Systems Center, Los Angeles Air Force Base, California; and Lockheed Martin and the Defense Contract Management Agency, Sunnyvale, California, from April to August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. In addition, we drew from our body of past work on weapon systems acquisition practices and disciplined software practices. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments Contact Acknowledgments In addition to the individual named above, Arthur Gallegos, Assistant Director; John M. Ortiz Jr.; Claire A. Cyrnak; Madhav S. Panwar; Bob S. Swierczek; and Alyssa B. Weir made key contributions to this report. Related GAO Products Space Acquisitions: Major Space Programs Still at Risk for Cost and Schedule Increases. GAO-08-552T. Washington, D.C.: March 4, 2008. Space Acquisitions: Space Based Infrared System High Program and Its Alternative. GAO-07-1088R. Washington, D.C.: September 12, 2007. Space Acquisitions: Actions Needed to Expand and Sustain Use of Best Practices. GAO-07-730T. Washington, D.C.: April 19, 2007. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007. Space Acquisitions: DOD Needs to Take More Action to Address Unrealistic Initial Cost Estimates of Space Systems. GAO-07-96 (DATE?) 
Space Acquisitions: Improvements Needed in Space Systems Acquisitions and Keys to Achieving Them. GAO-06-626T. Washington, D.C.: April 6, 2006. Space Acquisitions: Stronger Development Practices and Investment Planning Needed to Address Continuing Problems. GAO-05-891T. Washington, D.C.: July 12, 2005. Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004. Defense Acquisitions: Risks Posed by DOD’s New Space Systems Acquisition Policy. GAO-04-379R. Washington, D.C.: January 29, 2004. Defense Acquisitions: Improvements Needed in Space Systems Acquisition Policy to Optimize Growing Investment in Space. GAO-04-253T. Washington, D.C.: November 18, 2003. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
In 1996, DOD initiated the Space Based Infrared System (SBIRS) to replace the nation's current missile detection system and to provide expanded missile warning capability. Since then, SBIRS has been restructured several times to stem cost increases and schedule delays, including revising program goals in 2002, 2004, and 2005. These actions were partly due to the challenges of developing sophisticated technologies and software. In 2007, SBIRS had a major setback when flight software for the first satellite underwent testing and failed, a failure caused by design issues. DOD developed a plan for resolving these issues and revised its cost and schedule goals. GAO assessed (1) the approach used to mitigate the problems and (2) the cost and schedule risks and challenges of that approach. To conduct this work, GAO contacted, met with, and performed detailed work at numerous DOD and contractor offices and reviewed technical documents on flight software. To mitigate the SBIRS flight software problems, DOD has assessed various alternatives and developed an approach to implement the software redesign and oversee its development. In April 2008, DOD approved the redesign effort, which addressed problems with the original design that affected the timing of stored programs, the distribution of control between processors, and failure at the hardware interface level. Six review teams composed of 70 personnel in all evaluated the designs to ensure that the technical solutions, development approach, and readiness of test facilities were adequate. DOD and its contractor are now implementing the simplified architecture, developing new software, and testing elements critical to the integration and test of systems. 
DOD is also improving its program oversight and better managing the SBIRS development by acting on the recommendations of an Independent Program Assessment; addressing weaknesses in management responsibility, accountability, and organizational structure; and establishing a central execution team. DOD has estimated that the SBIRS program will be delayed by 15 months and require $414 million in funding to resolve the flight software problems, but these estimates appear optimistic. For example, confidence levels--based on the program's ability to develop, integrate, and test software in time to meet the schedule goal--have been assessed as low. Further, the review teams who approved the designs to start coding software report that the program's aggressive schedule is a major challenge because it allows "little margin for error." DOD has also introduced risk by granting waivers to streamline the software development processes to meet the aggressive schedule. These waivers allow the program to deviate from disciplined processes in order to compress the schedule and meet the goal. In addition, some software elements are behind schedule, and thousands of software activities and deliverables remain to be integrated. External factors pose risk as well: delays in other satellite launches that use the same launch range resources could create unintended consequences for the SBIRS launch goal. If DOD needs additional time or encounters problems beyond what was planned for, more funds will be needed, and the launch of the first satellite in December 2009 could be jeopardized.
Background The United States Postal Service commenced operations on July 1, 1971, in accordance with the provisions of the Postal Reorganization Act of 1970 (P.L. 91-375). The Service is an independent establishment of the executive branch with the goal of operating on a break-even basis, covering its expenses almost entirely through postal revenues. The equity the U.S. government held in the former Post Office Department became the initial capital of USPS (approximately $3 billion), and the U.S. government remained responsible for all the liabilities attributable to operations of the former Post Office Department. At inception, the Postal Service did not have any unpaid liabilities to OPM for retirement benefits. At that time, USPS employees participated in the federal Civil Service Retirement System (CSRS), which provided them the same benefits as other federal employees; that is, their future retirement benefits would be paid by OPM based on payroll deductions and USPS contributions under provisions of law governing CSRS. With over 900,000 employees at the end of fiscal year 2000, USPS has the largest federal civilian workforce, and Fortune magazine ranks it as the second largest employer in the United States. Like other federal career employees, USPS career employees participate in one of three federal retirement systems primarily administered by OPM: the Civil Service Retirement System, the Federal Employees Retirement System (FERS), and the CSRS Offset Plan. At the end of fiscal year 2000, nearly 786,000 USPS career employees, or 87 percent of the Service’s employees, were participating in one of the three federal retirement programs. The remaining 13 percent were casual labor and transitional employees, who do not participate in the federal retirement plans. 
Of the total career employees, 263,383 employees (33.5 percent) participated in CSRS; 510,509 employees (65 percent) participated in FERS; and 12,021 employees (1.5 percent) participated in the CSRS Offset Plan. Plan 1 – Civil Service Retirement System CSRS is administered by OPM, which maintains the Civil Service Retirement and Disability Fund (CSRDF) for federal employees. CSRS is a defined benefit retirement plan, which provides a basic annuity to participants. Benefit payments to federal retirees and their survivors participating in any of the three retirement plans are made from CSRDF. CSRS covers employees hired prior to January 1, 1984. Employees hired after December 31, 1983, are not eligible for coverage in CSRS but participate in either FERS or the CSRS Offset Plan. Contributions to the CSRS plan are collected from the Service and its employees and deposited into OPM’s CSRDF, according to the proportionate sharing arrangement established in law. Participating USPS employees contribute in the same proportion and percentage amounts as most other civilian federal employees. USPS and its employees also contribute to Medicare at the rate prescribed by law. In addition to the contributions to CSRS discussed above, the Service is responsible for making additional contributions to CSRDF to fund the future retirement costs of increases to pay that the Service granted to employees under terms in new labor contracts and the annual COLAs to retirees, which were prescribed by law. The provisions for the Service to make these additional contributions to OPM were included in amendments to the law governing CSRS and make the funding of USPS retirement plans different from that of other federal agencies making annual contributions to the federal CSRS retirement plan. USPS makes these payments without any additional contributions from employee pay. Plan 2 – Federal Employees Retirement System In the Social Security Amendments of 1983 (P.L. 
98-21), Congress mandated participation in Social Security by all civilian federal employees initially hired after December 31, 1983. Because Social Security provides both retirement and disability benefits, and because enrolling federal workers in both CSRS and Social Security would have resulted in employee contributions of more than 13 percent of each worker’s salary, Congress directed the development of a new federal employee retirement system with Social Security as the cornerstone. The result of these efforts was FERS, created by P.L. 99-335, enacted on June 6, 1986. All permanent federal employees, including USPS employees, whose initial federal employment began after December 31, 1983, are covered by FERS, as are employees who voluntarily switched from CSRS to FERS during specified “open seasons.” FERS consists of three elements: Social Security, a FERS annuity (a defined benefit plan), and a Thrift Savings Plan (TSP) (a defined contribution retirement savings and investment plan). The Service and its employees also contribute to Social Security and Medicare at the rates prescribed by law. In addition, USPS is required to contribute to TSP a minimum of 1 percent per year of basic pay for employees covered by FERS. The Service also matches voluntary employee contributions up to 3 percent of an employee’s basic pay, and 50 percent of a contribution from 3 to 5 percent of basic pay. Plan 3 – CSRS Offset Plan In the legislation that created FERS, Congress also created the CSRS Offset Plan. Typically, CSRS Offset retirement applies to employees who had breaks in service that exceeded 1 year and ended after 1983 and had 5 years of creditable civilian service as of January 1, 1987. CSRS Offset retirement coverage also applies to employees hired before January 1, 1984, who acquired CSRS coverage for the first time after that date and had at least 5 years of creditable service by January 1, 1987. 
Under this plan, each employee and employer contribute an equal amount into Social Security, as prescribed by law. In retirement, these employees’ CSRS benefits are reduced (offset) by a portion of their Social Security benefits. Under the provisions of the CSRS Offset Plan, both USPS and the employee contribute a percentage of the employee’s basic pay to the CSRS fund, Social Security, and Medicare at the statutorily prescribed rates. Scope and Methodology To better understand the full nature and components of the Service’s retirement plans, we (1) interviewed officials at the Service and OPM, (2) reviewed and analyzed documents, including legislation, funding plans, budget documents, financial statements, USPS projections, and fiscal impact statements, and (3) analyzed the future projected costs of these plans. Our scope did not include identifying ways for the Service to respond to the current legal framework for funding its retirement liabilities to OPM for annual increases to CSRS basic pay and retiree COLAs. We conducted our work from May 2001 through October 2001 in accordance with generally accepted government auditing standards. We did not independently verify underlying data. We obtained oral comments from USPS and written comments from OPM on a draft of this report. OPM’s written comments are reprinted in appendix IV. Estimated Future Annual Costs for the Three Plans The Service projects that the total annual retirement costs for the three plans, including installment payments for its additional liability for increases in pay and retiree COLAs under CSRS, will increase over the next 10 years from $8.5 billion in fiscal year 2000 to an estimated $14 billion in fiscal year 2010. The FERS portion of that total, including Social Security, is estimated to more than double from $4.1 billion in fiscal year 2000 to $9 billion in fiscal year 2010. 
See figure 1 for the total of all the retirement plans’ historical and projected annual costs, including the installment payments made by USPS for its additional obligation to OPM for increases in pay and retiree COLAs under CSRS. See appendix I for the dollar amounts of each plan’s cost, including payments for increases to CSRS employees’ basic pay and retirees’ COLAs. A discussion of the costs for each of the three individual plans follows. Costs and Payments of the Three Individual Plans The cost of the CSRS retirement plan for fiscal year 2000 was $800 million, excluding the payments made toward the Service’s additional obligation to OPM. Although total pension costs for all three retirement plans are expected to increase significantly, USPS estimates that the cost for the standard, annual CSRS contributions will decrease in the future as current employees participating in the plan begin to retire. (See figure 2.) In addition to the standard, annual contributions to CSRS discussed above, the Service is responsible for paying additional amounts to CSRDF to fund the future costs of pay increases that USPS granted to its employees under terms in new labor contracts and the annual COLAs to retirees, which were prescribed by law. The provisions for USPS to make these additional contributions to OPM make the funding of the Service’s retirement plans for both pay increases and COLAs different from other federal agencies making annual contributions to the federal CSRS retirement plan. As described in more detail in a later section of this report, USPS makes annual installment payments to OPM toward its additional liability for CSRS employees’ pay increases and retirees’ COLAs. The total installment payment for these liabilities in fiscal year 2000 was $3.6 billion, which included $1.6 billion in interest charges and $2 billion in principal payments. 
The Service estimates that its annual payments for these liabilities will continue to be significant, increasing steadily through fiscal year 2010, then decreasing at some later point as the number of employees, retirees, and survivors under CSRS decreases. (See figure 3.) The cost of FERS for fiscal year 2000 was $4.1 billion. The Service estimates that the annual cost of FERS will more than double to approximately $9 billion by fiscal year 2010. The large increase in FERS costs is expected because new employees (those hired after January 1, 1984) are generally eligible to participate only in FERS. (See figure 4.) As employees in CSRS retire, they will be replaced by employees participating in FERS. The employers’ standard, annual contributions toward FERS are higher than those for CSRS because FERS contributions are calculated by OPM on a stronger actuarial basis than CSRS contributions. FERS contributions are on a “dynamic” basis, which includes assumptions for future rates of inflation, future salary increases, and a provision for an assumed percentage rate of return on plan investments. Together, USPS contributions and employee withholdings are intended to fully fund the annual pension cost for employees covered under FERS over the employees’ working careers with the Service. CSRS Offset Plan The cost of this plan for fiscal year 2000 was $35 million. Although total pension costs for USPS are expected to increase significantly, it estimates that the standard, annual contributions to the CSRS Offset Plan will decrease in the future as current employees participating in the plan begin to retire. (See figure 5.) The plan became effective during fiscal year 1986, but it covered certain employees hired after 1983; consequently, amounts paid in fiscal years 1986 and 1987 by the Service represent the retroactive effect of those costs. 
Funding Status of the Three Plans The Service pays in full its standard, annual retirement payments to OPM under provisions of the laws governing all federal employees participating in CSRS, FERS, and the CSRS Offset Plan. In addition, the Service reported in its fiscal year 2000 audited financial statements an outstanding liability for future retirement benefits of $32.2 billion (excluding $16.5 billion of future related interest charges over 30 years) due to obligations that made the Service liable for pay increases that employees received under terms of new labor contracts and for COLAs to retirees, who retired on or after July 1, 1971, and their survivors, under the CSRS retirement plan. COLAs are based on the rate of inflation as measured by the Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W). As prescribed by law, all CSRS retirees and survivors receive yearly COLAs equal to the annual percentage change in the CPI-W. The Service’s total liability balance has generally been increasing each year, even though it has been making annual installment payments toward this retirement liability. (See figure 6.) The increase is occurring because the annual additions to the Service’s liability have generally been greater than the annual principal payments made under the installment payment provisions set forth by law. For instance, the Service’s fiscal year 2000 additional liability was $2.7 billion, while its required principal installment payment was $2 billion, plus interest of $1.6 billion. Figure 7 displays how the annual increases in the liability have cumulatively increased more rapidly than the annual accumulated principal payments. Portion of Liability Due to Increases in Employee Pay By law, whenever USPS increases a CSRS employee’s pay as a result of new labor union contracts, it is liable to OPM for the present value of additional future retirement benefits to be paid to the employee upon retirement as a result of the pay increase. 
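The growth dynamic described above can be reduced to a simple yearly update: the unpaid balance rises whenever the new liability assessed for the year exceeds the principal portion of the installment payment. The sketch below is a minimal illustration (the function name is my own, and the update rule simplifies OPM's layered installment schedules); the fiscal year 2000 figures are taken from this report.

```python
def update_balance(balance, new_liability, principal_payment):
    """One year's change in the unpaid liability balance (in billions).

    The balance grows whenever the new liability assessed for the year
    exceeds the principal portion of the installment payment.  Interest
    is charged on the unpaid balance but does not reduce principal, so
    it is not part of this update.
    """
    return balance + new_liability - principal_payment

# Fiscal year 2000 figures from this report (billions of dollars):
# $2.7 billion in new liability versus $2.0 billion in principal payments.
change = update_balance(0.0, 2.7, 2.0)
print(f"Net growth in the balance: ${change:.1f} billion")  # $0.7 billion
```

Run on the fiscal year 2000 figures, the net growth is $0.7 billion, which is why the balance keeps rising even as installment payments are made.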
When an increase in pay is authorized, OPM determines the Service’s present value of the retirement liability for the future retirement benefits that will result from the pay increase. The Service is required to pay for this incremental liability in 30 equal annual installments, with interest computed at the rate used in the most recent valuation of CSRS, with the first payment due at the end of the fiscal year in which an increase in pay becomes effective. The interest rate for calculating the present value of the incremental liability and for determining the amortization payments has been 5 percent for 29 years. According to OPM’s Office of the Actuary, the law prescribes that the calculation of the additional annual cost of retirement benefits due to increases in basic pay be made on the “static” basis, which assumes no future inflation and no future general schedule salary increases. OPM’s Board of Actuaries has recommended a 5-percent discount rate for the purpose of the static valuation. OPM does not make the calculations on a “dynamic” basis, which would include an assumed annual rate of inflation, future salary increases, and a provision for an assumed percentage rate of return on plan investments. For fiscal year 1972, the Service’s first fiscal year of operations, OPM determined that the additional liability for basic pay increases was approximately $1 billion. Under the 30-year installment arrangement, the Service paid $63 million toward that additional cost for fiscal year 1972, leaving an unpaid liability balance of $954 million to be paid for future years. In each subsequent year, additional liabilities were accrued as a result of pay increases in each of those years. Because the liabilities being added each year are also being paid off in 30-year installments, the overall liability for unpaid pension costs has grown dramatically, even though the Service has been making annual payments on the accumulating balance. 
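The 30-year, 5-percent level-payment arrangement described above follows the standard amortization formula. The sketch below is a rough illustration, not OPM's actual methodology; it computes the payment for a liability of exactly $1 billion, so its result differs slightly from the report's $63 million first-year figure, which reflects an "approximately $1 billion" assessment.

```python
def level_payment(liability, rate, years):
    """Equal annual installment that retires `liability` over `years`
    year-end payments at annual interest `rate`:
        P = L * r / (1 - (1 + r) ** -years)
    """
    return liability * rate / (1 - (1 + rate) ** -years)

# A $1 billion assessment amortized over 30 years at 5 percent:
payment = level_payment(1e9, 0.05, 30)
print(f"Annual installment: ${payment / 1e6:.1f} million")  # ~$65.1 million

# The retiree COLA liabilities discussed later in this report use the
# same formula with a 15-year term:
cola_payment = level_payment(1e9, 0.05, 15)
print(f"15-year installment: ${cola_payment / 1e6:.1f} million")
```

The shorter 15-year COLA term yields a larger annual installment per dollar of liability (about $96 million versus $65 million per $1 billion), which is consistent with COLA assessments being retired faster than pay-increase assessments.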
Figure 8 shows the growth in the liability balance attributable to increases in CSRS employee pay. For fiscal year 1972 through fiscal year 1981, the liability balance grew from approximately $1 billion to $10.3 billion. During the next 10 fiscal years (1982 through 1991), the balance grew to $21.8 billion (a 112-percent increase), and in the latest 9 fiscal years (1992 through 2000), it grew to $25.9 billion (a 19-percent increase). As more of the current CSRS employees eligible for basic pay increases retire, future increases in the liability balance should level off and, eventually, the balance due for pay increases should start to decrease. However, this decrease will likely be accompanied by an increase in the growth of retiree COLA liabilities until the CSRS retiree group begins to diminish. In the 29 years from fiscal years 1972 through 2000, the OPM assessments for pay increases totaled $42.1 billion. During that same period, the Service paid $16.2 billion in principal payments toward that liability, plus interest of $21.6 billion, for total payments to OPM of $37.8 billion. As of September 30, 2000, the Service owed $25.9 billion in principal, and its annual principal payments have generally been less than the additional liabilities assessed each year. Because the Service is making principal payments in amounts less than the new liability added each year, the unpaid balance is growing, as is the interest charged annually on the unpaid balance. See appendix II for the annual liabilities added by OPM to the Service’s balance for CSRS pay increases and the annual installment payments made by the Service to OPM to reduce the liability balance. Portion of Liability Attributable to Increases in Retiree COLAs By law, the Service is also liable for its share of the COLAs granted to retirees who retired after July 1, 1971, and their survivors. As prescribed by law, CSRS retirees and survivors receive yearly COLAs. 
Each year, OPM determines the estimated increase in the Service’s liability for the COLA increase and establishes the amount of the installment payments to be made over a 15-year period, plus interest at 5 percent per year. Since fiscal year 1990, the Service has recorded a total liability of $11.1 billion for retiree COLA increases. Of that liability amount, the Service has paid OPM $4.8 billion, plus interest of $2.3 billion, for a total of $7.1 billion. Because the Service is required to pay only a portion of the annual increase in this liability, its payments have generally been less than the additional liabilities added each year, and the balance, as well as the interest on the unpaid balance, continues to grow. Even with projections of low inflation, the Service expects the new annual liability amounts to be larger than its annual payments on the liability because the retiree/survivor population will increase. Thus, the total liability for retiree COLAs is expected to continue to grow over time until most of the CSRS annuitants are deceased. Figure 9 depicts the change in the liability amount. Also, see appendix III for the annual liabilities added by OPM to the Service’s liability balance for COLA increases and the annual installment payments made by the Service to OPM toward reducing the balance. An Additional Retirement Benefit: The Post-Retirement Health Benefit Program Although not part of a retirement plan, the Post-Retirement Health Benefit Program is an additional benefit available to USPS retirees. The post-retirement health benefit represents a significant cost, which is also expected to increase in future years. USPS estimates that the annual cost of this benefit will increase from $744 million in fiscal year 2000 to about $2 billion in fiscal year 2010. 
In the Service’s Integrated Financial Plan for Fiscal Year 2002, health care costs are projected to increase by 10 percent; however, subsequent to that projection, it was reported that premiums for employees would rise an average of 13 percent in fiscal year 2002. These large, unexpected increases in health care costs make projections of future costs very uncertain. USPS is required to pay the employer’s share of health insurance premiums incurred through participation in the Federal Employees Health Benefit Program (FEHBP) for all employees who retired on or after July 1, 1971, and their survivors. The annual cost for this program is included in the Service’s Total Compensation and Benefits expense reported in its annual financial statements and disclosed separately in the footnotes to those statements. When the cost of the post-retirement health benefits of $744 million for fiscal year 2000 is added to the Service’s total retirement costs of $8.5 billion for fiscal year 2000, the total for retirement-related costs becomes $9.3 billion in fiscal year 2000. USPS projects that by fiscal year 2010, the post-retirement health benefit and total retirement costs will increase to $2 billion and $14 billion, respectively, for a total of $16 billion. The Service also projected that these costs, in fiscal years 2001 and 2002, would increase to $9.9 billion and $10.3 billion, increases of $600 million and $400 million over the preceding year, respectively. These increases exert upward pressure on postal rates and constrict cash flows needed for operating purposes. Service Faces Major Challenges Many stakeholders are calling for a structural transformation of the Service because of the major financial, operational, human capital, and market competition challenges confronting it. 
Accordingly, in April 4, 2001, testimony before the House Committee on Government Reform, the Comptroller General announced that we had placed the Service’s transformational efforts and long-term outlook on our high-risk list. This focused needed attention on the challenges facing the Service. The Service responded by establishing a Transformation Plan Task Force on July 25, 2001. The task force will identify options to transform the Service so that it will be able to resolve the many challenges it faces in the future. Questions for Further Consideration and Analysis As the Transformation Plan Task Force examines the impact of these retirement costs and liabilities on the Service’s overall financial condition and future operations, there are key questions that need to be addressed. Once the task force has analyzed these questions in detail, it can weigh various options and their long-term implications for the Service. Some of the specific questions that we see as being important include the following. What is the significance of USPS’ growing retirement-related obligations on various options that will be considered as part of its transformation? How would issues relating to retirement-related obligations be addressed if a specific option were to be chosen, such as transforming USPS into a government corporation or a publicly owned company? What is the impact of USPS retirement-related obligations, including retiree health care costs, on its overall financial condition, equity position, cash flows from operations, and ability to fund capital outlays that depend on positive cash flows? Also, what is the impact of the Service’s retirement-related obligations on the scope and quality of postal services that depend on the use of funds for continued modernization and maintenance of capital assets? What is the potential impact of growing retirement-related expenses on postal rates? 
Could this impact affect USPS’ ability to be successful in a marketplace with increasing competition from electronic alternatives, private delivery companies, and foreign postal administrations? Under current law, is USPS fully covering OPM’s future retirement costs for USPS employees, or is USPS paying more than is needed to cover OPM’s future payments to USPS retirees? Has OPM estimated the amount of future obligations to USPS retirees, and has OPM determined that USPS has contributed a sufficient amount, or, possibly, more than enough, toward plan assets that will pay USPS retirees? Agency Comments USPS provided oral comments that substantially agreed with our report. Matters of emphasis and points of clarity recommended by USPS have been reflected in this report, as appropriate. OPM provided written comments that are reprinted in appendix IV, along with our comments. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, the Postmaster General, the Chairman of the Postal Board of Governors, the Chairman of the Postal Rate Commission, the Chief Financial Officer of the U.S. Postal Service, the Director of the Office of Personnel Management, and interested congressional committees. We will also provide copies to others on request. If you have any questions on this report, please contact me at (202) 512- 2600 or Jeanette M. Franzel, Acting Director, at (202) 512-9471. We can also be reached by e-mail at steinhoffj@gao.gov or franzelj@gao.gov, respectively. Joseph Applebaum, Senior Actuary, Michael Fischetti, Meg Mills, John Sawyer, and Fred Evans were key contributors to this report. 
USPS Total Annual Retirement Plan Costs
FERS (including TSP)
USPS Retirement Liability for Employees’ Pay Increases
USPS Retirement Liability for Cost of Living Adjustments
Comments From the Office of Personnel Management The following are GAO’s comments on the Office of Personnel Management’s letter dated November 23, 2001. GAO Comments 1. The “fine tuning” items were not included in the report because the amounts were paid to OPM by fiscal year 1998 and are no longer outstanding liabilities of the Postal Service. 2. Our report has been revised to reflect that the Board of Actuaries of CSRS determined the 5-percent discount rate used in the amortization tables to calculate payments to be made to OPM. 3. Our Senior Actuary agreed with the OPM comment; therefore, we have deleted the sentence from our report.
This report identifies long-term structural or operational issues that may affect the U.S. Postal Service's (USPS) ability to provide affordable universal postal service on a break-even basis. One key issue is the Service's retirement costs and future liabilities. USPS had a net loss of $199 million in fiscal year 2000 and recently announced a $1.7 billion net loss for fiscal year 2001. The impact of September 11 and the subsequent anthrax mailings on the volume and the cost of future mail service is unclear. USPS' annual retirement plan costs are projected to rise significantly in the next 10 years--from $8.5 billion in fiscal year 2000 to $14 billion in fiscal year 2010. USPS also faces mounting debt because of pay increases resulting from new labor contracts and annual cost-of-living adjustments for retirees. USPS reported an outstanding liability for future retirement benefits of $32.2 billion as of September 2000, and anticipates paying another $16.5 billion in interest on this liability over 30 years. The Post-Retirement Health Benefit Program--an additional benefit available to USPS retirees--cost $744 million in fiscal year 2000. When this benefit is added to the retirement plan costs, it raises total retirement costs for fiscal year 2000 to $9.3 billion. USPS projects that this additional post-retirement health benefit will cost $2 billion in fiscal year 2010, raising the Service's total retirement costs to $16 billion that year.
Background In fiscal year 1980, of the more than 2 million servicemembers on active duty, over 170,000 (8.4 percent) were women. Congressional action and DOD policymaking lifted the prohibition on women serving in combat aviation, aboard combatant vessels, and in ground units (brigade level and above); these changes, together with DOD’s new definition of combat jobs, have opened over 259,000 additional military positions to women servicemembers since April 1993. By December 1995, the number of women serving on active duty had risen to over 191,000 (about 12.8 percent of the approximately 1.5 million servicemembers). At the time of our report, DOD had opened over 80 percent of all positions to all servicemembers, ranging from a low of 62 percent of positions open to women in the Marine Corps to a high of over 99 percent of positions open to women in the Air Force. Section 543 of the Fiscal Year 1994 National Defense Authorization Act required the services to adopt gender-neutral occupational performance standards and defined those as work standards that are common, relevant, and not based on gender. The act also required the services to adopt physical performance standards for any occupation in which DOD determined that strength, endurance, or stamina was essential to the performance of duties. The DOD General Counsel later determined that the services were not required to have physical standards for any occupation but that if such standards did exist they would have to be applied on a gender-neutral basis for any occupation open to both men and women. The services use a variety of pre-enlistment, job classification, and retention screening devices to select qualified candidates for military service. For example, pre-enlistment screens include requirements that recruits score at or above a specified minimum on a cognitive test and be within a certain height or weight range. 
Other standards may be occupation-specific, such as requiring recruits entering electronics occupations to demonstrate aptitude in the field of electronics. Military Services Differ in How They Classify Recruits for Physically Demanding Jobs DOD has left it to the services to determine how to classify servicemembers into physically demanding occupations. The Air Force is the only service that requires recruits to take a strength aptitude test. Each Air Force enlisted occupation is categorized into one of eight strength categories, and recruits’ test scores are used to screen them for their military occupations. The other services permit virtually any recruit to fill nearly all physically demanding occupations provided they meet cognitive, height/weight, and other standards unrelated to strength capacity and restrict women only from occupations closed by combat exclusion policies. In 1976, we recommended that DOD develop standards for measuring recruits’ ability to meet strength, stamina, and operational requirements because we found that some servicemembers were unable to do physically demanding tasks. In response, the Army categorized each enlisted occupational specialty into one of five categories based on physical demand. It required new recruits to take a strength test using the “incremental lifting machine,” a weight-lifting machine developed and used by the Air Force. The Army concluded that although the test helped to better match recruits’ physical capabilities to requirements of physically demanding occupations, it also prevented more women than men from serving in certain occupations. Consequently, test results were used only to counsel applicants about job assignments. The Army discontinued the test in 1990. In the 1970s, the Air Force adopted an earlier version of the test and by 1987 categorized each of its enlisted occupations into one of eight physical demand categories. 
The Air Force currently requires all recruits to take the strength aptitude test at a military entrance processing station. The test requires recruits to lift weights on the incremental lifting machine starting at 40 pounds; the weight is then increased in 10-pound increments until the recruit (1) cannot complete a lift, (2) asks to stop, or (3) lifts 110 pounds (the maximum for any occupation in the Air Force). An Air Force counselor uses the results to match recruits to occupations based on the eight physical demand categories and screens out applicants who the test results indicate would have difficulty performing physically demanding jobs. The Navy considered using a strength test to screen applicants for entry into physically demanding military occupations and concluded that more women than men would have been excluded from such jobs. The Navy concluded, however, that women were already meeting the physical demands of their occupations and, for that reason, did not implement its test or categorize its occupations by physical demand. Similarly, the Marine Corps has not adopted an occupationally based strength test or categorized its occupations by physical demand. The Services Have Little Data to Assess Capability to Perform Physically Demanding Tasks Except for the Army, the services have not collected data on servicemembers’ ability to do physically demanding jobs and have little basis on which to conclude that servicemembers are not having problems. We are concerned that some servicemembers may have difficulty doing some physically demanding tasks based on the results of a limited survey conducted by the Army Research Institute (ARI) and anecdotal information we obtained in interviews with servicemembers. However, given limitations on the ARI survey and our interviews, we were not able to assess the significance of the problem. In 1989, 1994, and 1995, ARI surveyed servicemembers in selected Army occupations. 
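The incremental-lifting-machine scoring protocol described earlier in this section can be sketched as follows. This is a simplified simulation; the function and its inputs are illustrative assumptions, not Air Force procedure.

```python
def strength_test_score(max_capacity_lbs, stop_request_at=None):
    """Simulate the recorded score for the Air Force strength aptitude
    test: lifts start at 40 pounds and increase in 10-pound increments
    until the recruit cannot complete a lift, asks to stop, or lifts
    110 pounds (the maximum for any Air Force occupation).

    `max_capacity_lbs` (the heaviest lift the recruit can complete) and
    `stop_request_at` (a recruit asking to stop at a given weight) are
    hypothetical modeling inputs, not part of the actual protocol.
    """
    score = 0
    weight = 40
    while weight <= 110:
        if stop_request_at is not None and weight > stop_request_at:
            break  # recruit asks to stop
        if weight > max_capacity_lbs:
            break  # recruit cannot complete this lift
        score = weight
        weight += 10
    return score

print(strength_test_score(75))                      # completes 40..70 -> 70
print(strength_test_score(200))                     # capped at 110
print(strength_test_score(95, stop_request_at=60))  # stops early -> 60
```

The cap at 110 pounds is why, as discussed later in this report, only recruits who initially lift 110 pounds can be said with certainty to have been tested to their full potential.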
In 1989, ARI surveyed 21 combat and noncombat occupations and found that 59 to 84 percent of male and female servicemembers in 7 selected noncombat occupations reported no difficulty in lifting objects. In the 1994-95 follow-on survey of 10 of 267 occupations, ARI found that 51 to 79 percent of servicemembers reported no difficulty in lifting objects in some of the same occupations as those examined in the 1989 survey. Because the surveys did not address the significance of the problem and relied on self-reported data, the results must be used with caution. On the other hand, the results suggest that the Army may have servicemembers who have had difficulty doing physical tasks. The other services have not done any systematic assessment of the capability of their personnel to perform the physically demanding aspects of their jobs. According to DOD and Army officials, the services rely upon the absence of complaints filtering up from operational units as an indicator that widespread performance problems do not exist. Supervisory personnel we spoke with, however, indicated that they would work around individual performance capability problems or redistribute tasks and that it was unlikely such information would be channeled to higher levels unless widespread problems were encountered. Our discussions with about 100 Army personnel in 5 occupational specialties (2 of which were used in ARI’s survey) anecdotally supported ARI’s finding that some soldiers were having difficulty completing some physically demanding tasks. In addition, in discussions with over 300 military personnel in the Air Force, the Navy, and the Marine Corps, some individuals stated that at one time or another, they had difficulty with some aspect of their job. Given the limited number of personnel we interviewed and the limited number of military specialties we reviewed, we were unable to determine whether such problems were widespread. 
All four services told us that they have the capability and infrastructure already in place to collect data on physical demands of occupations at little or no additional cost. Each of the services has ongoing processes through which they can identify occupational tasks in each specialty in order to revise training curriculums and which they use for other reasons. However, the services do not collect data on the physical demands of jobs with these processes. Surveys, identification of physically demanding tasks, or other data collection efforts could serve as a first step in identifying occupations in which servicemembers have difficulty and that are candidates for reengineering to reduce the physical demands placed on servicemembers. For example, the Army Research Laboratory has a pilot reengineering project underway that attempts to identify opportunities to reengineer selected occupations to reduce the physical demands and enhance job sustainment, safety, and personnel utilization. In addition, the Air Force has a number of reengineering studies underway. Systematic data collection on physically demanding tasks could be used to develop occupation-specific physical strength training. For example, the Army’s Training and Doctrine Command (TRADOC) has commissioned the Army’s Research Institute of Environmental Medicine to develop a database of physically demanding tasks in Army occupations. TRADOC is considering using the database to establish specific physical strength training to help servicemembers meet the physical demands of their jobs. According to DOD, current training consists of classroom training that tends to be less physically oriented than on-the-job training. Once in their duty assignments, servicemembers continue their on-the-job training. According to DOD, training standards are based on tasks, duties, and knowledge required to perform in an occupation and men and women are held to the same standards. 
The Air Force Strength Aptitude Test Program May Not Be Valid The Air Force is the only service that uses strength aptitude testing as a prerequisite for entry into specific military occupations. Air Force recruits take the Armed Services Vocational Aptitude Battery (ASVAB) and must pass a physical given at a military entrance processing station. If they pass the physical, recruits then take the strength aptitude test, and their scores are recorded in their medical records. Finally, recruits meet with an Air Force counselor who matches them to a military occupation based on the ASVAB and strength aptitude test scores, their interests, and the needs of the Air Force. However, Army, Navy, and independent research raises questions about the predictive validity of the test currently used by the Air Force, and we found several problems with implementation of the Air Force testing program. Research Questions the Validity of Test Results Obtained With the Incremental Lifting Machine Since 1982, at least nine studies have been published or presented that raise questions about the validity of the incremental lifting machine test as a predictor of performance in military occupations, particularly if the test is relied upon as the sole measure of predicted performance. A 1982 study sponsored by the Air Force reported that the incremental lifting machine was the best single predictor of task performance. This result, however, rested on a transformation of the combined male and female scores that minimized the differences in those scores but gave the appearance of improving the predictive power of the incremental lifting machine beyond the experimental results. A 1984 study done for the Army found that the incremental lifting machine was a good predictor of a set of Army simulated occupational tasks, accounting for 67 percent of the variation in scores on the tasks. 
However, the study misstated the relationship because it combined significantly different male and female lifting scores to determine the predictive power of the incremental lifting machine scores. When we examined the reported scores by gender, the correlation of the incremental lifting machine with each simulated task was considerably lower for male and female scores than reported for the aggregated score. A 1985 Navy study stated that combining male and female incremental lifting machine scores would involve making an assumption that male and female scores are evenly distributed throughout the entire group, a tenuous assumption according to the text. By using separate male and female scores, the study compared 7 strength test measures, including 3 different incremental lifting machine lifts, with 19 shipboard tasks and concluded that “some of the best correlates of shipboard performance are the armpull, ergometer, and body weight,” which are 3 nonincremental lifting machine measures. A 1985 study conducted by the Army’s Research Institute of Environmental Medicine found that women tended to be shorter than men and thus were required to spend relatively more time lifting with their upper body than males and consequently scored lower in tests using the incremental lifting machine (given that women tend to have less upper body strength than men, according to this and other research). On the other hand, the study found that an alternate strength test that focused more on the use of the lower body produced female scores that were closer to those of males in the study population. An ARI study in 1993 concluded that variables such as job performance and the Army’s physical readiness test were not strongly related to scores on the incremental lifting machine. 
According to the 1993 ARI study, the Army should not place great confidence in the use of a single lifting test as a selection measure of physical fitness and should consider a more comprehensive approach to physical screening. A Canadian research team has produced four sequential studies since 1990 and concluded that gender differences in incremental lifting machine scores and box-lifting tasks were heightened by an incremental lifting machine test protocol prohibiting subjects from moving their feet or shifting their weight to achieve a more comfortable lifting posture. When subjects were allowed to lift using their most comfortable method, they could lift heavier boxes. For female subjects, the incremental lifting machine score became less related to their box-lifting scores as the constraints were relaxed. Recruits Are Not Tested to Their Full Potential Many recruits who took the strength aptitude test at a military entrance processing station scored lower during their initial tests than they did when retested during basic military training. For example, our analysis of data provided by the Air Force showed that between December 1995 and February 1996, 244 females retested during the second week of basic military training lifted an average of nearly 18 pounds more than they did initially; the 211 males’ average increase was nearly 15 pounds. Of the 455 recruits who were retested, 10 lifted less than they had initially, 3 lifted 10 percent more, and the rest lifted from about 11 percent to 120 percent more, averaging 23.3 percent more. By comparison, a study conducted by the Air Force concluded that servicemembers who engaged in physical training programs of about 9 weeks increased arm strength by just 6 percent. According to Air Force officials, nearly all of another approximately 3,900 recruits retested at basic military training between April 1994 and November 1995 also scored higher, although individual scores were not readily available.
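The averages reported above can be combined with quick arithmetic. This is only a back-of-envelope check using the rounded figures of roughly 18 and 15 pounds, not a recomputation of the underlying Air Force data.

```python
# Weighted average of the retest improvements, using the rounded
# figures reported in the text (244 females at ~18 lb, 211 males at ~15 lb).
females, female_gain = 244, 18
males, male_gain = 211, 15

overall_gain = (females * female_gain + males * male_gain) / (females + males)
print(f"average retest improvement: {overall_gain:.1f} lb across {females + males} recruits")
# → average retest improvement: 16.6 lb across 455 recruits
```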
According to the researcher who oversees the strength aptitude program, some increases in test scores are attributable to increased motivation on the part of the recruit at basic training or to permitting recruits to adopt a lifting technique not in accordance with the test protocol. However, the researcher concluded that neither increased motivation nor a change in the test protocol can explain the magnitude of the increases we found. Because nearly all of those who retook the strength aptitude test scored higher, we question the validity of the scores of recruits who were not given an opportunity to retake the test. Except for recruits who initially lifted 110 pounds (the maximum weight requirement for any Air Force occupation), the Air Force cannot ensure that recruits have been tested to their full potential. Physical Strength Standards for Most Air Force Specialties Have Not Been Updated According to the researcher who oversees the strength aptitude program, occupational specialty strength standards must be kept current to maintain the program’s validity. However, since 1986, the Air Force has updated the strength standards for only 12 specialties, and 16 more were being resurveyed at the time of our report. For the remaining Air Force specialties, strength standards are based on data gathered between 1978 and 1982. According to the researcher, unless something in the job changes, the strength standard is still current. We were unable to evaluate whether changes may have been made in any of the remaining 227 Air Force specialties because the original data is stored on computer tape in a format not readable by computers now in use in the Air Force. We were told that a contractor might be able to convert the data to a readable form, but the task could be costly and potentially time-consuming.
According to a 1995 Air Force Armstrong Aerospace Medical Research Laboratory memorandum, the strength requirement should be resurveyed whenever two or more occupations with different strength standards are merged. However, since October 1993, the Air Force has merged or split 11 occupations with differing strength categories. In addition, the researcher who oversees the strength aptitude program has identified another 11 specialties that also need to be resurveyed. As a result, the Air Force has not determined the current strength requirements for 22 merged, split, or changed occupations. The Air Force runs the risk of denying servicemembers entry into occupations based on invalid or outdated strength requirements in those merged occupations that have not been resurveyed. Recommendations Because the services have little systematically collected data on the ability of servicemembers to meet the physical demands of occupational tasks, we recommend that the Secretary of Defense require the services to assess whether a significant problem exists in physically demanding occupations and identify solutions, if needed. Such solutions could include redesigning job tasks to reduce the physical demands, providing additional training, or establishing valid performance standards to enhance job sustainment, safety, and personnel utilization. Given the questions concerning the validity of the strength aptitude test and the implementation problems we found, we recommend that the Secretary of the Air Force reassess the use of the strength aptitude test as a means of predicting future performance in physically demanding occupations. Agency Comments and Our Evaluation In commenting on a draft of this report, DOD generally concurred with our findings and recommendations.
In response to our first recommendation, DOD stated that it will direct the services to (1) collect data systematically on job performance difficulties and (2) focus on physically demanding occupations with a history of strength-related injuries and occupations recently opened to women. We are concerned, however, that such a narrow focus will not identify all occupations where problems exist. First, because supervisory personnel told us they may assign persons having difficulty to lighter tasks, occupations where servicemembers are having difficulty will not necessarily show a higher incidence of strength-related injuries. Working around a problem may prevent injuries, thus limiting the usefulness of medical data for DOD’s purpose. Second, if DOD focuses only on occupations recently opened to women, it may overlook strength-related performance problems in occupations open only to men. DOD needs to review all physically demanding occupations and use appropriate data in its study. In its response to our second recommendation, DOD stated that it will (1) make every effort to comply with generally accepted professional standards for test development and implementation and (2) direct the Air Force to continue its “periodic validation efforts.” However, while the Air Force may have attempted to validate the strength aptitude test periodically, our review did not disclose any study demonstrating that the incremental lifting machine test had predictive validity. DOD’s comments are reprinted in appendix I. DOD also provided several technical corrections that we have incorporated into the text of our report as appropriate. Scope and Methodology We reviewed DOD’s 1995 report to Congress on gender-neutral performance standards; service orders, regulations, and manuals; and research studies undertaken within the services and by independent researchers.
We interviewed officials and obtained documents from the Office of the Secretary of Defense (Accessions Policy) and met with officials from the Defense Advisory Committee on Women in the Services in Washington, D.C. To complete our work with the Army, we interviewed officials and obtained documents from the Office of the Assistant Secretary of the Army (Manpower and Reserve Affairs), Office of the Deputy Chief of Staff for Personnel, Personnel Command, Training and Doctrine Command, Combined Arms Support Command, Army Transportation Center, Army Research Institute, Army Research Laboratory, and Army Research Institute of Environmental Medicine. To complete our review of the Navy, we met with officials from the Office of the Assistant Secretary of the Navy (Manpower and Reserve Affairs); Bureau of Naval Personnel, including the Special Assistant for Women’s Policy; the Commander in Chief, Atlantic Fleet; the Naval Manpower Analysis Center; and the Center for Naval Education and Training. To complete our Air Force work, we met with officials of the Headquarters of the Air Force (Directorate of Military Personnel Policy), the Air Force Personnel Center, the Air Force Recruiting Service, the Air Force Education and Training Command, the Armstrong Aerospace Medical Research Laboratory, the Occupational Measurements Squadron, and the Military Entrance Processing Command, which administers the strength aptitude test for the Air Force. We also interviewed officials and observed Air Force recruits taking the strength aptitude test at the Military Entrance Processing Stations in Baltimore, Maryland, and Richmond, Virginia. To complete our Marine Corps work, we met with officials of the Office of Accessions Policy and Combat Development Command. 
To assess whether the services have a system for identifying demanding tasks that exceed servicemembers’ physical capabilities and for identifying difficult tasks, we observed activities and met with over 400 service personnel employed as instructors, students, operational unit commanders, and enlisted personnel at Forts Eustis and Lee in Virginia; Fort Bragg, Marine Corps Air Station Cherry Point, Camp Lejeune, and Seymour Johnson Air Force Base in North Carolina; Lackland Air Force Base in Texas; Fort Leonard Wood in Missouri; Naval Air Station Memphis in Tennessee; and aboard the aircraft carrier USS John C. Stennis. As agreed with your office, we concentrated on the occupational areas of bridge engineer, food service specialist, aviation ordnance, and motor transport. We conducted our work from November 1995 to June 1996 in accordance with generally accepted government auditing standards. We will send copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Executive Director, Defense Advisory Committee on Women in the Services; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix II. If you have any questions about this report, please contact me on (202) 512-5140. Comments From the Department of Defense Major Contributors to This Report National Security and International Affairs Division, Washington, D.C. Sharon A. Cekala William E. Beusse Brian J. Lepore Martin E. Scire Arthur L. James, Jr. Office of the General Counsel, Washington, D.C. Lawrence E. Dixon Janine M. Cantin Sharon L. Reid Paul A. Gvoth, Jr.
Pursuant to a congressional request, GAO reviewed the use and development of gender-neutral occupational performance standards in the military services, focusing on how the services implement and evaluate standards. GAO found that: (1) each service takes a different approach to screening members' physical fitness; (2) the Air Force is the only service that requires new recruits to take a strength aptitude test; (3) the Air Force uses the results to qualify individuals for their military occupations; (4) the services believe that their approaches to assigning members to physically demanding tasks are appropriate, because they receive few complaints from members about such tasks; (5) the services have little data to assess a member's capability to perform tasks; (6) the Army has systematically collected physical performance data since 1989; (7) the data show that at least 84 percent of the Army members had no problems in completing their tasks; (8) a 1994-1995 survey determined that 51 to 79 percent of members have no problem in completing physically demanding tasks; and (9) the validity of the Air Force's strength aptitude test is questionable because of concerns about the administration, accuracy, and relevance of the test's physical requirements.
DOD’s Approach to Managing Follow-on Modernization May Hinder Transparency and Oversight The F-35 program has begun planning and funding the development of new capabilities, known as follow-on modernization, but our ongoing work indicates that DOD’s current plan for managing the development of these new capabilities may limit transparency and oversight. The current F-35 development program is projected to end in 2017, when Block 3F developmental flight testing is complete, with a total development cost of $55 billion. The first increment of follow-on modernization, known as Block 4, is expected to add new capabilities and correct deficiencies, including 9 capabilities carried over from the current development program, such as the prognostics health management system down-link and communication capabilities. Although the requirements are not yet final and no official cost estimate has been developed for Block 4, DOD’s fiscal year 2017 budget request indicates that the department expects to spend nearly $3 billion on these development efforts over the next 6 years (see figure 1). Our preliminary analysis indicates that F-35 Block 4 development costs of this magnitude would exceed the statutory and regulatory thresholds for what constitutes a major defense acquisition program (MDAP), and that Block 4 would be larger than many of the MDAPs in DOD’s current portfolio. However, in August 2015, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued an Acquisition Decision Memorandum directing the F-35 program office to manage Block 4 development under the existing F-35 acquisition program baseline and not as a separate incremental acquisition program. As a result, DOD will not hold a Milestone B review—the decision point at which program officials would present a business case in order to initiate system development.
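The cost-magnitude comparison above can be sketched in a few lines of arithmetic. The MDAP threshold used here is an assumption for illustration (the commonly cited figure of $480 million in research, development, test, and evaluation funding, in fiscal year 2014 constant dollars, under 10 U.S.C. 2430); the report itself does not state the threshold value.

```python
# Hedged sketch: compare projected Block 4 development funding with an
# ASSUMED MDAP designation threshold. The threshold figure is an
# assumption for illustration, not taken from the report.
ASSUMED_MDAP_RDTE_THRESHOLD = 480e6  # assumed: >$480M RDT&E (FY2014 dollars)
block4_development = 3e9             # ~$3 billion over 6 years, per the FY2017 request

ratio = block4_development / ASSUMED_MDAP_RDTE_THRESHOLD
print(f"Block 4 funding is roughly {ratio:.2f}x the assumed MDAP RDT&E threshold")
```

Under that assumed threshold, the projected Block 4 funding would exceed the MDAP cutoff several times over, which is the basis for the observation that Block 4 would qualify as a major defense acquisition program in its own right.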
A Milestone B review would also set in motion oversight mechanisms including an acquisition program baseline; Nunn-McCurdy unit cost growth thresholds; and periodic reporting of the program’s cost, schedule, and performance progress. These mechanisms form the basic business case and oversight framework to ensure that a program is executable and that Congress and DOD decision makers are informed about the program’s progress. Best practices recommend an incremental approach in which new development efforts are structured and managed as separate acquisition programs and in which a business case matches requirements with resources—proven technologies, sufficient engineering capabilities, time, and funding—before a new product development is undertaken. Because DOD does not yet have approved requirements and is not planning to hold a Milestone B review, its approach for Block 4 modernization will not require the program to have such important cost, schedule, and performance reporting and oversight mechanisms in place. Based on our ongoing work, we have concerns about DOD’s approach to Block 4 that are partly rooted in our assessment of a similar case with the F-22 modernization program. In March 2005, we found that the Air Force was managing its multi-billion dollar F-22 modernization efforts as part of the program’s existing acquisition baseline and had not established a separate knowledge-based business case. As a result, the F-22 baseline and schedule were adjusted to reflect the new timeframes and additional costs, comingling the funding and some content of the baseline development and modernization efforts—some content that had not been achieved under the baseline program was deferred into the modernization program. When the content, scope, and phasing of modernization capabilities changed over time, it appeared that the F-22 program was fraught with new schedule delays and further cost overruns.
The comingling of modernization efforts with the existing baseline reduced transparency, and Congress could not distinguish the new costs associated with modernization funding from cost growth in the original baseline. We recommended that the Air Force structure and manage F-22 modernization as a separate acquisition program with its own business case—matching requirements with resources—and acquisition program baseline. Eventually, the department separated the F-22 modernization program from the baseline program with a Milestone B review, in line with our recommendation, which increased transparency and better facilitated oversight. The department has the opportunity to apply similar lessons learned to the F-35 Block 4 program. Program Continues to Face Affordability Challenges Although the F-35 program’s estimated total acquisition costs have decreased since 2014, the program continues to face affordability challenges. As of March 2016, DOD’s estimated total acquisition cost for the F-35 program is $379 billion, or $12.1 billion less than it reported in 2014. The program will require an average of $12.7 billion per year to complete the procurement of aircraft through 2038 (see figure 2). The program expects to reach peak production rates for U.S. aircraft in 2022, at which point DOD expects to spend more than $14 billion a year on average for a decade. At the same time, DOD will be operating and sustaining an increasing number of fielded F-35 aircraft. DOD officials we spoke with for our September 2014 report stated that the current F-35 sustainment strategy, with cost estimates around $1 trillion, is not affordable. When acquisition and sustainment funds are combined, annual funding requirements could easily approach $30 billion in some years.
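The order of magnitude of that combined figure can be checked with rough arithmetic. The sustainment horizon below is an assumption for illustration; the report gives only the roughly $1 trillion life-cycle total, not the span of years over which it would be spent.

```python
# Rough, illustrative arithmetic only; the 50-year horizon is an assumption.
peak_procurement_per_year = 14e9   # ~$14 billion/yr at peak production
sustainment_total = 1e12           # ~$1 trillion life-cycle sustainment estimate
assumed_sustainment_years = 50     # assumed operational horizon (illustrative)

combined = peak_procurement_per_year + sustainment_total / assumed_sustainment_years
print(f"combined annual need: ~${combined / 1e9:.0f} billion")
# → combined annual need: ~$34 billion
```

Even with a generous 50-year spread of sustainment costs, the combined annual requirement lands in the neighborhood of the $30 billion figure cited above.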
Our preliminary results indicate that affordability challenges will compound as the program competes with other large acquisition programs, including the long range strike bomber, the KC-46A Tanker, the Ohio Class Submarine Replacement, and the DDG-51 Class Destroyer. In recent years, affordability challenges, in part, have forced the Air Force to defer F-35 aircraft procurements to later years. Since 2014, the Air Force has deferred 45 aircraft planned for 2017 through 2021 to later years. This will likely require the Air Force to make unplanned investments in extending the service life of its current fighter aircraft. The cost of extending the lives of current fighter aircraft and acquiring other major weapon systems, while continuing to produce and field new F-35 aircraft, poses significant affordability risks in a period of austere defense budgets. Developmental Flight Testing Is Nearing Completion with Challenging Mission Systems Software Testing Remaining The F-35 program is nearing the completion of the initial developmental test program with about 20 percent of its flight sciences and mission systems testing remaining; however, our ongoing work indicates that the remaining testing is likely to be challenging, as it will require complex missions and stressing environments. Developmental flight testing is separated into two key areas referred to as flight sciences and mission systems. Developmental flight science testing is done to verify the aircraft’s basic flying capabilities, while mission systems testing is done to verify that the software and systems that provide warfighting capabilities function properly and meet requirements. The F-35 program is nearing the completion of developmental flight testing with only 20 percent of its total planned test points remaining.
Before completing the remaining high speed and high altitude flight science testing, Lockheed Martin officials noted that they will incorporate a pressure relief valve into the aircraft’s fuel system to allow the aircraft to fly at altitudes and speeds that are currently restricted due to fuel pressure concerns. As we have reported in the past, DOD is developing, testing, and fielding mission systems capabilities in software blocks (see figure 3). The full warfighting capability for the F-35 is to be attained with the completion of Block 3F, the final software block in the current development program. As indicated by the percent of test points completed, all of the blocks leading up to 3F have been completed, although they experienced delays in getting to this point. Block 3F has completed 18 percent of its test points. Our preliminary findings show that the program completed all of the mission systems software testing planned in 2015, but completion of Block 3F testing could be challenging given the complexity of the missions and the stressing environments that remain to be tested. Program officials believe that the completion of 3F developmental testing could be delayed by about 2-3 months. As of December 2015, our preliminary analysis of program data indicated that Block 3F testing could be delayed by as much as 6 months if the program performs at the same rate it has in the past and is executed according to the current plan with no additional test point growth. Delays could be exacerbated by the current mission system software stability issues and large number of remaining weapon delivery accuracy events that must take place. Our preliminary work indicates that in 2015 program officials continued to address many of the key technical risks that we have highlighted in the past—including an engine seal and the helmet mounted display—and they identified some new risks. 
Problems with the engine seal were addressed through a design change that was incorporated into production, and as of September 2015, 69 of 180 engines had undergone retrofits. A new helmet—known as the Gen III helmet—that is intended to address shortfalls in night vision capability, among others, was developed and delivered to the program in 2015. Developmental testing of the new helmet is mostly complete, with final verification testing planned in 2016. The program also identified new risks with the ejection seat and cracking in the F-35C wing structure. Program officials discovered that pilots weighing less than 136 pounds could possibly suffer neck injuries during ejection. Officials noted that although the problem was discovered during testing of the new helmet, the helmet’s weight was not the root cause. The program is exploring a number of possible solutions to ensure pilot safety. In addition, program officials discovered cracking in the wing structure of the F-35C structural test aircraft during durability testing. Structural testing was halted for about 3 months, and Lockheed Martin officials we spoke with stated that a long-term fix had not been identified. Although improvements have been made, ALIS continues to pose technical risks. Recognizing that a fully functional ALIS is critical to the program’s overall success, in October 2015 the F-35 program executive officer testified before Congress that ALIS is one of the most significant technical and schedule risks to the program. ALIS is a complex system of systems that supports operations, mission planning, supply-chain management, maintenance, and other processes. In the past, we have reported that ALIS software has not been delivered on time and has not functioned as expected when delivered. In addition to continuing software problems, our ongoing work indicates that the F-35 program faces other key challenges related to ALIS.
For example, some equipment management data is inaccurate or incomplete, and engine health information is not included in the current version of ALIS. In addition, the system may not be deployable and does not have a backup in case the hardware fails. Ongoing Manufacturing and Reliability Progress Continue Our ongoing work has shown that the F-35 airframe and engine contractors continue to report improved efficiency and supply chain performance, and program data indicate that reliability and maintainability are also improving. Since 2011, a total of 154 aircraft have been delivered to DOD and the international partners, 45 of which were delivered in 2015. As Lockheed Martin continues to deliver more aircraft, the number of hours needed to manufacture each aircraft continues to decline. Although prior to 2015 Lockheed Martin had delivered only one aircraft on or ahead of its contracted delivery date, the contractor has been making progress, and in 2015 it delivered 15 of the 45 aircraft on time or early. Other manufacturing data are also trending in a positive direction. For example, scrap, rework, and repair hours, and time spent on work conducted out of sequence, continue to decrease. Although it has improved, Lockheed Martin’s supply chain continues to deliver parts late to production, resulting in inefficiencies and requiring workarounds. Engine manufacturing deliveries remain steady, and 218 engines have been delivered to date. The labor hours required for assembling engines have remained steady, and very little additional efficiency is expected. As a result, Pratt & Whitney is looking for additional ways to save cost. Scrap, rework, and repair costs have remained steady over the last year, and the number of engineering design changes is relatively low and continues to decrease. Pratt & Whitney is conducting production reviews of its supply chain and is managing supplier quality initiatives to address shortfalls, according to officials.
Our ongoing work shows that although the program has made progress in improving some reliability and maintainability measures, it continues to fall short in others, as shown in figure 4. While the metrics in most areas were trending in the right direction, the F-35 program office’s own assessment indicated that as of August 2015 the F-35 fleet was falling short of reliability and maintainability expectations in 9 of 19 areas. The program has time to improve: as of August 2015, the F-35 fleet had flown a cumulative total of only 35,940 hours of the 200,000 cumulative flight hours required for system maturity. Similarly, although engine reliability improved significantly in 2015, the engine was still not performing at expected levels. In 2014, Pratt & Whitney data indicated that engine reliability—measured as mean flight hours between failure (design controllable)—was very poor, and we reported in April 2015 that the engine would likely require additional design changes and retrofits. While Pratt & Whitney has implemented a number of design changes that have resulted in significant reliability improvements, the F-35A and F-35B engines are still at only about 55 percent and 63 percent, respectively, of where the program expected them to be at this point. Program and contractor officials continue to identify ways to further improve engine reliability. In conclusion, our preliminary results indicate that, although the F-35 development program is nearing completion, the program is not without risks. The remaining significant and complex 3F mission systems software developmental testing, continuing issues with ALIS, and new issues with the ejection seat and F-35C wing structure pose ongoing risks. Going forward, the program will likely continue to experience affordability and oversight challenges. DOD expects that beginning in 2022 it will need more than $14 billion a year on average for a decade to procure aircraft.
It is unlikely that the program will be able to receive and sustain such a high level of funding over this extended period, especially given DOD’s competing demands such as the long range strike bomber and KC-46A tanker. DOD’s plan to manage Block 4 under the current acquisition program baseline presents oversight challenges because key reporting requirements and oversight mechanisms will not be initiated; therefore, the two efforts will be comingled. Without setting up the modernization as a separate program with its own baseline and regular reporting, as best practices recommend, it will be difficult for Congress to hold DOD accountable for achieving F-35 Block 4 cost, schedule, and performance goals. It also makes it easier to re-categorize work planned for the baseline program as modernization. In light of our ongoing work, we are not making any recommendations to DOD at this time. We plan to issue our final report in April 2016. Chairman Turner, Ranking Member Sanchez, and members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have. We look forward to continuing to work with the Congress as we continue to monitor and report on the progress of the F-35 program. GAO Contact and Staff Acknowledgments For further information on this statement, please contact Michael Sullivan at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement are Travis Masters, Peter Anderson, Jillena Roberts, and Megan Setser. Appendix I: Changes in Reported F-35 Joint Strike Fighter Cost, Quantity, and Deliveries, 2001-2015 Annual projected cost estimates expressed in then-year dollars reflect inflation assumptions made by a program. This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
With estimated acquisition costs of nearly $400 billion, the F-35 Joint Strike Fighter—also known as the Lightning II—is DOD's most costly acquisition program. Since 2001, GAO has reported extensively on the F-35 program's cost, schedule, and performance problems. The program plans to begin increasing production rates over the next few years. The National Defense Authorization Act for Fiscal Year 2015 contains a provision for GAO to annually review the F-35 acquisition program. Today's testimony is based on ongoing work for the first report under this mandate, which GAO expects to issue in April 2016. This testimony focuses on GAO's preliminary observations regarding the F-35 program's (1) future modernization and (2) affordability, remaining development, and ongoing manufacturing plans. GAO analyzed program documentation including management reports, test data and results, and internal DOD program analyses. GAO collected data on F-35 development and test progress, and analyzed total program funding requirements. GAO also collected and analyzed production and supply chain performance data, and interviewed DOD, program, and contractor officials. In light of its ongoing work, GAO is not making any recommendations at this time. GAO's ongoing work on the F-35 Joint Strike Fighter (F-35) program shows that the Department of Defense (DOD) has begun planning and funding significant new development work to add to the F-35's capabilities, an effort known as Block 4. The funding needed for this effort is projected to be nearly $3 billion over the next 6 years (see figure below), which would qualify it as a major defense acquisition program in its own right. DOD does not currently plan to manage Block 4 as a separate program with its own acquisition program baseline but rather as part of the existing baseline.
As a result, Block 4 will not be subject to key statutory and regulatory oversight requirements, such as providing Congress with regular, formal reports on program cost and schedule performance. A similar approach was initially followed on the F-22 Raptor modernization program, in which the funding and content were commingled, making it difficult to separate the performance and cost of the modernization from the baseline program. Best practices recommend an incremental approach in which new development efforts are structured and managed as separate acquisition programs with their own requirements and acquisition program baselines. The F-22 eventually adopted such an approach. If the Block 4 effort is not established as a separate acquisition program, the costs, schedules, and scope of the baseline and modernization efforts will be commingled. Therefore, it will be difficult for Congress to hold DOD accountable for achieving its cost, schedule, and performance requirements. GAO's ongoing work indicates that although the F-35 total program acquisition costs have decreased since 2014, the program continues to face significant affordability challenges. DOD plans to begin increasing production and expects to spend more than $14 billion annually for nearly a decade on procurement of F-35 aircraft. Currently, the program has around 20 percent of development testing remaining, including complex mission systems software testing, which will be challenging. Program officials continued to address many of the key technical risks, but the Autonomic Logistics Information System continues to be a challenge. At the same time, the contractors that build the F-35 airframes and engines continue to report improved manufacturing efficiency and supply chain performance.
Background In response to public concerns about safety and maintenance, the District of Columbia undertook a major effort to renovate and modernize its public schools in 1998. It has budgeted $1.3 billion for the renovations from fiscal year 1998 through 2007. The school system began this effort by entering into a Memorandum of Agreement with the Army Corps of Engineers. The agreement was for engineering, procurement, and technical assistance to ensure that construction contracts were awarded and managed so that schools could open in the fall of that year. In the Fiscal Year 1999 District of Columbia Appropriations Act, Congress authorized the Corps of Engineers to provide the school system with engineering, construction, and related services. Until fiscal year 2001, the District of Columbia’s Office of Contracting and Procurement was the central authority for procurements made by the various city agencies, including the school system. In October 2000, the school system obtained its own procurement authority. It assumed responsibility for about a third of the school renovation projects on the fiscal year 2001 capital projects list, while the Corps of Engineers was responsible for the remainder. To obtain renovation services for the repairs under its purview, the school system has almost exclusively used a GSA areawide public utility contract with Washington Gas for gas, gas transportation, and energy management services. GSA entered into the contract with Washington Gas, a regulated public utility, without competition because the company has an exclusive franchise by law to provide certain utility services in its service area. 
The “energy management” services available from the contract could, if the contractor has these services on file with the Public Service Commission, include services intended to provide energy savings, efficiency improvements, energy audits, conservation measures such as lighting control and boiler control improvements, and water conservation device installation. Contract Was Improperly Used and Precluded Competition The school system improperly used the gas utility contract as a vehicle for obtaining a broad range of facility improvements and maintenance work. In doing so, it precluded competition by awarding all of the work on a sole-source basis to Washington Gas as the prime contractor. The types of services provided under the Washington Gas contract could have been performed by licensed plumbing, heating, electrical, or general contractor firms. The GSA contract with Washington Gas is limited to the provision of regulated gas utility and energy management services. In contrast, our analysis of completed projects from August 2000 through March 2001 shows that the school system has paid Washington Gas $25 million for a range of projects, including painting, carpeting, and electrical work; boiler, air conditioning, heating, and structural repairs; bathroom, auditorium, and swimming pool renovations; and flag pole refurbishments. Figure 1 shows the major categories of services. Based on our reading of the contract, the governing regulation, and discussions with GSA and the District of Columbia Public Service Commission, we do not believe these services were within the scope of the GSA contract because none of the work or services were regulated utility services or otherwise on file with the Public Service Commission. Appendix V contains details on our analysis. The school system first started using the GSA areawide contract with Washington Gas in the last months of calendar year 1997 to provide emergency boiler repairs and temporary boiler rentals. 
These services were outside the scope of utility services described in the contract. In 2000 and 2001, the range of services expanded to include many other types of projects ordered by the school system, all of which were also outside the contract's scope. Washington Gas marketed its project management services to the school system and performed these services without regard to the scope of its contract with GSA. The school system's chief contracting officer explained to us that the Washington Gas contract was used because it was an existing source of supply that could be quickly implemented to keep the schools open. In contrast, according to the contracting officer, the typical lengthy procurement process using solicited competitive bids would have prevented the timely acquisition of the needed work and services. The contracting officer considered the GSA contract to be available for use because the type of work was, in the contracting officer's view, energy-related. Other contracting options were available to the school system. For example, the school system could have used the Army Corps of Engineers to perform school renovations. The Corps carried out most of the renovation work for the school system from fiscal years 1999 through 2001. Further, the Corps had alternative contract vehicles for which it had well-defined statements of work, independent cost estimates, and negotiated contractor fees. The then-Chief Facilities Officer informed us that he was reluctant to give additional work to the Corps, however, because he believed the Corps' processes and procedures were too slow given the crisis atmosphere and pressure from the community to carry out renovations quickly. 
GSA Concerns With School System’s Use of the Contract After we raised questions about the school system’s use of the Washington Gas contract, the GSA contracting officer responsible for the contract sent a May 2001 letter to the school system’s contracting officer stating that (1) the contract is not intended to provide general facility improvements and maintenance that are not energy-related and (2) continued use of the contract for services outside the scope and intent would jeopardize the school system’s ability to continue using the contract. The GSA contracting officer was unaware of the scope of services for which the school system was contracting with Washington Gas because neither the school system nor Washington Gas had reported use of the contract to GSA as required. An Assistant General Counsel at the General Services Administration also told us that many of the construction services provided by Washington Gas to the school system clearly fell outside the scope of the areawide contract because these services dealt with general construction as opposed to gas, gas transportation, or energy management services. Further, the Counsel explained that energy management services must result in documented energy cost savings or a reduction in energy usage. To ensure that the contract is properly used in the future, the GSA contracting officer referred the school system’s contracting officer to GSA guidance on areawide utility contracts. However, we believe this guidance is insufficient and unclear. For example, the guidance could be interpreted as allowing the school system to order any service Washington Gas, or more precisely, its subsidiaries and subcontractors, might have to offer. It also lists many energy management projects that are not regulated utility services that could be provided under areawide contracts, such as window and air conditioning replacements. 
We are sending a separate letter to GSA detailing our concerns with its guidance and providing recommendations on improving oversight and guidance on areawide utility contracts. Sound Practices Not Followed in Using the Washington Gas Contract Our review also revealed serious breakdowns in internal controls and gross shortcomings in the way the work under the gas utility contract was ordered and handled. The school system failed to adhere to review and oversight requirements. School system personnel inappropriately chose a select group of subcontractors to perform the work. The school system did not take steps to obtain fair and reasonable prices and failed to perform adequate and effective contract administration. The absence of effective controls and oversight has put the $32.9 million already spent on the renovation work—as well as some $10.2 million in outstanding orders—at considerable risk of improper billing, poor-quality work, and high prices. Management and Oversight Practices Fundamental to Successful Contracting Establishing and following strong management and oversight practices is critical to successful contracting efforts. To ensure that they get the best deal possible, agencies should fully consider risks as well as alternative solutions. They generally should compete the work they want done. In noncompetitive contracting situations, particular care needs to be taken to ensure fair and reasonable prices. Moreover, once a contract is awarded, agencies need to take steps to effectively oversee their contractors. For example, they should have effective plans for assuring the quality of the work performed by the contractor. When these controls are not in place, agencies assume undue risk and could end up paying more than they should. The District of Columbia has controls in place to ensure that it obtains fair and reasonable prices and to provide contract oversight. 
For example, District agencies are required to perform procurement planning and conduct market surveys to promote and provide for competition for supplies and services. In addition, until fiscal year 2001, the District of Columbia’s Office of Contracting and Procurement was required to review contracting actions (including the school system’s) totaling $50,000 or more. Currently, the school system’s Office of General Counsel is required to review contract actions of $25,000 or more. Among other things, these reviews require evidence that independent cost estimates have been performed, work has been competed or justified as a sole-source procurement, and a legal review has been performed. School System Did Not Adhere to Review and Oversight Requirements The school system did not adhere to a number of oversight requirements in carrying out the renovation work. Such requirements are in place to ensure that the District obtains the best price and service, contracts are legally sound, payments to contractors are justified, and work has adequately been competed. Specifically: The school system did not obtain required reviews and approvals from the District of Columbia’s Office of Contracting and Procurement. When the school system began using the Washington Gas contract, it had not yet been granted its own contracting authority. As such, it was required to submit actions totaling $50,000 or more to the Office of Contracting and Procurement up until October 2000. This review considered such things as the cost of the work, whether the contract was legally sound, whether the work was competed, and whether sole-source procurements were adequately justified. However, only 2 of 20 actions that were subject to this review—representing $1.3 million of $14.9 million—were submitted. The school system did not obtain required approvals from its General Counsel. 
When the school system obtained its contracting authority in October 2000, the school system’s guidance required contract actions totaling $25,000 or more to be submitted to the General Counsel for review and approval. However, the contracting officer ignored this requirement and did not submit $28.2 million of orders that met this review threshold. General Counsel officials told us that they were unaware of the extent to which the school system was using the Washington Gas contract. The school system bypassed the City Council approval process. District of Columbia law requires City Council approval of any proposed District government contract (including orders under an existing contract) having a value of more than $1 million. The proposed contract is to be accompanied by a summary that includes a description of the selection process and a certification that the proposed contract is legally sufficient and has been reviewed by the District of Columbia Office of the Corporation Counsel. The school system bypassed this process by grouping about $43 million of renovation work into orders of $950,000 each—just under the $1 million threshold. Sometimes, the school system issued as many as three such orders in a single day. Table 1 details the value and dates of the specific orders. Required approvals from the District of Columbia’s Financial Responsibility and Management Assistance Authority were not obtained. The authority, also known as the Control Board, was established in 1995 to repair the District’s failing financial conditions and to improve the effectiveness of its various entities. The Board is responsible for reviewing and approving certain contracts awarded by the District. One criterion triggering review by the Board is contracts awarded on a sole-source basis. Officials on the Board told us that they should have reviewed all of the orders placed under the Washington Gas contract because they consider the contract to be a sole-source procurement. 
They told us that they had reviewed only one of the orders, for emergency boiler repairs in 1997. After that time, the school system did not forward any subsequent orders under the contract to the Board for review. An additional oversight mechanism within the school system is the Office of Finance, which is concerned with the District’s financial health and approves funding for contract orders as well as payments to contractors. The school system’s Office of Finance questioned the use of the Washington Gas contract in July and August 2000 because of the large number of orders being made to Washington Gas. However, it approved orders after receiving assurances from the contracting officer that the orders were justified. We believe these assurances were insufficient because they did not show that the work was within the scope of the contract and they were not supported by justification for using a sole-source contract or by pricing analyses. As figure 2 illustrates, the school system substantially increased the value of its orders once the Office of Finance, relying on the contracting officer’s assurances, continued to approve them. By not following oversight requirements, the school system put the renovation work at considerable risk of improper billing and high prices. In fact, we found that the school system was overcharged by about $1.9 million because of duplicate billings and billings for work not completed. We found 11 cases where Washington Gas had billed the school system twice for the same work. These duplicate billings totaled $243,174. For example, Washington Gas billed the school system twice for $18,250 for painting performed by a subcontractor at M.M. Washington Senior High School and for $62,000 for lighting work at Aiton Elementary School. In other cases, Washington Gas billed the school system for the full cost of the work before subcontractors had completed the work. These improper billings totaled about $1.7 million. 
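The order-splitting pattern described above (repeated orders priced just under the $1 million City Council threshold, sometimes three in a single day) is straightforward to surface from order records. The following is a minimal sketch using hypothetical order data; the function name and the 10 percent near-threshold band are illustrative choices, not part of the report:

```python
from collections import defaultdict
from datetime import date

# Hypothetical order records. The report cites about $43 million of work
# grouped into orders of $950,000 each, just under the $1 million
# City Council review threshold, sometimes three orders in one day.
THRESHOLD = 1_000_000   # review trigger
NEAR_BAND = 0.10        # flag orders within 10 percent below the threshold

orders = [
    {"id": "A1", "date": date(2000, 9, 5), "amount": 950_000},
    {"id": "A2", "date": date(2000, 9, 5), "amount": 950_000},
    {"id": "A3", "date": date(2000, 9, 5), "amount": 950_000},
    {"id": "B1", "date": date(2000, 10, 2), "amount": 120_000},
]

def flag_split_orders(orders, threshold=THRESHOLD, band=NEAR_BAND):
    """Return dates on which two or more orders fell just below the threshold."""
    near = defaultdict(list)
    for o in orders:
        if threshold * (1 - band) <= o["amount"] < threshold:
            near[o["date"]].append(o["id"])
    return {d: ids for d, ids in near.items() if len(ids) >= 2}

suspect = flag_split_orders(orders)  # {date(2000, 9, 5): ["A1", "A2", "A3"]}
```

A check of this kind would not by itself prove intent, but it isolates exactly the clusters of same-day, near-threshold orders that the oversight bodies would want to examine.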
School System Inappropriately Selected Subcontractors to Perform Work All of the work performed by subcontractors under the Washington Gas contract could have been awarded on a competitive basis. However, school system officials chose to rely for the most part on a select group of subcontractors to perform the work. Subcontractors were frequently preselected based on their area of expertise. For example, if carpeting was needed, a certain company usually received the work. Another company was usually called to do painting, a third for electrical repairs, and a fourth for general construction. Figure 3 shows the amount of work awarded to these four subcontractors from August 2000 through March 2001. School System Did Not Take Steps to Ensure That Prices Were Fair and Reasonable Reliable cost estimates and pricing analyses are central to determining whether the price of a product or service is fair and reasonable. As such, they are required as part of the District’s contract oversight requirements. Nevertheless, we found that independent cost estimates and pricing analyses were prepared for almost none of the orders under the Washington Gas contract. The school system relied on its facilities staff, not contracting officials, to conduct these reviews, but the staff did not do them. Additionally, the school system did not determine that Washington Gas’ prices and fees for the renovation work were fair and reasonable, as required, or negotiate the fee charged by Washington Gas. The Washington Gas contract does not establish prices or fees for any of the work ordered by the school system. Rather, prices were to be negotiated between the contractor and the school system. In 1999 and 2000, Washington Gas generally charged the school system a 20-percent fee on each project conducted under the contract. The fee included services such as project management, engineering design, inspection services, and administrative services, as well as overhead and profit. 
The fee was not negotiated. In fact, it applied to all renovation orders regardless of size or complexity of work and regardless of the extent of Washington Gas’ role in individual projects. Washington Gas increased its fee at the beginning of fiscal year 2001 from 20 to 25 percent because the school system was requesting additional work and larger projects. This increase was also not negotiated, and the 25-percent fee was applied as a flat rate to all projects. The school system’s contracting officer was unaware that the fee had been raised until we notified her. The then-Chief Facilities Officer raised concern about the fee with Washington Gas and requested that it be lowered. However, no action was taken. Lastly, for much of the boiler work, the school system paid a fee to a subsidiary of Washington Gas (American Combustion Industries, Inc.), in addition to the 25-percent fee charged by Washington Gas. In some cases, as a result, the school system paid fees of up to 50 percent—with half going to Washington Gas and half to its subsidiary. Because the school system failed to use competitive procedures, neglected to prepare reliable cost estimates and pricing analyses, and failed to negotiate fees, it had no way of knowing whether prices were fair and reasonable. In fact, the school system has paid Washington Gas a total of $6 million in fees for very limited program management services. For the most part, only 4 employees at Washington Gas worked on school renovation-related efforts. One employee served as a liaison with the school system; another prepared the listings of work to be completed and informed the subcontractor to begin work; a third inspected the work; and a fourth ensured that the subcontractors were paid. Further, on many projects, Washington Gas did not provide all of the services that, according to Washington Gas officials, formed the basis for its fee. 
For example, Washington Gas collected $74,448 in fees for a $297,795 parking lot and playground renovation project at Hendley Elementary School. For its 25-percent fee, Washington Gas just prepared the listing of renovation work to be completed, told the subcontractor to begin work, inspected the work, and paid the subcontractor. School System Did Not Adequately Administer the Contract Facilities staff took on duties normally belonging to the prime contractor and contracting officer without any authority to do so. Specifically, school system facility staff who did not have contracting authority were intimately involved in selecting subcontractors, approving proposed prices, making changes to the work, assuring the quality of the work performed, and approving invoices for payment. For example: The prime contractor normally selects subcontractors, approves their prices, and defines their scopes of work. In this case, however, the school system facilities staff—not Washington Gas, the prime contractor— solicited and approved subcontractor proposals. At times, these proposals were vague and broad in scope, making it difficult to determine how prices were established and approved. For example, one proposal, for drain cleaning at “various D.C. public schools,” offers to snake and clean various drains for a not-to-exceed price of $100,000. The schools were not listed, nor was the extent of the work detailed. A $25,421 proposal to install carpet in five rooms at Shaw Junior High School did not indicate the area of carpet; therefore, a realistic evaluation of the price could not be made. District contracting officers normally ensure that procurements of supplies, services, or construction conform to the quality and quantity requirements of the contract. In this case, however, the facilities staff performed quality assurance without any official delegation of responsibility to do so. 
Many times, the facilities official who had selected the subcontractor and approved the subcontractor’s work and price also inspected the work and authorized payment. Furthermore, many inspections were not well-documented, and for some projects, facilities staff approved completed work by simply signing the subcontractor’s invoices without indicating that the work had been inspected and deemed acceptable. Facilities staff directed Washington Gas to adjust contracted work for 276 projects, totaling $7.4 million, between August 2000 and March 2001. These projects comprised about 29 percent of the $25.4 million paid to Washington Gas during that time period. (See fig. 4). Nevertheless, the contracting officer did not officially modify contracted work, as required, to reflect these changes. In fact, the contracting officer, who was responsible for making contract modifications, was unaware of the changes. Contracting officials sometimes provide program staff with limited authority to perform such duties as monitoring technical performance and reporting any potential or actual problems to the contracting officer. In these cases, the contracting officer formally designates this authority, and in doing so, provides the program staff with detailed guidance on what these duties entail and do not entail. In particular, the guidance stresses that program staff are not empowered to authorize, agree to, or sign any modifications to the work. However, for work done under the Washington Gas contract, the school system’s contracting officer did not delegate this authority to the facilities staff or provide them with guidance about their roles and responsibilities. 
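The fee arithmetic described earlier can be checked directly against billing records: the Hendley Elementary figures cited in the report work out to the flat 25-percent rate, and a prime fee stacked with a subsidiary fee pushes the effective rate toward 50 percent. A minimal sketch follows; the Hendley numbers come from the report, while the second project and its dollar amounts are hypothetical:

```python
# Effective fee rates computed from billing figures. The Hendley Elementary
# numbers ($74,448 fee on a $297,795 project) are cited in the report; the
# second project, with a stacked subsidiary fee, is hypothetical.
projects = [
    {"school": "Hendley Elementary", "base": 297_795,
     "prime_fee": 74_448, "sub_fee": 0},
    {"school": "Hypothetical boiler job", "base": 100_000,
     "prime_fee": 25_000, "sub_fee": 25_000},
]

def effective_fee_rate(p):
    """Total fees charged as a fraction of the project's base cost."""
    return (p["prime_fee"] + p["sub_fee"]) / p["base"]

rates = {p["school"]: round(effective_fee_rate(p), 3) for p in projects}
# rates: {"Hendley Elementary": 0.25, "Hypothetical boiler job": 0.5}
```

Computing the effective rate per project, rather than accepting a quoted percentage, is what exposes both the flat-rate pattern and the stacked prime-plus-subsidiary fees.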
Actions Being Taken by the School System to Address Problems School system officials have recognized that the renovation work was not properly managed and informed us that they are aggressively pursuing a number of corrective actions related to the Washington Gas contract: The school system has discontinued use of the Washington Gas contract for all projects that fall outside the legally defined scope of the contract. Further, all outstanding invoices for completed work will be paid contingent on Washington Gas subcontractors submitting proper documentation. For any remaining safety-related work directed to Washington Gas, the request for services will include a statement of work that Washington Gas will use to develop proposals and compete the work. The school system will explore appropriate actions, including possible legal recourse against Washington Gas for overpayments under the contract. The practice of bundling projects in groups of $950,000 will be discontinued. School system officials also advised us that, in the longer term, they are taking a number of steps to strengthen contract review and administration in general: All contract orders are being assigned to competitively selected contractors. The school system has hired a team of contracting staff with expertise in construction. The team is responsible for assuring that well-defined proposals and independent cost estimates are prepared and for negotiating contractors’ fees. Limited contract administrative functions will be provided to the facilities staff. The school system’s General Counsel will review all construction and renovation contracts of $25,000 and above. The Office of General Counsel will determine whether available contracting alternatives have been considered, cost estimates are reasonable, work has been adequately completed, and contracts are legally sound. 
All contract orders will be evaluated to ensure that they fall within the scope of the contract and that appropriate types of funding are used for each action. Participants in this review now include the Office of Contracts and Acquisitions, the Office of the General Counsel, the Office of Facilities Management, and the Office of the Chief Financial Officer. The Office of Facilities Management now prepares government estimates and scopes of work for all construction work exceeding $5,000. The school system has hired construction inspectors to ensure compliance with contract documents and quality requirements. Final inspection is required prior to making the final payment to the contractor. The school system is also establishing an Office of Compliance to monitor the overall facilities contracting process. The Finance Office is reviewing the revised contract review procedures to see if improvements can be made. The Board of Education will now review all contracts over $100,000. Conclusion The school system was facing a crisis situation when it undertook its school renovation effort in 1997, and it was under considerable pressure to quickly get the schools upgraded and in safe condition. Obtaining services under GSA’s contract with Washington Gas may have offered a quick and convenient way of fixing the school system’s immediate problem. However, this approach was inappropriate because it went well beyond the scope of the contract. It also undercut competition and was used without determining that prices were fair and reasonable. Therefore, we have serious doubts that the school system has received the best value for its money. Moreover, the contract was administered without regard to management and oversight controls that are in place to ensure that proper procurement practices were followed and that the work performed was of good quality. 
Unless actions are taken to improve controls over the actions remaining under the Washington Gas contract and procurement planning for future school renovation contracts, the hundreds of millions of dollars being spent to fix the schools—including the $10.2 million in outstanding orders under the Washington Gas contract—will remain at risk of the same problems. The measures the school system is planning should help mitigate this risk. However, the school system will need to make a concerted effort to ensure that they are quickly and effectively implemented and sustained throughout future renovation contracts. Recommendations We recommend that the Superintendent of the District of Columbia Public Schools ensure that the school system’s planned corrective actions are implemented in a timely manner. In addition, we recommend that the Superintendent ensure that the contracting officer complies with District of Columbia procurement procedures in contracting for the remaining school renovation work, including procurement planning, use of competitive acquisition procedures, and ensuring that contractor prices and fees are fair and reasonable. We also recommend that, specifically regarding the Washington Gas contract, the Superintendent direct the contracting officer to terminate, if cost-beneficial, outstanding orders that are beyond the scope of the contract and properly procure the replacement work through available government sources or competitive procedures. Lastly, we recommend that the school system’s Chief Financial Officer take a more active role in the contract review process. Because this official is concerned with ensuring the District’s financial health, the official should notify appropriate authorities about irregular contracting activities that may be taking place. Agency Comments and Our Evaluation The District of Columbia Public Schools, Washington Gas, and GSA provided written comments on a draft of this report. 
The comments, along with our responses, appear in appendixes II, III, and IV, respectively. The school system did not take exception to our findings or recommendations. It noted that many organizational and procedural changes have occurred over the past year to correct the problems identified in the report. We have incorporated references to these actions in the report where appropriate. In our future work on school renovation and modernization efforts, we will evaluate the actions taken by the school system. The school system stated that it has discontinued use of Washington Gas for projects that fall outside the legally defined scope of the contract. However, we remain concerned that the legal definition of the scope of work applied by the school system is not consistent with our interpretation of the Washington Gas contract. As discussed in our comments on GSA’s response to our report (appendix IV), as well as our analysis presented in appendix V, in our opinion none of the work Washington Gas has performed for the school system is within the scope of the contract. In an e-mail follow-up to its written response, the school system stated that it has not canceled any of the outstanding orders for which purchase orders had been issued because the majority of the work was deemed necessary for life/health/safety requirements or was critical to the opening of schools in the fall of 2001. In addition, the school system stated that there was insufficient time to stop the work in progress without incurring substantial penalties and potential liability. Further GAO comments on the school system’s response appear in appendix II. Washington Gas took exception to our findings dealing with the scope of services, its role as a prime contractor, the fee charged under the contract, and billing issues. For example, it stated that we were incorrect in our assertions that fees were not negotiated and that overcharges occurred. 
There is no evidence in the school system’s or Washington Gas’ files, or from discussions with any of the officials involved with this contract, that the fees were ever negotiated. On the issue of overcharges, in an August 13, 2001, letter to the school system’s superintendent, Washington Gas stated that it had verified that duplicate billings did occur. It intends to credit the school system for the overbillings in its June 2001 invoice; however, as of the time of this report, neither the invoice nor the credit had been submitted to the school system. In a document provided to us, Washington Gas indicated that duplicate billings totaled $482,915. Given these facts, we do not understand how Washington Gas can assert that overcharges did not occur. Further GAO comments in response to Washington Gas’ letter appear in appendix III. GSA did not take issue with our findings that the school system improperly used the Washington Gas contract to accomplish general construction, or with our findings regarding internal contracting practices in the school system. However, both GSA and Washington Gas disagreed with our position that none of the renovation work performed by Washington Gas was within the scope of the areawide utility contract. GSA pointed out that one of the exceptions to the Competition in Contracting Act provides that other than competitive procedures may be used when a statute expressly authorizes or requires that the procurement be made from a specified source. GSA stated that the Energy Policy Act provides such authorization. We stand by our position that work performed on a sole-source basis under areawide utility contracts must be regulated by a public regulatory authority, and that this was not the case for any of the work ordered by the school system from Washington Gas. We provide additional details on this issue in our response to GSA’s comments in appendix IV. 
We will soon issue a report to GSA addressing its guidance on areawide utility contracts and its oversight of the school system’s use of the Washington Gas contract. We are sending copies of this report to other interested congressional committees; the Administrator, General Services Administration; the Mayor of the District of Columbia; the Chair of the City Council; the District of Columbia Board of Education; the Chief Financial Officer, District of Columbia Public Schools; the Superintendent of District of Columbia Public Schools; and the Vice President and General Counsel, Washington Gas Light Company. If you have any questions regarding this report, please contact me on (202) 512-4181. An additional contact and staff acknowledgments are listed in appendix VI. Appendix I: Scope and Methodology To identify the review process for using the Washington Gas contract, we reviewed the District of Columbia Code; title 27 of the District of Columbia’s Municipal Regulations; and policies and procedures issued by the Office of Contracting and Procurement and DCPS. We held discussions with officials in the Office of the Chief Counsel, District of Columbia; the Control Board; the Office of Contracting and Procurement; and the school system’s General Counsel, Office of Finance, and Office of Contracts and Acquisitions. To determine whether the work was within the scope of the GSA areawide contract, we reviewed the contract and applicable regulations. We analyzed billing records from Washington Gas to DCPS for about 600 projects from August 2000 through March 2001 and checked these records against subcontractor proposals and invoices to determine the types of services obtained. 
We also interviewed the contracting officer in GSA’s Public Utilities/Energy Center of Expertise and the GSA Assistant General Counsel; officials in the District of Columbia Public Service Commission; the Office of Contracting and Procurement; the school system’s General Counsel; the former Chief Facilities Officer; and the current Deputy Directors in the Facilities Division. To assess the internal controls in place to administer the contract within the Facilities Division, we reviewed records maintained in the Division and the school system’s contracting and finance offices. We held discussions with officials in the District of Columbia Office of Contracting and Procurement; the former Chief Facilities Officer and the current Deputy Directors; the project managers in the Facilities Division; the school system’s contracting and finance offices; and the Washington Gas Light Company. To evaluate the fee charged by Washington Gas for services provided under the contract, we analyzed Washington Gas’s billing records to the school system as well as subcontractor proposals and invoices. We interviewed and obtained information from officials at Washington Gas and the school system’s contracting, facilities, and finance offices. To identify duplicate billings, we identified projects for which Washington Gas had billed the school system twice where the dollar amount, school, and service were identical. We considered double billings to have occurred when we could find only one proposal for the 2 billings. To identify cases where Washington Gas had billed the school system for work not completed, we reviewed the company’s official files to determine whether progress payments had been made or whether the subcontractor submitted invoices. In addition, we checked to see whether the subcontractor had been paid by Washington Gas. Washington Gas’ normal policy was to pay the subcontractors before billing the school system. 
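The duplicate-billing test described above amounts to a grouping pass over the billing records: group billings that share a school, service, and dollar amount, then flag any group with more billings than supporting proposals. The sketch below is an illustrative reconstruction of that logic, not GAO's actual procedure, and the record fields (school, service, amount) are hypothetical.

```python
from collections import defaultdict

def find_duplicate_billings(billings, proposals):
    """Flag billings that share school, service, and dollar amount but
    are supported by fewer proposals than billings (hypothetical layout)."""
    # Group billing records by the matching key used in the review.
    groups = defaultdict(list)
    for b in billings:
        groups[(b["school"], b["service"], b["amount"])].append(b)

    # Count supporting proposals under the same key.
    proposal_counts = defaultdict(int)
    for p in proposals:
        proposal_counts[(p["school"], p["service"], p["amount"])] += 1

    duplicates = []
    for key, items in groups.items():
        # Two billings but only one proposal -> treated as a double billing.
        if len(items) >= 2 and proposal_counts[key] < len(items):
            duplicates.extend(items[1:])  # everything beyond the first is suspect
    return duplicates
```

A review of this kind is conservative by design: a repeated billing is flagged only when no second proposal can be found to justify it, which mirrors the "only one proposal for the two billings" criterion in the text.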
To determine whether contracted work was adjusted, we compared the listing of projects compiled by Washington Gas under each order under the contract with the listing of projects accompanying Washington Gas’s bill to the school system for each specific order. When projects were billed but not listed on the original order, we considered them to be contract adjustments. We performed our work from March through August 2001 in accordance with generally accepted government auditing standards. Appendix II: Comments from the District of Columbia Public Schools The following are GAO’s comments on the District of Columbia Public Schools’ letter dated September 21, 2001. GAO Comments 1. In an e-mail follow-up to its letter, the school system clarified that the contracting officer and the Office of Facilities Management are currently conducting an analysis of unauthorized work orders that were sent to Washington Gas to determine which, if any, have been executed and which have not. In addition, the school system responded to our third recommendation by stating that it did not cancel contracts for which purchase orders had been issued because the majority of the work was deemed necessary to fulfill life/health/safety requirements or was critical to the opening of schools in the fall. The school system stated that there was insufficient time to stop the work in progress without running the risk that essential work would not be completed in time and that the school system would incur substantial penalties and potential liability. 2. The statement is based on several discussions with the then-Chief Facilities Officer. Appendix III: Comments from the Washington Gas Light Company The following are GAO’s comments on the Washington Gas Light Company’s letter dated September 17, 2001. 
GAO Comments Washington Gas claims that some of the work performed for the school system under the areawide contract was within the contract’s scope and asserts that several of our statements, such as those concerning fees and overcharges, are incorrect. We disagree with Washington Gas on each point, as discussed below. 1. We disagree with Washington Gas on the issue of the scope of the areawide utility contract. We stand by our position that none of the services provided by Washington Gas to the school system fell within the contract’s scope. These were not regulated services, but rather work that could have been performed by heating, plumbing, or general contractors. We address this issue further in our response to GSA’s comments (app. IV). 2. Contrary to Washington Gas’ response, our report does not state that the fee was excessive. However, because the school system did not use competitive procedures, prepare cost estimates or pricing analyses, or negotiate the fee, it had no way of knowing whether Washington Gas’ prices were fair and reasonable. We rightly point out that Washington Gas charged the school system a flat fee for each project, despite the fact that the scope of work varied widely by project. Further, we are correct in stating that fees were not negotiated, that fees were charged by Washington Gas and its affiliate, and that there were overcharges (see comments 4, 8, and 9 below). 3. We do not speculate that the Army Corps of Engineers would have provided better quality work at lower prices. Rather, we point out that existing Army Corps contracting mechanisms provided an option to using the sole-source Washington Gas contract. 4. The fees were not, in fact, negotiated. The school system inappropriately paid the fee without undergoing the normal negotiations with the contractor. Merely accepting the fee is not equivalent to negotiating it. 5. 
Washington Gas incorrectly states that its fees were fully disclosed as part of each proposal and invoice submitted to the school system. Washington Gas indicated on its proposals and invoices the base cost and, in a separate column, the final price, which incorporated Washington Gas’ fee. However, the fee itself did not appear explicitly on the documents. The school system’s contracting officer was unaware that the fee had increased to 25 percent. Only by calculating the difference between the base cost and the final price would one realize that the fee had increased. Further, the Control Board approved only one contract action, for emergency boiler work in 1997. As we note in the report, the school system failed to submit all subsequent orders to the Control Board for review, contrary to the Board’s requirements. Any implication that the Control Board was aware of—or approved—Washington Gas’ fee for any other than the initial contract action is misleading. 6. Our report outlines the elements included in Washington Gas’ mark-up, based on documents provided by Washington Gas. Further, we do not refer to the mark-up as a “profit,” but rather as a “fee.” 7. We recognize that the school system asked Washington Gas to provide a limited amount of services on certain projects. However, we point out that Washington Gas charged a flat 25-percent fee—and that the school system paid this fee—for every project, even though the needs of each project varied widely. 8. Contrary to Washington Gas’ statement, our report does not contend that the company improperly marked up sales by its affiliate, American Combustion Industries, Inc. We correctly point out that both Washington Gas and its affiliate charged a fee to the school system and that, in addition to paying Washington Gas’ 25-percent fee, the school system paid an additional fee to American Combustion Industries, Inc. 9. Washington Gas refers to “alleged” overbillings. 
In fact, overbillings did occur and Washington Gas has explicitly acknowledged them. After we identified $243,174 in duplicate billings from Washington Gas to the school system from August 2000 through March 2001, Washington Gas hired an independent audit firm to confirm our findings. The firm discovered that Washington Gas had double-billed the school system in the amount of $482,915. In an August 13, 2001, letter to the school system’s superintendent, Washington Gas stated that it would credit the school system for the duplicate billings. Given this situation, we fail to understand how Washington Gas can imply that overcharges did not occur. 10. During our audit, we provided Washington Gas a list of projects, totaling $1.7 million, where it appeared that Washington Gas had billed the school system before work had been completed. As of the time of this report, the accounting firm hired by Washington Gas had not completed its review of these projects to determine whether these improper billings had occurred and, if so, the extent of the errors. Appendix IV: Comments from the General Services Administration The following are GAO’s comments on the General Services Administration’s letter dated September 19, 2001. GAO Comments 1. We agree with GSA that areawide contracts may be appropriate vehicles for carrying out the federal energy management goals of the Energy Policy Act. However, we continue to believe that any exception to the government’s competitive contracting requirements for agency participation in utility incentive programs is limited to regulated services or to services for which the utility is the only available source. None of the arguments or information GSA has provided in its comments is inconsistent with our position or convinces us otherwise. 
As we will discuss more fully in our forthcoming letter to GSA, our view is further supported not only by the Energy Policy Act itself, but by the definition of “utility” in an Executive Order requiring agencies to reduce energy usage and cost through use of alternative financing and contracting mechanisms. Indeed, the language “generally available to customers” to which GSA refers appears to indicate that the utility incentive programs in which federal agencies are authorized to participate are subject to applicable public utility regulatory authority. Accordingly, we stand by our position that the government’s competitive contracting requirements limit the use of GSA areawide contracts to utility services, including energy efficiency services, subject to public utility regulatory authority or for which the utility is the only available source. 2. GSA misconstrues our position. The language to which GSA refers was not meant to signify that only the regulated utility could perform regulated services. We are not aware of any prohibition on a utility company’s subsidiaries or subcontractors performing such services if approved or authorized under regulatory authority. Instead, our point was that, in the case of the school system, the subsidiaries and subcontractors were providing unregulated services not authorized under the areawide contract rather than services that could be authorized under the contract if subject to public utility regulatory authority. Further, as we pointed out, Washington Gas was acting as a general contractor by performing a project management role over its unregulated subsidiaries and subcontractors, a role that was not authorized under the GSA contract. 3. GSA acknowledges that the areawide contract with Washington Gas is for regulated utility services, as we maintain. 
Because of deregulation in the utility industry, GSA states that it is now reviewing its areawide contracts for possible modification in light of “current industry practice.” We also recognize that deregulation may limit the services that may be ordered under an areawide contract, because certain services may no longer be subject to regulation and may instead be available from more than one source in the marketplace. Since areawide contracts are entered into without competition due to the regulated nature of the utility industry, we reiterate our position that the contracts remain limited to regulated services or services for which the utility is the only available source. As we have indicated, we believe the government’s competitive procurement requirements are violated if an areawide contract is used for utility services, including energy efficiency services, that are not subject to public utility regulatory authority or for which the utility is not the only available source. To the extent utility services are available from more than one source, the acquisition of such services should be through competitive procedures, as already required under the Federal Acquisition Regulation. In modifying any areawide contracts to reflect “current industry practice,” GSA must ensure that it complies fully with the government’s competitive procurement requirements. Appendix V: None of the School System’s Orders Fell Within the Scope of the GSA Contract In determining whether an order is beyond the scope of a contract, GAO looks to whether there is a material difference between the task order and the contract. Here, there is a material difference between GSA’s contract with the Washington Gas Light Company for gas utility and energy management services and the orders placed by the school system under the contract. 
The GSA Contract On April 17, 1996, GSA executed an areawide utility services contract with the Washington Gas Light Company for federal agencies to use in obtaining natural gas, gas transportation, and energy management services. The contract is the master contract for acquisitions of these utility services by all federal agencies from Washington Gas for a period of 10 years (through April 16, 2006). Washington Gas has an exclusive franchise from government regulatory bodies (including the District of Columbia Public Service Commission) to provide natural gas service to customers in Washington, D.C., and adjoining areas of Maryland and Virginia. Due to the regulated nature of this public utility company, GSA entered into the contract with Washington Gas without competition. The GSA contract with Washington Gas authorizes agencies to order gas, gas transportation, and/or energy management services directly from the contractor. The contract defines “Energy Management Services” as: “any one or more of the services provided or to be provided by the Contractor pursuant to an Authorization in the form of EXHIBIT “C”, which services are within the knowledge and/or supervision of the Commission. Such services include any specific service intended to provide energy savings, efficiency improvements and/or demand reductions in Federal facilities, whether or not it involved financial incentives and/or rebates, specifically including (but not limited to): energy audits and energy conservation measures such as lighting control and boiler control improvements, cooling tower retrofits, solar air preheating systems, demand side management initiatives, fuel cell installation, and water conservation device installation.” To obtain energy management services from the contractor, the ordering agency files an Exhibit “C” “Authorization for Energy Management Service” form with the contractor. 
The form, which is included in the contract, states that energy management service is required to be provided consistent with the “Contractor’s applicable tariffs, rates, rules, regulations, riders, practices, and/or terms and conditions of service, as modified, amended or supplemented by the Contractor and approved, to the extent required, by the Commission, and in the event that specific approval is not required by the Commission, service provided is required to be within the knowledge and/or supervision of the Commission.” Exhibit “C” listed the following energy management services that could be ordered if approved by or within the knowledge and/or supervision of the Public Service Commission: “Preliminary Energy Audit”; “Energy Conservation Project (ECP) Installation”; “ECP Feasibility Study”; “ECP Engineering & Design Study”; “Demand-Side Management (DSM) Project”; “Special Facilities”; and “Other”. If the “Other” box was checked, the ordering agency was to describe the service(s) purchased in the “Remarks” section of the form. School System’s Use of the GSA Contract The school system issued more than $43 million worth of orders to Washington Gas under the GSA areawide contract, all on Exhibit “C,” ostensibly as “energy management services” and all under the “Other” category of services. The school system would then list the nature of services ordered in the “Remarks” section of each Exhibit “C.” The school system first started issuing orders at the end of calendar year 1997 for emergency boiler repairs, rental of temporary boilers, purchase and installation of replacement boilers, and repair, replacement, and maintenance of heating, ventilation, and air conditioning (HVAC) equipment. 
Beginning in 1999, the nature of the work or services the school system ordered from Washington Gas began shifting to general maintenance, repair, construction, and to the procurement of other work related to building operations such as carpet installation and flooring repairs, painting and ceiling work, electrical and lighting upgrades, the purchase and installation of window air conditioning units, elevator renovations and upgrades, the purchase and installation of new public address and clock systems, generator replacement, replacement of security lights, installation of bathroom partitions, and plumbing work. Contract Did Not Contemplate the Work Ordered The GSA areawide contract with Washington Gas makes no reference whatsoever to any of these types of work or services or even to the general issue of boiler repair, rental, and replacement, or HVAC maintenance or repairs. This is not surprising, because the contract is specifically for the provision of regulated utility services, not the type of work that can otherwise be performed by a general contractor, maintenance firm, or licensed plumbing or heating contractor. We do not view the GSA contract as contemplating the type of boiler and HVAC repair and replacement, minor construction and building maintenance, and other work and services that could be provided competitively by many available sources. Indeed, Federal Acquisition Regulation (FAR) part 41, which establishes procedures for federal agencies to acquire utility services, states that its provisions, including those related to GSA areawide utility contracts, do not apply to construction and maintenance of government- owned facilities. To the extent that school system orders involved construction and maintenance, these orders are clearly contrary to the governing regulation. 
The GSA contract also requires that an “Energy Management Service” provided by the contractor must be within the knowledge and/or supervision of the Public Service Commission having jurisdiction over the contractor’s service area. Specifically, the contract defines the term “Service” as: “any commodities, financial incentives, goods, and/or services generally available from the Contractor pursuant to its tariffs, rates, rules, regulations, riders, practices, or terms and conditions of service, as may be modified, amended, or supplemented by the Contractor and approved from time to time by the Commission, and the rules and regulations adopted by the Commission.” As we read the GSA contract, if the contractor has not notified the Public Service Commission of its intention to provide the service (which then may be subject to Commission regulation/approval), the service cannot properly be provided under the GSA contract. Based on our reading of the contract and our discussions with representatives of the District of Columbia Public Service Commission, we believe that any energy management services provided by Washington Gas to the school system under the GSA areawide contract would have to have been “within the knowledge and/or supervision” of the Public Service Commission. However, there is no indication that any of the work or specific services provided by Washington Gas to the school system was “within the knowledge and/or supervision” of the Public Service Commission, through a tariff filing or otherwise. Rather, the only services relevant here that could have been construed as related to “Energy Management Services” that the Public Service Commission had on file from Washington Gas were “non-residential full scale conservation programs” that included a municipal boiler/furnace installation assistance program for space and water heating. 
These programs were considered by the Commission to be a “least cost planning program,” the costs of which were included in the rate base passed on to ratepayers. Since these programs affected rates paid by Washington Gas customers, the programs were described in a tariff filed by Washington Gas with the Public Service Commission. The Commission provided us a copy of the relevant portion of the tariff. Under the tariff, effective for service rendered after March 1994, the municipal boiler/furnace installation assistance program provided for cash incentives to the District Government for replacing boilers, furnaces, and hot water heaters with new gas-fired high efficiency equipment. The tariff did not indicate that Washington Gas itself was to provide the equipment replacement but merely that upon verification of the installation of the equipment, Washington Gas was to provide a cash incentive to the District for each eligible replacement of equipment. The tariff also authorized Washington Gas to conduct energy surveys of the buildings of customers participating in the programs at no cost to the customer. Because these services were provided in a tariff effective when the GSA areawide contract with Washington Gas was executed in 1996, these limited services could be the only ”Energy Management Services” contemplated by the contract as being within the knowledge or supervision of the Public Service Commission. Beginning in 1999, even these limited “Energy Management Services” were no longer authorized by the Public Service Commission in Washington Gas’s capacity as a regulated public utility. In an order dated December 21, 1998, the Commission approved the elimination of least-cost planning costs from the utility rate base, thus terminating filing requirements related to least-cost planning, including the municipal boiler/furnace installation assistance program. 
The Commission does not have on file from Washington Gas any other energy management service programs applicable to the GSA contract. Accordingly, since December 1998, there have been no authorized energy management services “within the knowledge and/or supervision” of the Public Service Commission that Washington Gas can provide under the GSA contract. Thus, all the orders issued by the school system to Washington Gas for “energy management services” after that date were for work and services not “within the knowledge and/or supervision” of the Public Service Commission, and thus outside the scope of the GSA contract. The orders issued by the school system to Washington Gas prior to December 21, 1998, also are outside the scope of the GSA contract. Based on the information provided to us, these orders were primarily for boiler (heating) repairs, temporary boiler rentals, new boilers, and chiller (air conditioning) repairs. None of these goods and services are authorized under the GSA contract because none of Washington Gas’s energy management services on file with the Public Service Commission included heating and air conditioning repair services, renting temporary boilers, and selling and installing new boilers. Because these services were not within the knowledge or supervision of the Public Service Commission, they could not be obtained under the GSA contract, and the school system’s orders for these services were beyond the scope of the GSA contract. Orders Not Performed by Regulated Utility Another indication that the work ordered by the school system fell outside the scope of the GSA contract was the fact that in no instance was any of the work ordered actually performed in the schools by Washington Gas Light Co. itself. For every order, Washington Gas Light Co. 
either had the work performed by a subsidiary (such as American Combustion Industries, Inc.) or by a subcontractor, usually at the behest of school system personnel (who specifically requested Washington Gas to subcontract with certain firms). These subcontractors included general contractors and heating and plumbing contractors. The significance of the involvement of these subsidiaries and subcontractors is that the school system was not ordering from or having the work performed by the regulated utility provider (Washington Gas Light Co.) but by unregulated subsidiaries or subcontractors. Because these subsidiaries or subcontractors were providing unregulated services rather than the energy management services authorized to be performed by the regulated utility (that is, by Washington Gas Light Co. itself), the services performed by the subsidiaries and subcontractors fell outside the scope of the GSA contract. Further, Washington Gas representatives and school system officials with whom we spoke characterized the services provided by Washington Gas employees as “project management.” This project management role was more akin to the responsibilities of a general contractor over its subcontractors than to the provision of actual utility services. Indeed, Washington Gas representatives told us that school system officials had requested the company to play the role of project manager for the renovations to the schools. Such project management services primarily involved Washington Gas personnel aggregating work requested by the schools into orders for the school system’s contracting officer to issue and providing administrative oversight of the subsidiaries and subcontractors actually performing the ordered work at the schools. 
Nowhere in the GSA contract is “project management” listed or otherwise contemplated as an “energy management service.” Areawide Contracts Limited to Regulated Utility Services We believe our conclusion that only a very limited scope of services can be provided under the GSA contract with Washington Gas is warranted because of the unique nature of GSA areawide utility contracts in the context of the federal government’s competitive procurement requirements. Under the Federal Acquisition Regulation, GSA’s areawide contracts are limited to regulated utility services, or at least to utility services for which there is no other source within the service area. This is because GSA enters into areawide utility contracts without competition; competition for the utility services is not available within the geographic area (the “franchise territory”) covered by an areawide contract because provision of the services is based upon a franchise, a certificate of public convenience and necessity, or other legal means. Agencies needing utility services within an area covered by an areawide contract are required to use the contract to acquire those services unless service is available from more than one supplier. If service is available from more than one supplier, agencies are required to procure the service using competitive acquisition procedures. In our opinion, allowing agencies to order services under an areawide contract for which the utility provider is not the only available source would run counter to the requirements of the Competition in Contracting Act of 1984, which requires federal agencies, including GSA, to conduct procurements using full and open competition. 
Competition requirements are evaded if areawide contractors, which are awarded their contracts by virtue of their status as regulated utilities, are allowed to provide unregulated services to ordering agencies that are available from other sources using competitive procedures. Further, utility providers could take advantage of their status by marketing these unregulated services (typically provided by subsidiaries) to agencies required or authorized to use an areawide contract. We view this as contrary to the basic competitive contracting rules applicable to the federal and District of Columbia governments. Conclusion We conclude that all the orders placed by the school system under the GSA areawide utility contract with Washington Gas improperly exceeded the contract’s scope. None of the work ordered by the school system appears to be of the type that only Washington Gas Light Co. itself could have provided as a regulated utility. Rather, the work ordered is of the type that could be performed by competent general contractors, maintenance firms, or licensed plumbing, electrical, or heating contractors. Where a task order is beyond the scope of the underlying contract, the work covered by the order would otherwise be subject to the statutory requirements for competition. Here, because all of the work ordered was beyond the scope of GSA’s contract with Washington Gas, more than $43 million worth of work was acquired by the school system without competition. Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the name above, Cristina T. Chaplain; Robert Davis, Jr.; Charles D. Groves; John D. Heere; William Petrick, Jr.; Christopher Pothoven; and Adam Vodraska made key contributions to this report.
By the mid-1990s, most of the District of Columbia's public schools were more than 50 years old and in poor condition. Deferred maintenance had led to a host of safety problems, from fire code violations to leaky roofs. GAO found that the D.C. school system mismanaged a contract with the Washington Gas Light Company. GAO found that the use of the gas contract to obtain school renovation services, including painting, carpeting, plumbing, and electrical work, was outside the scope of the contract. In addition, in carrying out the renovation work, the D.C. school system failed to adhere to controls and procedures intended to (1) ensure that the District obtained the best price and services and (2) maintain a proper relationship between the contractors and the D.C. government. These problems raise serious doubts about whether the District obtained fair and reasonable prices on the renovations and whether the school system should continue the gas utility contract.
Background Several studies on maritime security conducted by federal, academic, nonprofit, and business organizations have concluded that the movement of oceangoing cargo in containers is vulnerable to some form of terrorist action, largely because of the movement of shipments throughout the supply chain. Relatively few importers own and operate all key aspects of the cargo container transportation process, which includes overseas manufacturing and warehouse facilities, carrier ships to transport goods, and the transportation operation to receive the goods upon arrival. Most importers must rely on second-hand parties to move cargo in containers and prepare various transportation documents. Second-hand parties within the cargo container supply chain may include exporters, freight forwarders, customs brokers, inland transportation providers, port operators, and ocean carriers. Every time responsibility for cargo in containers changes hands along the supply chain, there is the potential for a security breach; specifically, this change in responsibility creates opportunities for contraband to be placed in containers and opportunities for fraudulent documents to be prepared. According to the U.S. Department of Transportation’s Volpe National Transportation Systems Center, importers who own and operate all aspects of the supply chain suffer the fewest security breaches because of their increased level of control. While CBP has noted that the likelihood of terrorists smuggling WMD into the United States in cargo containers is low, the nation’s vulnerability to this activity and the consequence of such a disaster are high. With about 90 percent of the world’s maritime cargo moving by containers, terrorist action related to cargo containers could paralyze the maritime trading system and quickly disrupt U.S. and global commerce. 
In a strategic simulation of a terrorist attack sponsored by the consulting firm Booz Allen Hamilton in 2002, representatives from government and industry organizations participated in a scenario involving terrorist activities at U.S. seaports. The scenario simulated the discovery and subsequent detonation of “dirty bombs”—explosive devices wrapped in radioactive material and designed to disperse radiological contamination—hidden in cargo containers at various locations around the country. These “events” led simulation participants to shut down every seaport in the United States over a period of 12 days. Booz Allen Hamilton published a report in October 2002 about the results of the simulation, which estimated that the 12-day closure would result in a loss of $58 billion in revenue to the United States’ economy, including spoilage, loss of sales, manufacturing slowdowns, and halts in production. Further, according to the report, it would take 52 days to clear the resulting backlog of vessels and 92 days to stabilize the container backlog, causing a significant disruption in the movement of international trade. CBP’s Targeting and Inspection Approach at Domestic Ports According to CBP, the large volume of imports and the bureau’s limited resources make it impractical to inspect all oceangoing containers without disrupting the flow of commerce. CBP also noted it is unrealistic to expect that all containers warrant such inspection because each container poses a different level of risk based on a number of factors including the exporter, the transportation providers, and the importer. CBP has implemented an approach to container security that attempts to focus resources on particularly risky cargo while allowing other cargo to proceed. CBP’s domestic efforts to target cargo to determine the risk it poses rely on intelligence, historical trends, and data provided by ocean carriers and importers. 
Pursuant to federal law, CBP requires ocean carriers to electronically transmit cargo manifests to CBP’s Automated Manifest System 24 hours before the cargo is loaded on a ship at a foreign port. This information is used by CBP’s Automated Targeting System (ATS). ATS is characterized by CBP as a rule-based expert system that serves as a decision support tool to assess the risk of sea cargo. In addition, CBP requires importers to provide entry-level data that are entered into the Automated Commercial System and also used by ATS. According to CBP officials, ATS uses this information to screen all containers to determine whether they pose a risk of containing WMD. As shown in figure 1, CBP targeters at domestic ports target containers by first accessing the bills of lading and their associated risk scores electronically. The assigned risk score helps the targeters determine the risk characterization of a container and the extent of documentary review or inspection that will be conducted. For example, containers characterized as high-risk are to be inspected. Containers characterized as medium-risk are to be further researched. That is, targeters are to consider intelligence alerts and research assistance provided by the National Targeting Center (NTC) to the ports, and their own experience and intuition, in characterizing the final risk of shipments. Containers characterized as low-risk are generally to be released from the port without further documentary review or inspection. There are, generally, two types of inspections that CBP inspectors may employ when examining cargo containers—nonintrusive inspections and physical examinations. The nonintrusive inspection, at a minimum, involves the use of X-ray or gamma-ray scanning equipment. As shown in figure 2, the X-ray or gamma ray equipment is supposed to scan a container and generate an image of its contents. 
CBP inspectors are to review the image to detect any anomalies, such as if the density of the contents of the container is not consistent with the description of the contents. If an anomaly is apparent in the image of the container, CBP inspectors are to decide whether to conduct a physical examination of the container. According to CBP officials, they have a policy to determine the type of physical examination to be conducted depending on the location of the anomaly. CBP inspectors also are to use radiation detection devices to detect the presence of radioactive or nuclear material. If the detectors indicate the presence of radioactive material, CBP officials are to isolate the source and contact the appropriate agency, such as the Department of Energy, for further guidance. CBP Extended Its Targeting and Inspection Activities to Overseas Seaports Announced in January 2002, CSI was implemented to allow CBP officials to target containers at overseas seaports so that any high-risk containers may be inspected prior to their departure for U.S. destinations. According to the CSI strategic plan, strategic objectives for CSI include (1) pushing the United States’ zone of security beyond its physical borders to deter and combat the threat of terrorism; (2) targeting shipments for potential terrorists and terrorist weapons, through advanced and enhanced information and intelligence collection and analysis, and preventing those shipments from entering the United States; (3) enhancing homeland and border security while facilitating growth and economic development within the international trade community; and (4) utilizing available technologies to leverage resources and to conduct examinations of all containers posing a high risk for terrorist related activity. Another objective cited by CBP officials, although not included in the CSI strategic plan, is to raise the level of bilateral cooperation and international awareness regarding the need to secure global trade. 
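The tiered risk characterization that domestic targeters apply, as described above, can be sketched as a simple decision function. The score scale, cutoffs, and disposition labels below are illustrative assumptions; CBP's actual ATS rules and thresholds are not public.

```python
# Illustrative sketch of the tiered risk characterization described above.
# The score scale and cutoffs are hypothetical assumptions, not CBP's
# actual ATS rules.

def characterize_container(ats_score, high_cutoff=190, low_cutoff=50):
    """Map a hypothetical ATS risk score to a disposition."""
    if ats_score >= high_cutoff:
        return "inspect"       # high-risk: container is to be inspected
    if ats_score >= low_cutoff:
        return "research"      # medium-risk: intelligence review, NTC assistance
    return "release"           # low-risk: released without further review
```

In practice, as the report notes, targeters also weigh intelligence alerts, NTC research assistance, and their own experience and intuition before settling on a final risk characterization.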
To implement CSI, CBP negotiates and enters into bilateral arrangements with foreign governments, specifying the placement of CBP officials at foreign ports and the exchange of information between CBP and foreign customs administrations. CBP first solicited the participation of the 20 foreign ports that shipped the highest volume of ocean containers to the United States. These top 20 ports are located in 14 countries and regions and shipped a total of 66 percent of all containers that arrived in U.S. seaports in 2001. CBP has since expanded CSI to strategic ports, which may ship lesser amounts of cargo to the United States but may also have terrorism or geographical concerns. As shown in table 1, as of February 2005, CSI was operational at 34 ports, located in 17 countries or regions. For fiscal year 2004, the CSI budget was about $62 million, with a budget of about $126 million in fiscal year 2005 for the program. To participate in CSI, a host nation must meet several criteria. The host nation must utilize (1) a seaport that has regular, direct and substantial container traffic to ports in the United States; (2) customs staff with the authority and capability of inspecting cargo originating in or transiting through its country; and (3) nonintrusive inspection equipment with gamma- or X-ray capabilities and radiation detection equipment. Additionally, each potential CSI port must indicate a commitment to (1) establishing an automated risk management system; (2) sharing critical data, intelligence, and risk management information with CBP officials; (3) conducting a thorough port assessment to ascertain vulnerable links in a port’s infrastructure and commit to resolving those vulnerabilities; and (4) maintaining a program to prevent, identify, and combat breaches in employee integrity. 
To prepare for implementation of CSI, CBP sends an assessment team to each potential CSI port to collect information about the port’s physical and information infrastructure, the host country’s customs operations, and the port’s strategic significance to the United States. CBP then deploys a CSI team, which generally consists of three types of officials—special agents, targeters, and intelligence analysts. These officials come from either CBP or U.S. Immigration and Customs Enforcement (ICE). The team leader is a CBP officer or targeter who is assigned to serve as the immediate supervisor for all CSI team members and is responsible for coordinating with host government counterparts in the day-to-day operations. The team leader is also to prepare a weekly report on container targeting and inspection activity at the port. The targeters are team members responsible for targeting shipments and referring those shipments they determine are high-risk to host government officials for inspection. The targeter may also observe inspections of containers. The intelligence analyst is responsible for gathering information to support targeters in their efforts to target containers. In addition, the special agents are to coordinate all investigative activity resulting from CSI-related actions, as well as liaise with all appropriate U.S. embassy attachés. CSI Process for Targeting and Inspecting Cargo Containers Overseas Although the targeting of cargo at domestic ports is primarily dependent upon the ATS score, under CSI the targeting of cargo is largely dependent on CBP targeters’ review of the ATS score in conjunction with reviews of bills of lading, additional information provided by host government officials, and, in at least one country, a unique set of targeting rules developed jointly by CBP and host government officials.
As shown in figure 3, on the basis of the initial review, CBP officials are to either (1) categorize shipments as low-risk, in which case the container holding the shipment is loaded onto the departing vessel without being inspected, or (2) conduct further research in order to properly characterize the risk level of the shipment. Referrals of shipments to the host government for inspection are handled in one of three ways—shipments are inspected or inspection is either waived or denied. After receiving a referral for inspection from CSI teams, host customs officials are to review the bills of lading of the shipments and the reasons for the referrals to determine whether or not to inspect the shipments. Some host governments collect information on U.S.-bound shipments independent of CSI, which host officials also consider in decisions of whether to inspect the referred shipments. Finally, if the host government officials determine, on the basis of their review, that a shipment is not high-risk, they will deny inspection of the shipment. For any high-risk shipment for which an inspection is waived or denied, CSI teams are to place a domestic hold on the shipment, so that it will be inspected upon arrival at its U.S. destination. However, if CSI team members are adamant that a cargo container poses an imminent risk to the carrier or U.S. port of arrival but cannot otherwise convince the host officials to inspect the container, CSI team members are to contact and coordinate with the National Targeting Center to issue a do-not-load order for national security. According to CBP officials, this order advises the carrier that the specified container will not be permitted to be unloaded in the United States until a time when any associated imminent risk to the cargo container is neutralized. Once the risk is neutralized, the container is to be loaded back onto the carrier and placed on hold for a domestic examination. 
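The three referral dispositions and their follow-up actions described above can be sketched as follows. The outcome labels and the `imminent_risk` flag are illustrative simplifications of the process, not CBP data structures.

```python
# Simplified sketch of the referral dispositions described above. A referred
# shipment is inspected, waived, or denied; waived or denied shipments get a
# domestic hold, and an imminent-risk container triggers a do-not-load order
# coordinated through the National Targeting Center. Labels are illustrative.

def handle_referral(outcome, imminent_risk=False):
    if outcome == "inspected":
        return "cleared overseas"
    if outcome in ("waived", "denied"):
        if imminent_risk:
            return "do-not-load order"   # via the National Targeting Center
        return "domestic hold"           # inspect on arrival at the U.S. port
    raise ValueError(f"unknown outcome: {outcome!r}")
```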
According to CBP officials, this type of do-not-load order has been issued six times since the inception of CSI. As in the domestic inspection process, there are, generally, two types of CSI inspections—nonintrusive inspections and physical inspections. However, since CBP officials do not have the legal authority to inspect U.S.-bound containers in foreign ports, the host government customs officials are to conduct the inspections. According to CBP, in general, CBP officials are to observe the inspections and document inspection results. In addition, CBP officials, along with host government officials, may review the images produced by the X-ray or gamma-ray equipment to detect any anomalies that may indicate the presence of WMD. Also in collaboration with host government officials, CBP officials are to review the output produced by radiation detection devices to assess whether radioactive or nuclear material is present. On the basis of the results of the nonintrusive inspection, such as if an anomaly is apparent in the image of the container, the host government and CBP officials must decide whether to conduct a physical examination of the container. Our limited observations at three ports confirmed that host nation officials allowed CSI team members to observe the inspection process. CBP and host government officials at the four CSI ports we visited indicated that if WMD or related contraband were found during a CSI inspection, the host government would be responsible for taking appropriate enforcement measures and disposing of the hazardous material. While CBP Has Enhanced Its Ability to Target Containers Overseas, Limitations Remain We identified both positive and negative factors that affect CBP’s ability to target shipments at overseas seaports.
According to CBP officials, the CSI program has produced factors that contribute to CBP’s ability to target shipments at overseas seaports, including improved information sharing between the CSI teams and host government officials regarding U.S.- bound shipments and a heightened level of bilateral cooperation on and international awareness of the need for securing the global shipping system. However, we found several factors that may limit the program’s effectiveness at some ports, including (1) staffing imbalances at CSI ports and (2) weaknesses in one source of data CBP relies upon to target shipments. CSI Successes Have Enhanced CBP’s Ability to Target Containers Overseas One of the factors assisting with targeting of cargo is improved information sharing between U.S. and host customs officials. CBP has successfully negotiated agreements with several foreign governments to allow for the operation of CSI at their overseas seaports. Through September 11, 2004, CSI teams were able to target about 65 percent of the shipments coming through 25 CSI ports to determine whether they were at risk of containing WMD. This represented about 43 percent of all oceangoing cargo container traffic to the United States. As of January 2005, CBP had expanded the program to 34 operational ports, with plans to further expand the program to a total of 45 ports by the end of fiscal year 2005. According to CBP officials, the overseas presence of CBP officials has led to effective information sharing between the team and host government officials regarding targeting of U.S.-bound shipments. For example, CBP targeters at one of the ports we visited said that the presence of CBP officials at CSI ports fosters cooperation by host nation customs officials, such that more shipments characterized as high-risk and referred for inspection would be denied inspection by the host government if CBP officials were not present. 
According to CBP officials, information from host government officials on U.S.-bound shipments has been beneficial to CBP’s efforts to target shipments. They noted that the additional information provided by host governments can be utilized to address threats posed by U.S.-bound shipments. Additionally, CBP officials noted that the CSI teams can provide this information to NTC to incorporate into ATS to enhance CBP’s targeting capabilities. During one of our port visits, host government officials noted that providing information to CSI teams allows CBP officials to make more informed decisions about which shipments are high-risk, reducing the number of shipments deemed high-risk and referred for inspection by the host government. Additionally, CBP and host government officials at this same port told us that host government information also results in additional inspections of U.S.-bound containers, beyond those referred by the CSI team. For example, they said that in 2003, this host government identified and inspected 30 high-risk U.S.-bound containers that were not identified as high-risk by the CSI team. Another positive factor reported to us is the level of bilateral cooperation and international awareness regarding the need to secure global trade. With the discovery and seizure of shipments under CSI of automatic weapons, ammunition, and other falsely identified contraband, CBP noted that many customs services around the world without strong law enforcement capabilities are currently seeking additional legal authority to strengthen their ability to fight terrorism. For example, CBP noted that in June 2002, the World Customs Organization (WCO) passed a resolution to enable ports in all of its member nations to begin to develop outbound targeting programs consistent with the CSI model. 
In addition, in April 2004 the European Union and the Department of Homeland Security signed an agreement that calls for intensifying and broadening the agreement on customs cooperation and mutual assistance in customs matters, to include cooperation on container security and related matters. For example, the measures adopted in the agreement include the creation of an information exchange network, an agreement on minimum requirements applicable for European ports that wish to participate in CSI, and identification of best practices concerning security controls of international trade. CBP Staffing Imbalances Prevent Targeting of All Containers from CSI Ports One factor negatively affecting CBP’s ability to target containers is staffing imbalances across ports and shortages at some ports. Although CBP’s goal is to target all U.S.-bound containers at CSI ports before they depart for the United States, it has not been able to place enough staff at some CSI ports to do so. CBP has developed a CSI staffing model to determine the staff needed to target containers. However, at some CSI ports CBP has been unable to staff the CSI teams at the levels called for in the CSI staffing model. In commenting on a draft of this report, DHS noted that the 35 percent of U.S.-bound shipments that were not targeted by CSI teams were deemed low-risk by ATS and thus required no further review at CSI ports. However, our discussions with CSI teams at two of the four ports we visited indicated that those teams did not prioritize shipments for targeting based on ATS score but instead prioritized shipments by departure time. As a result, there is no assurance that all high-risk shipments are targeted at CSI ports. CBP has been unable to staff the CSI teams at the levels called for in the CSI staffing model because of diplomatic and practical considerations. 
CBP officials told us it is unrealistic to expect that CBP can place at every CSI port the number of targeters its staffing model indicates are needed to review all shipments. In terms of diplomatic considerations, the host government may limit the overall number of U.S. government employees to be stationed in the country and may restrict the size of the CSI team. In terms of practical considerations, the host governments may not have enough workspace available for CSI staff and may thus restrict the size of the CSI team. The U.S. Department of State would also have to agree to the size of the CSI teams, a decision that has to be balanced with the mission priorities of the embassy, the programmatic and administrative costs associated with increases in staffing, and security issues related to the number of Americans posted overseas. According to the State Department, the average cost of putting an American position overseas is approximately $430,000. One of the features of the CSI staffing model that may contribute to the staffing imbalance is its reliance on placing staff overseas at CSI ports. It does not consider whether some of the targeting functions could be performed in the United States. For example, the model does not consider what minimum number of targeters need to be physically located at CSI ports to carry out duties that require an overseas presence (such as coordinating with host government officials) as opposed to other duties that could be performed in the United States (such as reviewing manifests and databases). As we noted in our 2002 report on a staffing framework for use at U.S. embassies, federal agencies should consider options that improve operational efficiency and effectiveness and that minimize security risks, such as assessing which functions can occur in the United States, as part of their framework for determining the right number of staff to be placed overseas.
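The distinction the report draws, between duties that require an overseas presence and duties that could be performed domestically, can be illustrated with a toy staffing split. All workloads and the reviews-per-targeter rate below are invented numbers, not CBP's staffing model.

```python
# Toy illustration of the staffing-model gap described above: separating
# duties that must be performed at the CSI port from duties that could be
# handled domestically (e.g., at NTC). All numbers are invented.
import math

def split_staffing(duties, reviews_per_targeter=400):
    """duties: iterable of (weekly_workload, requires_overseas_presence)."""
    overseas = sum(w for w, on_site in duties if on_site)
    domestic = sum(w for w, on_site in duties if not on_site)
    return (math.ceil(overseas / reviews_per_targeter),
            math.ceil(domestic / reviews_per_targeter))

# Coordinating with host officials must happen at the port; manifest and
# database review need not.
overseas_staff, domestic_staff = split_staffing([(300, True), (1500, False)])
```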
CBP has acknowledged that it cannot fully implement the CSI staffing model and has supplemented staff at the CSI ports with domestic targeters at NTC. According to CBP officials, CSI teams may contact these NTC targeters and request that they help target specific shipments that CSI teams at the ports are unable to target. The NTC targeters, after targeting the shipments, are to notify the relevant CSI team with the results of their targeting, including whether the shipments are high-risk and should be referred to the host government for inspection. Although the NTC targeters are available to provide assistance to CSI teams 24 hours a day, 7 days a week, CBP officials noted that even with the addition of these targeters, the bureau has been unable to target every U.S.-bound shipment before it departed a CSI port. The use of domestic targeters demonstrates that CBP does not have to rely exclusively on overseas targeters as called for in its staffing model. Our observations at four CSI ports indicated that having CSI staff work directly with host nation customs officials was beneficial to both the targeting and the inspection processes. However, we also noted that the targeters’ work focused on targeting ATS findings, as well as consulting various automated databases, and did not include much interaction with host government officials. For example, at two of the ports we visited CBP officials told us that typically only one or two CSI team members dealt directly with host customs officials. In addition, while CBP officials could not provide us with port-specific or average costs of the CSI port teams, they stated that it was more expensive to post staff overseas than in the United States. One Source of Targeting Data Has Limitations Another factor that negatively affects CBP’s ability to target shipments is the existence of limitations in one data source used. For CSI, CBP relies on manifest information to assess the risk level of U.S.-bound shipments.
As we previously reported, terrorism experts, trade representatives, and CBP officials indicated that manifest data may contain unreliable information and are sometimes incomplete. We reported that manifests are produced by second-hand parties (ocean carriers), not the importers or exporters who have the most contact with and knowledge of the cargo. In addition, manifests have historically been used to disguise detailed information about containers’ contents, to prevent theft during transport of the cargo. This is particularly applicable to high-value products, such as electronics and apparel. In the same previous report, we also noted that manifest data can be amended up to 60 days after oceangoing vessels arrive at U.S. seaports, further limiting the use of manifest data for determining a definitive risk level before cargo arrives. CBP officials at CSI ports we visited indicated that despite the requirement that carriers submit accurate and complete manifests to CBP 24 hours prior to the cargo being loaded on the U.S.-bound vessel, some manifest data in ATS remain vague or incomplete. For example, a CBP official at one CSI port we visited said that in some cases the name of the freight forwarder was used in place of the actual names of the importer and consignee. Although CBP officials told us that the quality of the manifest data has improved, there is no method to routinely verify whether the manifest data accurately reflect the contents within the cargo container. CBP officials told us that to try to address the shortcomings of manifests, CSI teams consult other data to obtain information on shipments. As mentioned earlier, entry-level data are used. 
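The manifest-data weaknesses described above, such as vague cargo descriptions or a freight forwarder's name substituted for the importer and consignee, lend themselves to simple completeness checks. The field names and the vague-description list below are hypothetical; the report does not describe any such tool.

```python
# Hypothetical completeness check motivated by the manifest-data problems
# described above. Field names and the vague-description list are invented.

VAGUE_DESCRIPTIONS = {"freight all kinds", "general merchandise", "said to contain"}

def manifest_issues(manifest, known_forwarders):
    """Flag common gaps in a manifest record (a dict of field -> value)."""
    issues = [f"missing {f}" for f in ("importer", "consignee", "description")
              if not manifest.get(f)]
    if manifest.get("importer") in known_forwarders:
        issues.append("freight forwarder listed in place of importer")
    if manifest.get("description", "").strip().lower() in VAGUE_DESCRIPTIONS:
        issues.append("vague cargo description")
    return issues
```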
Some Containers Not Inspected for a Variety of Reasons Since the implementation of CSI through September 11, 2004, 28 percent (4,013) of containers referred to host government officials for inspection were not inspected, generally because of host government information that suggested the containers were not high-risk or operational limitations that prevented the containers from being inspected before they left the port. In 1 percent of these cases, host government officials denied inspections, generally because inspection requests were based on factors not related to security threats, such as drug smuggling. Containers designated as high-risk by CSI teams that are not inspected overseas are supposed to be referred for inspection upon arrival at the U.S. destination port. CBP officials noted that between July 2004 and September 2004, about 93 percent of shipments referred for domestic inspection were inspected at a U.S. port. CBP officials explained that some shipments designated as high-risk by CSI teams were not inspected domestically because inspectors at U.S. ports received additional information or entry information that lowered the risk characterization of the shipments or because the shipments remained aboard the carrier and were never offloaded at a U.S. port. For the 72 percent (10,343) of containers referred to host government officials for inspection that were inspected overseas, CBP officials told us there were some anomalies that led to law enforcement actions but that no WMD were discovered. However, considering that the inspection equipment used at CSI ports varies in detection capability and that there are no minimum requirements for the detection capability of equipment used for CSI, CBP has no absolute assurance that inspections conducted under CSI are effective at detecting and identifying WMD. 
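The referral figures cited above can be cross-checked arithmetically: 4,013 uninspected and 10,343 inspected containers imply roughly 14,356 total referrals, consistent with the 28 and 72 percent shares reported.

```python
# Cross-checking the referral statistics reported above (through
# September 11, 2004).
not_inspected = 4_013
inspected = 10_343
total_referred = not_inspected + inspected

assert total_referred == 14_356
assert round(100 * not_inspected / total_referred) == 28
assert round(100 * inspected / total_referred) == 72
```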
Some Containers Not Inspected Overseas because of Host Government Information Some of the containers referred for inspection were not inspected because of additional information obtained by host government officials that lowered the risk characterization of the container. An important aspect of CSI is the information host government officials can provide in determining whether a U.S.-bound container is at high risk of containing WMD and should be inspected. For example, at one CSI port we visited, the host customs official told us that although CBP officials referred a shipment for inspection because the area from which the shipment originated had known terrorist activity, the host government’s customs officials had a thorough working history with the importer and believed the shipment did not pose a threat. On the basis of this information, the CSI team and the host nation customs officials agreed that the shipment did not pose a threat and that inspection was not necessary. Some Containers Not Inspected Overseas because of Operational Limitations Some containers were not inspected at CSI ports because of operational limitations that were generally beyond the control of CBP. For example, since the program’s inception through September 11, 2004, some referred containers were not inspected at CSI ports because the containers had already been loaded on departing vessels. CBP officials and host government customs officials explained that a container may already be loaded on a vessel prior to its being referred for inspection because the amount of time the container actually stays in the port—dwell time—may be brief. CSI teams are not always able to target such containers and refer them for inspection before they are loaded. According to CBP and host government officials with whom we met, terminal operators intentionally schedule the arrival and departure of containers in order to minimize dwell time. 
However, CSI teams may not always know when containers are due for departure. Host government customs officials at one of the ports we visited said that until recently, the CSI team did not have access to the port schedules for U.S.-bound containers; therefore, team members could not prioritize the order in which they reviewed bills of lading for U.S.-bound shipments based on container dwell time. However, as of July 2004, the CSI team at this port gained access to port schedule information and now prioritizes its review of bills of lading based on container departure time. Host government officials noted that this practice decreases the number of containers waived for inspection. Host Nations Deny Inspections for Some Containers Referred by CSI Teams In addition to operational limitations that prevent referred containers from being inspected at CSI ports, host government officials have denied inspection for about 1 percent of the containers referred to them by CBP officials. According to CBP officials, the majority of these denials occurred early in the program’s operation as both CSI teams and host government officials implemented the program. For example, host government officials at one CSI port we visited indicated that some of these denials were for inspection requests based on factors not related to security threats, such as drug smuggling. They told us their rationale in denying these requests was that CBP could inspect these containers in the United States, and identifying customs violations was not the purpose of CSI. At another port we visited, CSI team officials told us that host customs officials initially denied inspections of shipments referred solely because of the shipment’s ATS score, preferring to instead have referrals that were further researched by the CSI team to help ensure that shipments were truly high-risk. As noted earlier, if the CSI team members are adamant that a cargo container poses an imminent risk to the conveyance or the U.S.
port of arrival, they can coordinate with the National Targeting Center to issue a do-not-load order to prevent the container from being placed on the ship. Containers Not Inspected Overseas Can Be Inspected on U.S. Arrival Containers with high-risk shipments that are not inspected overseas are supposed to be referred for inspection upon arrival at the U.S. destination port. Effective November 21, 2003, CSI team members were required to place domestic exam holds on high-risk containers that had not been inspected overseas. That is, the CSI team is supposed to request a domestic inspection for all containers for which an inspection was waived or denied by marking, in ATS, the container for a domestic hold and notifying the director of the U.S.-destination port. The CSI team is also supposed to request domestic exams for shipments that were inspected overseas but not to the satisfaction of the CSI team, such as if there was a disagreement over the interpretation of the X-ray image produced during the nonintrusive inspection or if the host nation was not willing to perform a physical exam after an anomaly was detected. However, not all shipments referred for a domestic inspection by CSI teams are inspected. Although CBP has not systematically tracked since the program’s inception whether containers placed on domestic hold are examined, according to CBP, it began tracking this information in July 2004. CBP officials told us that between July 2004 and September 2004, 93 percent of the shipments placed on domestic exam hold by CSI teams were actually inspected at a U.S. port. CBP explained that U.S. port officials did not inspect about 2 percent of the shipments placed on domestic exam hold during this time period because the shipments remained on board at the U.S. port or because additional intelligence information convinced them that the shipments no longer needed to be characterized as high-risk.
For the remaining 5 percent of shipments that were not inspected domestically, CBP officials told us the bureau cannot confirm what action was taken on these shipments because of data input errors by domestic inspectors. CBP officials also noted that they were unable to confirm whether any shipments placed on domestic exam hold prior to July 2004 were actually inspected upon arrival in the United States because of these same data input errors. In the Absence of Minimum Technical Requirements, Inspection Equipment Capabilities Vary As of September 11, 2004, host governments had inspected 72 percent (10,343) of all containers referred to them by CSI teams since the inception of the program. These containers were inspected using nonintrusive inspections and physical examinations. According to CBP and host government officials, the extent of physical examination depends on anomalies detected during the nonintrusive inspection. CBP officials also told us that no WMD have been discovered under CSI. There are two different types of radiation detection devices used at CSI ports to inspect cargo containers—radiation isotope identifier devices (RIID) and radiation portal monitors (RPM)—each with different detection and identification capabilities. While both devices can detect the presence of radioactive material, only the RIID can determine whether the radiation emitted by the material actually poses a threat or is a normal emission, such as that found in ceramic tile. In addition, a third type of radiation detection device is used at CSI ports to help ensure the safety of CSI team members—personal radiation detectors (PRD). According to radiation detection experts, PRDs are personal safety devices intended to protect against radiation exposure; they are not adequate as search instruments.
A scientist at the Department of Energy Los Alamos National Laboratory who was involved in the testing of radiation detection equipment said that PRDs have a limited range and are not designed to detect weapons-usable nuclear material. There are also various types of X-ray and gamma-ray imaging machines used at CSI ports to inspect cargo containers, and their detection and identification capabilities may vary. According to CBP, there are various brands of imaging machines used to conduct nonintrusive inspections at CSI ports. These brands of machines differ in their penetration capabilities, scan speed, and several other factors. Despite this variability in detection and inspection capability, CBP officials told us that the inspection equipment used at all CSI ports had inspection capabilities at least as good as the nonintrusive inspection equipment used by CBP at domestic ports. CBP officials told us that prior to establishing CSI at a foreign port, CBP officials conducted on-site assessments of the nonintrusive inspection equipment used at the port. More recently, CBP conducted an assessment of the capabilities of the equipment in use at each CSI port against the capabilities of one brand of equipment. This assessment indicated that with the exception of equipment used in one country, all equipment had capabilities that met or exceeded those of this brand of equipment. In addition, technologies to detect other WMD have limitations. According to CBP officials, the bureau has not established minimum technical requirements for the nonintrusive inspection equipment or radiation detection equipment that can be used as part of CSI because of sovereignty issues, as well as restrictions that prevent CBP from endorsing a particular brand of equipment. 
Although CBP cannot endorse a particular brand of equipment, the bureau could still establish general technical capability requirements for any equipment used under CSI similar to other general requirements CBP has for the program, such as the country committing to establishing an automated risk management system. Because the CSI inspection could be the only inspection of a container before it enters the interior of the United States, it is important that the nonintrusive inspection and radiation detection equipment used as part of CSI meets minimum technical requirements to provide some level of assurance of the likelihood that the equipment could detect the presence of WMD. CBP Has Made Progress Developing a Strategic Plan and Performance Measures for CSI, but Further Refinements Are Needed Although CBP has made some improvements in the management of CSI, we found that further refinements to the bureau’s management tools are needed to help achieve program goals. In July 2003, we recommended that CBP develop a strategic plan and performance measures, including outcome-oriented measures, for CSI. In February 2004, CBP finalized a strategic plan for CSI containing three of the six key elements identified by the Government Performance and Results Act of 1993 (GPRA) for an agency strategic plan: a mission statement, objectives, and implementation strategies. CBP officials told us the bureau plans to incorporate the remaining three elements into the CSI strategic plan, specifying how performance goals are related to general goals of the program, identifying key external factors that could affect program goals, and describing how the program will be evaluated. CBP has also made progress in the development of outcome-oriented performance measures for some objectives, particularly for the objective of increasing information sharing and collaboration among CSI and host country personnel. 
However, further refinements are needed to assess the effectiveness of the other program objectives, including CSI targeting and inspection activities. CBP Completed a Strategic Plan for CSI, but Three Key Elements Are Still under Development In July 2003, we recommended that CBP develop a strategic plan for CSI. CBP developed a strategic plan in February 2004. According to GPRA, executive agency strategic plans should include (1) a comprehensive mission statement; (2) general goals and objectives; (3) a description of how the general goals and objectives are to be achieved; (4) a description of how performance goals and measures are related to the general goals and objectives of the program; (5) an identification of key factors external to the agency and beyond its control that could affect the achievement of general goals and objectives; and (6) a description of the program evaluations. These six key elements are required for executive agency strategic plans and thus serve as a good baseline to measure other long-term planning efforts. In addition, we have found that high-quality plans include strategies to mitigate the effects of external factors, although such strategies are not a legislative requirement.
CSI’s strategic plan includes three of these key elements: a mission statement: “to prevent and deter terrorist use of maritime containers while facilitating movement of legitimate trade”; objectives, including (a) pushing the United States’ zone of security beyond its physical borders to deter and combat the threat of terrorism; (b) targeting shipments for potential terrorists and terrorist weapons, through advanced and enhanced information and intelligence collection and analysis, and preventing those shipments from entering the United States; (c) enhancing homeland and border security while facilitating growth and economic development within the international trade community; and (d) utilizing available technologies to leverage resources and to conduct examinations of all high-risk containers (another objective cited by CBP officials, although not included in the CSI strategic plan, is to raise the level of bilateral cooperation and international awareness regarding the need to secure global trade); and various descriptions of how general goals and objectives are to be achieved. However, CBP has not yet incorporated the other three key elements into its strategic plan. For example, the CSI strategic plan does not include a description of how performance goals and measures are related to program objectives. At the time the strategic plan was developed, CBP lacked performance goals and measures. We discuss performance measures in more detail in the next section. In addition, the CSI strategic plan does not identify external factors beyond the control of CBP that could affect the achievement of program objectives. Such external factors could include economic, demographic, social, technological, or environmental factors. 
Two external factors that could be addressed in the CSI strategic plan are the extent to which host governments can provide additional information to contribute to the targeting process and the various operational limitations that prevent all high-risk containers from being inspected overseas. In addition, the CSI strategic plan does not include a description of program evaluations. Although evaluations are not described in the CSI strategic plan, CBP conducts periodic evaluations of CSI ports in order to determine areas in which implementation of CSI can be improved and to determine whether CSI should continue to operate at that port. However, these evaluations do not employ a systematic methodology or identify the basis on which program success is determined. GPRA defines a program evaluation as an objective and formal assessment of the implementation, results, impact, or effects of a program or policy. Program evaluations are used to ensure the validity and reasonableness of program goals and strategies, as well as identify factors likely to affect program performance. Specifically, CBP has not identified and planned which CSI elements will be assessed at each port; rather, assessment topics are generated ad hoc. In addition, assessment topics differ over time, preventing CBP from determining the extent to which CSI teams addressed issues raised in previous evaluations. For example, in its July 2003 evaluation of one CSI port, CBP’s Office of International Affairs identified the following problems: (1) lack of information available to the intelligence research specialist, (2) the need to make better information available to CSI team members, and (3) the lack of follow-through on shipments through CSI ports that were referred for domestic exam. However, none of these issues was discussed in the Office of International Affairs’ next evaluation of this port in December 2003. 
Similarly, the assessment topics for CSI port evaluations also differ across ports, making it difficult to make comparisons across ports. In February 2005, CBP officials told us that CBP is revising the CSI strategic plan to address the elements we raise in this report. While it appears that the bureau’s initial efforts in this area meet the intent of our prior recommendation to develop a strategic plan for CSI, we cannot determine the effectiveness of further revisions to the plan without first reviewing and evaluating them. We will continue to monitor CBP’s efforts in this area. CBP Has Developed Outcome-Oriented Performance Measures for Some Program Objectives In July 2003, we recommended that CBP expand efforts already initiated to develop performance measures for CSI that include outcome-oriented indicators. Until recently, CBP based the performance of CSI on program outputs such as (1) the number and percentage of bills of lading reviewed, further researched, referred for inspection, and actually inspected, and (2) the number of countries and ports participating in CSI. As of January 2005, CBP had developed 11 performance indicators for CSI, 2 of which it identified as outcome-oriented: (1) the number of foreign mitigated examinations and (2) the percentage of worldwide U.S.-destined containers processed through CSI ports. As indicated in table 2, both outcome indicators are used to assess CBP’s progress in meeting its objective of increasing information sharing and collaboration among CBP officials and host country personnel. However, the way in which one of these indicators is measured needs refinement. The measure for the number of foreign mitigated examinations is the number of shipments referred to host governments that were not, for a variety of reasons, inspected overseas. Specifically, according to CBP, an increase in the number of examinations waived or denied suggests an increase in the number of unnecessary examinations that were prevented. 
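As described above, CBP's indicator simply counts referred shipments that were not inspected overseas. A minimal sketch of that calculation follows; the status labels are assumptions for illustration, not CBP's actual data categories:

```python
def foreign_mitigated_exams(referrals):
    """Count referred shipments that were not inspected overseas
    (waived or denied), as the indicator described above does.
    Status labels are illustrative, not CBP's actual schema."""
    return sum(1 for r in referrals if r["status"] in {"waived", "denied"})

# Hypothetical referral records.
referrals = [
    {"id": "B1", "status": "inspected"},
    {"id": "B2", "status": "waived"},
    {"id": "B3", "status": "denied"},
    {"id": "B4", "status": "inspected"},
]
print(foreign_mitigated_exams(referrals))  # 2
```

Note that this count lumps together every reason a referral went uninspected, which is central to the measurement problem discussed in this report.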
However, the number of examinations waived or denied by host nations is not an appropriate measure of the prevention of unnecessary exams. A shipment is inspected unnecessarily if, when provided with additional information on the shipment, the CSI team and the host nation would have agreed that the shipment was not high-risk and, therefore, the inspection should not have taken place. However, if an inspection is waived because of operational limitations, the implication may not be that the CSI team thinks the inspection is unnecessary. To the contrary, the CSI team and host government may agree that the shipment should be inspected. Similarly, a host nation's denial of an inspection does not imply that the CSI team believes the inspection is unnecessary; by definition, when a referral for inspection is categorized as denied, the CSI team believes the shipment should be inspected, but the host government refuses to conduct the inspection. In response to our review, CBP officials acknowledged that the inclusion of waivers because of operational limitations or denials of inspections in this measure was inappropriate. CBP noted that each of the performance measures for assessing information sharing and collaboration with host nations will be pilot-tested at numerous CSI ports to assess their feasibility, utility, relevance, and the likelihood that they will produce actionable information. According to CBP, the measures may be revised based on the evaluation of the pilot to improve their effectiveness in assessing program performance and outcomes. According to Office of Management and Budget (OMB) and CBP officials, developing outcome-oriented performance measures for programs that aim to deter or prevent specific behaviors is challenging. For example, one of CSI's objectives is to deter terrorists' use of oceangoing cargo containers.
However, according to host government officials at one port we visited and CBP officials, it is difficult to develop a meaningful measure for the extent to which implementation of CSI has discouraged terrorists from using oceangoing cargo containers to smuggle WMD into the United States. In January 2005, CBP developed a performance indicator to measure CSI's progress in preventing terrorists' use of oceangoing cargo containers that measures the amount of terrorist contraband, illegal drugs, and other illegal activity found during CSI inspections. However, this indicator may not be a meaningful measure of deterrence of terrorist activity, since the inclusion of narcotics is not relevant to the program's objectives, and according to CBP, no terrorist weapons or weapons material have been detected prior to or during the implementation of CSI. According to OMB, when agencies face difficulty in developing outcome-based performance measures, they are encouraged to develop proxy measures. Proxy measures are used to assess the effectiveness of program functions, such as the targeting and inspection processes of CSI, rather than directly assess the effectiveness of the program. For example, CBP could develop a proxy measure associated with targeting and inspection, such as the percentage of containers that were randomly inspected domestically, were not characterized by CBP officials as high-risk, but actually contained WMD. That is, CBP could use random inspections to measure whether containers from CSI ports that were not identified as high-risk actually contained WMD and, therefore, should have been identified as high-risk initially. According to terrorism experts and representatives of the international trade community, random inspections could be an effective practice to supplement and test CBP's targeting and inspection processes.
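Such a proxy could be computed directly from random-inspection results. The sketch below uses hypothetical field names and data; it illustrates the calculation only and is not CBP's method:

```python
def targeting_miss_rate(random_inspections):
    """Proxy measure sketch: among containers NOT flagged high-risk
    that were nonetheless randomly inspected, the percentage that
    actually contained material of concern -- shipments the targeting
    process should have caught. Field names are illustrative."""
    low_risk = [r for r in random_inspections if not r["flagged_high_risk"]]
    misses = sum(r["contained_wmd"] for r in low_risk)
    return 100 * misses / len(low_risk)

# Hypothetical random-inspection results.
sample = [
    {"flagged_high_risk": False, "contained_wmd": False},
    {"flagged_high_risk": False, "contained_wmd": False},
    {"flagged_high_risk": False, "contained_wmd": False},
    {"flagged_high_risk": False, "contained_wmd": True},
    {"flagged_high_risk": True,  "contained_wmd": False},
]
print(targeting_miss_rate(sample))  # 25.0
```

A rate above zero would indicate that the targeting process failed to flag containers it should have, which is the program function the proxy is meant to test.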
Terrorism experts and shipping industry representatives also suggest that staging covert, simulated terrorist events could test the effectiveness of both the targeting and inspection processes of CSI. Simulated events could include smuggling fake WMD into the United States using an oceangoing cargo container. Such events could help determine whether the targeting procedures led to the identification of the container as high-risk and whether any subsequent inspection activities actually detected the fake WMD. CBP could, therefore, develop proxy measures associated with this activity for CSI, such as the percentage of staged containers that were identified as high-risk and the percentage of staged containers for which the fake WMD was detected during the inspection process. In response to our prior work on container security, CBP officials agreed with our recommendation that containers be subject to such tests. CSI lacks performance goals and measures for its objective of enhancing homeland and border security while facilitating growth and economic development within the international trade community. Regarding the enhancement of homeland and border security, there are no performance goals for CSI. According to host government officials at CSI ports we visited and shipping industry representatives with whom we met, CSI has resulted in increased international awareness of supply chain security. Officials from the World Customs Organization predicted that as more countries partner with CBP through CSI, there will be increased consistency in the way in which the supply chain and ports are secured worldwide. One WCO official also stated that CBP’s efforts through initiatives such as CSI provide guidance for developing countries on how to improve their supply chain security efforts. 
While these testimonials help identify some benefits of CSI, CBP does not have performance indicators and goals to actually measure the extent to which the program has resulted in enhanced homeland and border security. Regarding facilitating economic growth, there are also no performance measures for CSI. According to host government officials with whom we met at one CSI port, they are willing to participate in CSI as long as the program does not disrupt the flow of trade. An example of such a disruption would be the delayed departure of a vessel because of a CSI inspection or the instruction not to load a container on a departing vessel because of a CSI inspection. Discussions with CBP and host government officials and representatives of the shipping industry indicate that CBP has been successful in not disrupting the flow of trade through CSI. However, CBP has not developed associated performance goals and measures to demonstrate its reported success in achieving this objective. In commenting on a draft of this report, DHS noted that CBP is continuing to refine existing performance measures and develop new performance measures for its program goals. For example, CBP was developing a cost efficiency measure to measure the cost of work at a port and to contribute to staffing decisions. CBP believes that its continued revisions to the CSI strategic plan have also allowed CSI staff to refine performance measures and the bureau’s data collection methodology. Conclusions CBP has made progress in its implementation of CSI, but the program could be further improved by taking steps to help ensure its effectiveness in preventing WMD from entering the United States via cargo containers. 
First, CBP’s inability to staff all CSI ports to the level suggested by its staffing model and the model’s assumption that all staff should be located at the CSI ports have limited the program’s ability to target potentially high-risk shipments at some foreign seaports before they depart for the United States. This problem may be exacerbated as CBP continues to expand CSI to additional overseas seaports. Second, without minimum technical requirements for the nonintrusive inspection equipment used as part of CSI, CBP has limited assurance that the equipment in use can successfully detect all WMD. While we recognize that establishing such requirements may be a difficult issue to address, it is important that CBP establish them because the CSI inspection may be the only inspection of some containers before they enter the interior of the United States. Third, CBP has developed a strategic plan for the CSI program and indicated that it will refine the plan to include key elements described in GPRA. Although we are not making a recommendation related to its strategic plan, given the importance of having an effective strategic plan for the program, we will continue to monitor the bureau’s progress in refining the plan. Finally, while CSI has apparently resulted in some benefits, such as cooperation with foreign governments and enhanced international awareness of container security, CBP has not developed outcome-based performance measures or proxy measures for all of its program objectives. Without outcome-based performance measures on which to base program evaluations, CBP will have difficulties assessing the effectiveness of CSI as a homeland security program. Recommendations for Executive Action To help ensure that the objectives of CSI are achieved, we recommend that the Secretary of the Department of Homeland Security direct the Commissioner of U.S. 
Customs and Border Protection to take the following three actions: (1) revise the CSI staffing model to consider (a) what functions need to be performed at CSI ports and what functions can be performed in the United States, (b) the optimum levels of staff needed at CSI ports to maximize the benefits of targeting and inspection activities in conjunction with host nation customs officials, and (c) the cost of locating targeters overseas at CSI ports instead of in the United States; (2) establish minimum technical requirements for the capabilities of nonintrusive inspection equipment at CSI ports, including imaging and radiation detection devices, to help ensure that all equipment used can detect WMD, while considering sovereignty issues with participating countries and the need not to endorse particular companies; and (3) develop performance measures that include outcome-based measures and performance targets (or proxies as appropriate) to track the program's progress in meeting all of its objectives. Agency Comments and Our Evaluation We provided a draft of this report to the Secretary of DHS and the Department of State for comment. We received comments from the DHS Acting Director, Departmental Liaison, that are reprinted in appendix III. DHS generally agreed with our recommendations and outlined actions CBP either had taken or was planning to take to implement them. The Department of State had no comments. CBP agreed with our recommendation on CSI's staffing model and said that modifications to the model would allow for program objectives to be achieved in a cost-effective manner. Specifically, CBP said that it would evaluate the minimum level of staff needed at CSI ports to maintain an ongoing dialogue with host nation officials, as well as assess the staffing levels needed domestically to support CSI activities. If properly implemented, these actions should address the intent of this recommendation.
In addressing our recommendation to establish minimum technical requirements for the capabilities of the nonintrusive inspection equipment used at CSI ports, CBP agreed to evaluate the feasibility of making such requirements for the imaging and radiation detection devices in use at CSI ports but did not commit to implement our recommendation. CBP noted that because host governments purchase the equipment for use at CSI ports, a legal issue may exist regarding CBP’s ability to impose such requirements. CBP noted it would also seek comment and advice from other U.S. government agencies that would be affected by such a decision. Although CBP cannot endorse a particular brand of equipment, the bureau could still establish general technical capability requirements for any equipment used under CSI similar to other general requirements CBP has for the program, such as the country committing to establishing an automated risk management system. Because the CSI inspection could be the only inspection of a container before it enters the interior of the United States, it is important that the nonintrusive inspection and radiation detection equipment used as part of CSI meet minimum technical requirements to provide some level of assurance of the likelihood that the equipment could detect the presence of WMD. CBP agreed with our recommendation on developing performance measures, noting that it would continue to refine, evaluate, and implement any and all performance measures needed to track the progress in meeting all of CSI’s objectives. CBP noted that this would be an ongoing activity. If properly implemented, these plans should help address the intent of this recommendation. DHS also offered technical comments and clarifications, which we have considered and incorporated where appropriate. If you or your staffs have any questions about this report, please contact me at (202) 512-8777 or at stanar@gao.gov. Key contributors to this report are listed in appendix IV. 
This report will also be available at no charge on the GAO Web site at http://www.gao.gov. Appendix I: Objectives, Scope, and Methodology Objectives We addressed the following issues regarding the U.S. Customs and Border Protection’s (CBP) Container Security Initiative (CSI): What factors affect CBP’s ability to target high-risk shipments at overseas seaports? Under CSI, to what extent have high-risk containers been inspected overseas prior to their arrival at U.S. destinations? To what extent has CBP developed strategies and related management tools for achieving the program’s goals? Scope and Methodology To address our first issue—what factors affect CBP’s ability to target shipments at overseas seaports—we first reviewed relevant GAO reports on CBP’s Automated Targeting System (ATS) and CSI. We then met with CBP headquarters officials to hold discussions and review documents related to CSI’s overall targeting strategy, criteria for identifying high-risk containers, efforts to evaluate the program, efforts to refine targeting, training provided to CSI targeters, and the criteria for staffing at CSI ports. We also visited the National Targeting Center, which serves as CBP’s central targeting facility related to terrorism. At this facility, we met with cognizant officials and discussed ATS categorization of containers by risk level, how cargo containers’ scores are transmitted to targeters at CSI ports, the training provided to the ATS targeters, the types of information and intelligence utilized by targeters, and recent and planned refinements to ATS. We also met with officials from the European Commission and the World Customs Organization (WCO) in Brussels, Belgium, and discussed how the CSI program has been implemented and its impact on container security. Also related to this first issue, we visited four overseas CSI ports. 
We selected these ports on the basis of the volume of containers shipped to the United States, geographic dispersion, and time the CSI team was in operation. At these ports, we met with the CSI teams to discuss and review documents related to the overall targeting process, the types of information used in the targeting process, efforts to evaluate the targeting process, the impact other CBP initiatives may have had on the targeting process, and requests for information to host governments. We also observed operations at each of the ports, including targeters reviewing manifest information. To address our second issue—to what extent have high-risk containers been inspected overseas prior to their arrival at U.S. destinations—we met with officials from CBP headquarters and CSI port teams to hold discussions and review documents related to the overall inspection process, types of inspections, inspection equipment used, statistics on inspections conducted at CSI ports, and levels of cooperation with host governments. At the four ports we visited, we also met with foreign government customs officials to discuss the role of the CSI teams in the inspection process, the criteria they use in deciding whether to inspect a container that was referred for inspection by the CSI team, the criteria they use in deciding the type of inspection to be conducted, the procedures they use to safeguard containers once inspected, and the types of inspection equipment they used. To address our third issue—to what extent has CBP developed clearly formulated and documented strategies for achieving the program’s goals— we reviewed GAO reports examining management factors that were necessary components for the successful management of cabinet departments, agencies, and, by extension, individual programs. Specifically, we focused our review on two management factors—the development of performance measures and strategic planning—because of their general importance in the literature. 
We reviewed Office of Management and Budget (OMB) and Government Performance and Results Act of 1993 (GPRA) guidance on performance measures and goals to assess the extent CBP has incorporated them into the CSI program. We also discussed CSI strategies for achieving program goals with officials from CBP headquarters, CSI teams, and host governments. We also obtained and reviewed CBP evaluations of CSI port teams to assess the methodology used to conduct evaluations. We conducted our work from February 2004 through February 2005 in accordance with generally accepted government auditing standards. Data Reliability To assess the reliability of CBP’s data on the number of shipments and containers subject to targeting and inspection under CSI, we (1) obtained source data on targeting and inspection activity for two 1-week periods from CSI teams at two ports, (2) compared the source data with the data generated by CBP’s Automated Targeting System (ATS) for the same 2-week period, (3) discussed discrepancies between the source data and ATS data with CBP officials at these ports, and (4) obtained CBP headquarters’ responses to our questionnaire regarding the reliability of ATS and the data that are produced by the system. Although our initial reliability testing indicated that there were some inconsistencies between the source data and the data generated by ATS, generally because of human input error, we were able to work with CSI team officials to resolve most of the discrepancies. In addition, the differences between the source data and ATS data were so small that the results of our analysis, at least for this 2-week period, would have remained the same regardless of which data we used. Therefore, we determined that the CSI targeting and inspection data generated by ATS are sufficiently reliable for use in supporting our findings regarding the extent to which high-risk containerized shipments are identified and inspected prior to arrival at U.S. destinations. 
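A reconciliation of this kind, matching each source record to its ATS counterpart and flagging disposition mismatches and records present in only one data set, can be sketched as follows (field names are illustrative placeholders, not the actual ATS schema):

```python
def reconcile(source, ats, key="bill_of_lading"):
    """Compare the CSI team's source logs with ATS-generated data.
    Returns (mismatched dispositions, records only in source,
    records only in ATS). Field names are illustrative."""
    src = {r[key]: r["disposition"] for r in source}
    sys = {r[key]: r["disposition"] for r in ats}
    mismatched = sorted(k for k in src.keys() & sys.keys() if src[k] != sys[k])
    only_source = sorted(src.keys() - sys.keys())
    only_ats = sorted(sys.keys() - src.keys())
    return mismatched, only_source, only_ats

# Hypothetical records for one review period.
source = [
    {"bill_of_lading": "A1", "disposition": "inspected"},
    {"bill_of_lading": "A2", "disposition": "waived"},
    {"bill_of_lading": "A3", "disposition": "inspected"},
]
ats = [
    {"bill_of_lading": "A1", "disposition": "inspected"},
    {"bill_of_lading": "A2", "disposition": "inspected"},  # human input error
    {"bill_of_lading": "A4", "disposition": "waived"},
]
print(reconcile(source, ats))  # (['A2'], ['A3'], ['A4'])
```

Discrepancies of the kind this sketch flags, generally caused by human input error, are what the review resolved with CSI team officials.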
Appendix II: CSI Performance Measures, as of January 2005
- Number of foreign mitigated examinations: the number of examinations waived for a variety of reasons. (2,416 examinations, cumulative)
- Percentage of worldwide U.S.-destined containers processed through CSI ports: the annual volume of U.S.-destined containers processed through all CSI ports prior to lading, divided by the annual worldwide number of U.S.-destined containers.
- Memorandums of information received (MOIR): the number of MOIRs, which are narratives of intelligence gathered from CSI foreign sources.
- Number of CSI ports: the total number of ports where CSI has been implemented.
- Number of positive findings, by category: the number and type of "positive findings" documented because of CSI participation. Positive findings occur when examinations performed on containers yield a positive result such as implements of terror, narcotics, forced labor, uninvoiced or unmanifested goods, restricted merchandise, hazardous materials, or other results. Note that the CSI goal is to find implements of terror; other categories are peripheral benefits.
- Number of investigative cases opened: the number of cases opened either in the United States or at a foreign location because of intelligence gathered by CSI staff at foreign ports.
- Average cost per CSI port: includes site assessments and certifications, telecom circuit installation, local area network (LAN) and office equipment, commercial off-the-shelf software, office furniture, radiation isotope identification devices (RIID), purchase of automobiles, initial lease and utilities costs, and initial shipping costs.
- Number of declarations of principles signed with countries where CSI ports are planned.
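The coverage measure above is a simple ratio. A sketch with placeholder volumes (the figures below are illustrative, not CBP data):

```python
def csi_coverage_pct(csi_port_volume, worldwide_volume):
    """Share of worldwide U.S.-destined containers processed through
    CSI ports prior to lading. Volumes are illustrative placeholders."""
    return 100 * csi_port_volume / worldwide_volume

print(csi_coverage_pct(4_500_000, 9_000_000))  # 50.0
```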
Appendix III: Comments from the Department of Homeland Security
Appendix IV: GAO Contacts and Staff Acknowledgments
In addition to those named above, Mark Abraham, Kristy N. Brown, Kathryn E. Godfrey, Stanley J. Kostyla, and Deena D. Richart made key contributions to this report.
Related GAO Products
Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404 (Washington, D.C.: March 11, 2005).
Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170 (Washington, D.C.: January 14, 2005).
Port Security: Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106 (Washington, D.C.: December 10, 2004).
Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062 (Washington, D.C.: September 30, 2004).
Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838 (Washington, D.C.: June 30, 2004).
Border Security: Agencies Need to Better Coordinate Their Strategies and Operations on Federal Lands. GAO-04-590 (Washington, D.C.: June 16, 2004).
Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T (Washington, D.C.: March 31, 2004).
Rail Security: Some Actions Taken to Enhance Passenger and Freight Rail Security, but Significant Challenges Remain. GAO-04-598T (Washington, D.C.: March 23, 2004).
Department of Homeland Security, Bureau of Customs and Border Protection: Required Advance Electronic Presentation of Cargo Information. GAO-04-319R (Washington, D.C.: December 18, 2003).
Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T (Washington, D.C.: December 16, 2003).
Posthearing Questions Related to Aviation and Port Security. GAO-04-315R (Washington, D.C.: December 12, 2003).
Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083 (Washington, D.C.: September 19, 2003).
Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T (Washington, D.C.: September 9, 2003).
Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770 (Washington, D.C.: July 25, 2003).
Homeland Security: Challenges Facing the Department of Homeland Security in Balancing Its Border Security and Trade Facilitation Missions. GAO-03-902T (Washington, D.C.: June 16, 2003).
Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843 (Washington, D.C.: June 30, 2003).
Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T (Washington, D.C.: April 1, 2003).
Border Security: Challenges in Implementing Border Technology. GAO-03-546T (Washington, D.C.: March 12, 2003).
Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T (Washington, D.C.: October 17, 2002).
Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T (Washington, D.C.: August 5, 2002).
In January 2002, U.S. Customs and Border Protection (CBP) initiated the Container Security Initiative (CSI) to address the threat that terrorists might use maritime cargo containers to ship weapons of mass destruction. Under CSI, CBP is to target and inspect high-risk cargo shipments at foreign seaports before they leave for destinations in the United States. In July 2003, GAO reported that CSI had management challenges that limited its effectiveness. Given these challenges and in light of plans to expand the program, GAO examined selected aspects of the program's operation, including the (1) factors that affect CBP's ability to target shipments at foreign seaports, (2) extent to which high-risk containers have actually been inspected overseas, and (3) extent to which CBP formulated and documented strategies for achieving the program's goals. Some of the positive factors that have affected CBP's ability to target shipments overseas are improved information sharing between U.S. and foreign customs staff and a heightened level of bilateral cooperation and international awareness of the need to secure the whole global shipping system. Although the program aims to target all U.S.-bound shipments from CSI ports, it has been unable to do so because of staffing imbalances. CBP has developed a staffing model to determine staffing needs but has been unable to fully staff some ports because of diplomatic considerations (e.g., the need for host government permission) and practical considerations (e.g., workspace constraints). As a result, 35 percent of these shipments were not targeted and were therefore not subject to inspection overseas. In addition, the staffing model's reliance on placing staff at CSI ports rather than considering whether some of the targeting functions could be performed in the United States limits the program's operational efficiency and effectiveness. 
CBP has not established minimum technical requirements for the detection capability of the nonintrusive inspection and radiation detection equipment used as part of CSI. Ports participating in CSI use various types of nonintrusive inspection equipment to inspect containers, and the detection and identification capabilities of such equipment can vary. In addition, technologies to detect other weapons of mass destruction have limitations. Given these conditions, CBP has limited assurance that inspections conducted under CSI are effective at detecting and identifying terrorist weapons of mass destruction. Although CBP has made some improvements in the management of CSI, we found that further refinements to the bureau's management tools are needed to help achieve program objectives. In July 2003, we recommended that CBP develop a strategic plan and performance measures, including outcome-oriented measures, for CSI. CBP developed a strategic plan for CSI in February 2004 that contains three of the six key elements required for agency strategic plans, and CBP officials told us they continue to develop the other three elements. While it appears that the bureau's efforts in this area meet the intent of our prior recommendation to develop a strategic plan for CSI, we will continue to monitor progress in this area. CBP has also made progress in the development of outcome-oriented performance measures, particularly for the program objective of increasing information sharing and collaboration among CSI and host country personnel. However, CBP continues to face challenges in developing performance measures to assess the effectiveness of CSI targeting and inspection activities. As a result, it is difficult to assess progress made in CSI operations over time or to compare CSI operations across ports.
Background
For the purpose of this assignment, we defined “concessions” as private or public entities using federally owned or leased property under a government contract, permit, license, or other similar agreement to provide recreation, food, or other services to either the general public or specific individuals. Concessions services included, but were not limited to, food operations, vending machines, retail shops, public pay telephones, barber/beauty shops, transportation, lodging, marinas, and campgrounds. We excluded day care centers, employee association stores, and services provided by visually impaired persons under the Randolph-Sheppard Act. Under concessions agreements with federal agencies, private parties and nonfederal public entities, such as local governments, supply many of the services and accommodations provided on federal property to the public. Each year, millions of people use the services made possible through these agreements. Some agreements are long-term and some are short-term. A long-term agreement, which generally involves a large financial investment by the concessioner for construction or capital improvements, may last up to 50 years. A short-term permit or license, which generally requires little or no financial investment in facilities by the concessioner, may last up to 5 years. Each agency is responsible for developing, implementing, and monitoring its concessions program to ensure that the federal government receives a fair return from the partnership. No overall federal concessions policy exists. In exchange for use of federal property, concessioners pay the government a concessions, franchise, permit, or license fee. Most agreements provide that the concessioner will pay the government either a flat fee or a percentage of gross revenues.
The primary purpose of the six land management agencies' concessions programs is to encourage operation of a public-private partnership to provide recreation for visitors to national parks, forests, and other public lands and waters. Concessions services include food service, retail sales, ski resorts, lodging, and marinas. Nonland management agencies, such as the General Services Administration and the U.S. Postal Service, provide concessions services either for all federal employees or for their own employees and the users of their services. The primary purpose of their concessions programs is to provide high-quality merchandise and convenient services at reasonable prices. Their concessions services include food service, retail sales such as gift shops, vending machines, and coin-operated photocopiers. Since the mid-1970s, we have conducted several reviews of the concessions programs in the land management agencies. (See the list of Related GAO Products on pp. 43 and 44.)
Objectives, Scope, and Methodology
The objectives of our review were to determine (1) the extent of concessions operations in the federal government, (2) the rate of return received by the federal government from concessions and the factors affecting the rate of return, (3) how the federal government's rate of return compared to other governments' rates of return, and (4) the extent of agencies' nonconcessions activities that generated income in fiscal year 1994 and whether they offered opportunities for the agencies to handle them like concessions.
To accomplish objectives one, two, and four, we (1) sent three questionnaires to the 75 federal government entities listed in appendix I, requesting general information on all concessions operations and detailed agreement-specific information (including copies of the concessions agreements) on each agreement that was either initiated or extended during fiscal year 1994; (2) interviewed federal concessions management staff at both the headquarters and field levels as well as nonprofit organizations interested in concessions issues; and (3) obtained and reviewed the laws, regulations, and policies for each federal entity's concessions operations. Further, if concessions services in agencies were provided under agreements with GSA, we asked agencies not to include these operations in their responses; GSA agreed to include these concessions in its response. Our information on the concessions agreements comes only from the agencies' questionnaire responses. However, we checked selected responses against copies of the concessions agreements sent to us, checked agency totals for concessions revenues and fees against our prior reports, and followed up with agency staff in selected cases to clarify their responses. Because agencies did not collect revenue data on all concessions, we could calculate the rate of return only for those agreements where agencies reported both the revenues and the fees. As shown in appendix III, we used the detailed information agencies provided on agreements, either initiated or extended in fiscal year 1994, that contained both revenue and fee data. From this reported information, we calculated the rate of return by dividing the sum of concessions fees and special-account deposits by gross revenues. To determine how the federal government's rate of return compared with that of other governments, we also sent a questionnaire to five state governments and Canada.
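The rate-of-return calculation described above can be sketched as follows. The record layout and field names are illustrative assumptions, not GAO's actual data structures; as in the analysis, only agreements reporting both revenues and fees enter the calculation:

```python
# Illustrative sketch of the rate-of-return calculation described above.
# Agreement records and field names are hypothetical. Agreements that
# did not report both gross revenues and fees are excluded, mirroring
# the report's methodology.

def rate_of_return(agreements):
    """Return (fees + special-account deposits) / gross revenues, in percent."""
    usable = [a for a in agreements
              if a.get("gross_revenue") and a.get("fees") is not None]
    total_revenue = sum(a["gross_revenue"] for a in usable)
    total_receipts = sum(a["fees"] + a.get("special_account", 0.0) for a in usable)
    return 100.0 * total_receipts / total_revenue

sample = [
    {"gross_revenue": 1_000_000, "fees": 25_000, "special_account": 10_000},
    {"gross_revenue": 500_000, "fees": 19_000},
    {"gross_revenue": 0, "fees": 0},  # no reported revenue: excluded
]
print(round(rate_of_return(sample), 1))  # → 3.6
```

The resulting figure is the percentage of reported concessioners' gross revenues that the government is to receive, which is how the 3.6 percent overall rate and the per-category rates later in the report should be read.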
We selected the five states—California, Maryland, Michigan, Missouri, and Tennessee—on the basis of each government’s rate of return that was collected by the National Parks and Conservation Association. This Association is a private, nonprofit citizens organization organized to protect, preserve, and enhance the U.S. National Park System. We also visited two of the states—Maryland and Tennessee—and met with their key concessions managers. We selected Canada to obtain information on another country’s experience. We did our review from January 1995 to November 1995, in accordance with generally accepted government auditing standards. We did our work in Washington, D.C.; Nashville, Tennessee; and Annapolis, Maryland. Because it was impractical for us to obtain comments from all 75 agencies, we provided copies of a draft of this report to the heads of the departments of the 6 land management agencies for comment. The six agencies accounted for over 92 percent of the concessions. On March 25, 1996, we discussed the draft report with officials designated by the departments. Their comments are discussed on pages 16 and 17. Appendix VIII contains a more detailed description of our objectives, scope, and methodology. Extent of Concessions Operations in the Federal Government As shown in appendix I, 27 of the 75 federal departments and agencies surveyed reported having concessions agreements in effect during fiscal year 1994. Forty-two respondents (agencies or agency components) provided concessions data because, as shown in appendix II, some agencies, such as the Department of the Interior, had more than 1 component managing concessions agreements. The 42 agencies or agency components reported that they had 11,263 concessions agreements in effect during fiscal year 1994. Reported data showed that concessions operations ranged from small fishing guide services generating annual revenues of less than $1,000 to multimillion-dollar recreation corporations. 
As shown in table 1, 10,427 (over 92 percent) of the 11,263 reported concessions agreements were with the six land management agencies. The National Park Service and the Forest Service concessions operations accounted for about 90 percent of the six land management agencies' reported concessioners' gross revenues and fees paid to the government. The agencies reported concessioners' gross revenues of $2.2 billion in fiscal year 1994. However, the actual amount of gross revenues was greater because some agencies did not collect gross revenue data from all concessioners. Eight of the 42 agencies or agency components with concessions reported that some concessioners were not required to report revenues, particularly those paying a flat concessions fee. Some agency officials said they had no requirement to track concessioners' revenues when concessioners paid a flat concessions fee. However, the National Park Service said it plans to change this practice in the future because some of these agreements may lend themselves to competition. As shown in table 2, agencies reported that the government received over $82 million from concessions operations during fiscal year 1994. The reported $82.5 million of government receipts includes funds that concessioners deposited into special accounts, which officials said are used primarily for repairs and improvements to facilities on government property. Nine of the 42 agencies or agency components also estimated that concessioners provided an additional $4.7 million in nonfee compensation by maintaining government property. This amount is not included in table 2. Some agency officials said they estimated the value of nonfee compensation by considering what it would have cost the agency to perform the work, obtaining quoted prices from vendors, using receipts maintained by the concessioners, and reviewing concessioners' annual financial reports.
This estimated value likely did not include the total nonfee value; some agencies said they did not monitor the value of concessioners' maintenance of government property for various reasons, including the difficulty of distinguishing between maintenance costs for federal property and those for concessioners' property.
The Rate of Return on Concessions Agreements Either Initiated or Extended During Fiscal Year 1994
As shown in appendix III, our analysis of financial data from the questionnaire showed a 3.6 percent rate of return to the government on reported concessioners' revenues from concessions agreements either initiated or extended in fiscal year 1994. From reported information on the agreements with both revenue and fee data, we calculated the rate of return as the sum of (1) concessions fees and (2) the amounts concessioners deposited into special accounts for improvements, divided by gross revenues. The rate represents the percentage of reported concessioners' gross revenues that the federal government is to receive. Our analysis of the reported data showed a rate of return of 2.8 percent for the six land management agencies' concessions, 9.2 percent for the nonland management agencies' concessions, and 3.1 percent for the 50 concessions with the largest reported gross revenues in our survey. As shown in appendix IV, the reported rates of return for the 15 service categories ranged from a low of 2 percent to a high of 47 percent. Food service operations averaged the lowest rate of return (2 percent), and coin-operated copiers in U.S. Postal Service facilities averaged the highest (47 percent).
How the Federal Rate of Return Compared to Other Governments' Rates
Other governments reported receiving higher rates of return from concessions operations than the overall federal rate. Four states and Canada reported, on average, a 12.7 percent rate of return. The states were California, Maryland, Michigan, and Missouri.
The states noted by the National Parks and Conservation Association as having high rates of return from concessions reported obtaining rates of return ranging from 11 to 17 percent. In addition, Canada reported receiving a 9.8 percent rate of return on its concessions operations. As shown in appendix V, Canada and the four states reported that their concessions services included marinas, food service operations, campgrounds, and retail sales, some of the same types of services reported by the agencies we surveyed. All four states and Canada said they generally compete concessions agreements. They said that the key factors for selecting concessioners were the amount of fees generated for the government and bidders' experience and financial status. According to state officials, agreements exempted from competition included short-term permits expected to gross a low level of revenue, generally $5,000 or less. Officials for one state also said the state would enter into a noncompetitive agreement with a business that initiated a proposal for a concession, but if the operation proved lucrative after 1 year, the state would renegotiate the concessions agreement through a competitive process.
Factors Affecting the Rate of Return From Concessions
We analyzed numerous factors to determine their impact on the rate of return, including competition, the background of concessions staff, the type of service, agencies' retention of concessions fees, and the methods used to determine concessions fees. Questionnaire data showed that although some of these factors affected the rate of return to the government, others did not. For example, our analysis of the reported data showed that the lack of a procurement background among concessions staff did not have an impact on the rate of return. In addition, officials from the five states said none of their concessions staff had procurement backgrounds.
They reported that they had contracting officers to set policies but delegated concessions management to park managers.
Competition Resulted in a Higher Rate of Return From Concessions Operations
As shown in appendix VI, concessions agreements entered into on a competitive basis had higher rates of return than those that were not competed. Our calculated rate of return for agreements where agencies reported that they competed concessions fees was 5.1 percent, compared to 2.0 percent when agencies reported that they did not use competition. The impact of competition on the rate of return remained when the differences among services were considered. Detailed analysis of reported information on the recreation service providing the highest rate of return in the land management agencies, campgrounds, showed that competition was a factor. For campground permits where agencies reported both revenue and fee data, agencies reported that they competed 82 percent of the permits issued in fiscal year 1994. Campground permits that agencies reported competing averaged a 7.1 percent rate of return, compared to a 4.1 percent rate of return for campground permits agencies said they issued noncompetitively. Questionnaire information showed that nonland management agencies competed more of their concessions than the land management agencies did. For the 2,234 concessions agreements with both revenue and fee data, agencies reported how each agreement was entered into during fiscal year 1994: nonland management agencies entered into 101 of the agreements and competed 96 percent of them, while the land management agencies initiated 2,133 of these agreements and competed 8.6 percent of them. Nonland management agencies reported that they entered into concessions agreements using either the Federal Acquisition Regulation or other policies that under most circumstances provide for competition.
Most land management agencies generally have discretion over whether to compete concessions agreements. The Concessions Policy Act of 1965, governing National Park Service concessions, is the only law covering concessions in land management agencies that specifically requires competition. The act requires the National Park Service to give the public the right to compete for concessions contracts. However, competition is limited by the requirement that existing concessioners who perform satisfactorily be given a preferential right of contract renewal when the agreement expires. Officials in the land management agencies said that more competition is needed, but they also said it cannot always be used. They said some operations could not be competed, such as ski areas where major portions of the operations are located on private land and the concessioners have a substantial financial investment in the activities. In such situations, the federal government's land is usually needed to complete a service, such as adding a ski lift or extending a ski lift to the top of a mountain. However, they noted that other activities, such as river running, jeep tours through scenic areas, and hunting trips, were very profitable to concessioners and conducive to competition. On the basis of questionnaire data, we determined that six of these types of concessions were among the 50 concessions with the highest reported gross revenues in our survey and were initiated in fiscal year 1994. The reported information showed that these agreements were initiated on a noncompetitive basis.
Agencies' Authority to Retain Fees
Our analysis of questionnaire data showed that another factor increasing the rate of return was the agencies' authority to retain concessions fees and use them in their operations.
The rate of return on agreements where agencies reported that they were authorized to retain over 50 percent of the fees was 3.3 times the rate on agreements where agencies reported that over 50 percent of the fees were to be deposited into the Department of the Treasury as miscellaneous receipts. Further questionnaire data analysis showed that concessions with the highest gross revenues in our survey managed by agencies retaining fees averaged an 11.1 percent rate of return to the government. In contrast, the reported data showed that this category of concessions managed by agencies that did not retain fees averaged a 2.6 percent rate of return. Additionally, five nonland management agencies (with authority to retain fees) reported 5 percent of the total agreements and 3 percent of concessioners’ gross revenues but reported 18 percent of concessions fees. In contrast, the six land management agencies (without authority to retain their fees) reported 93 percent of the total agreements and 93 percent of concessioners’ gross revenues but reported 73 percent of concessions fees, as shown in table 3. Therefore, agencies authorized to retain fees reported obtaining more fees in proportion to their concessioners’ gross revenues than agencies that were not authorized to retain fees. Generally, agencies are not authorized to retain and use money they receive from outside sources in the absence of express statutory authority to do so. As shown in figure 1, most concessions fees are to be deposited into the Department of the Treasury as general miscellaneous receipts. Since agencies that collect concessions fees generally are not able to use them, they have less incentive to maximize fees. An official from one of the agencies that retained fees said that since the fees support agency operations, staff put forth extra effort to obtain a high rate of return on concessions. 
About 70 percent of the 42 agencies or agency components responding to our survey said retaining fees is or would be beneficial to them.
Preferential Right of Contract Renewal Reduces Competition
The Concessions Policy Act of 1965 grants existing National Park Service concessioners a preferential right of contract renewal when their agreements expire. Under the legislation, the Secretary of the Interior is to give preference in the renewal of a concessions contract to existing concessioners who have satisfactorily performed their obligations. Under the Department of the Interior's regulations, the preferential right of contract renewal is the right of incumbent concessioners to match or better the best offer received from firms competing for the concessions contract. The existing concessioner must have performed satisfactorily and must have been under the existing contract for 2 years. This preference reduces competition because it may limit the number of prospective concessioners: businesses are reluctant to expend time and money preparing bids in a process where the award is most likely to go to the incumbent contractor. The National Park Service said that between 1985 and 1989, 28 of 29 contracts up for renewal were awarded to the incumbent concessioner. The National Park Service reported 23 of the 50 concessions agreements with the highest revenues reported in our survey. On the basis of the reported data, when 17 of these contracts were last awarded, the incumbent concessioners held the preferential right of contract renewal and won 16 of the contracts; 1 contract was awarded to a new concessioner. The National Park Service reported that the existing concessioners sold three of the remaining six concessions to other concessioners before the contracts expired, two concessioners operated under noncompetitive commercial use licenses, and the National Park Service converted another commercial use license to a sole-source contract.
Possessory Interest
Another statutory requirement for the National Park Service that influences the number of bidders is possessory interest. The Concessions Policy Act of 1965 gives National Park Service concessioners the right to be compensated for improvements they construct on federal lands; this right is called possessory interest. The legislation specifies that unless otherwise provided by agreement, the compensation must be based on “sound value,” which is generally defined as reconstruction cost less depreciation, not to exceed fair market value. Either the National Park Service or a successor concessioner is liable to pay the concessioner sound value compensation. According to National Park Service officials, this valuation limits the number of businesses submitting offers for concessions. In 1993, the National Park Service issued a new policy covering standard concessions contract language, which included a provision to reduce possessory interest for contracts awarded after January 7, 1993. The policy revises the calculation of possessory interest to “fair value,” which is defined as the original cost of improvements less straight-line depreciation. The National Park Hospitality Association is challenging this change in the courts on the basis that the new policy is not in accordance with the Concessions Policy Act of 1965. Officials from the four states and Canada said their regulations do not allow concessioners to acquire possessory interests. However, they said they consider the amount of a concessioner's investment when deciding the length of the contract. According to the officials, concessioners are given enough time to make a profit and amortize their investments, but the maximum term of contracts is 20 years.
Our calculations from questionnaire data on the National Park Service concessions that reported both revenue and fee data for contracts either awarded or extended during fiscal year 1994 showed that:
New and extended agreements granting possessory interest resulted in a rate of return of 3.8 percent, and those without possessory interest resulted in a rate of return of 4.5 percent.
New agreements with preferential right of contract renewal resulted in a 3.8 percent rate of return, and those without the preference resulted in a rate of return of 6.4 percent.
Reported Nonconcessions Activities Generating Income
Fifty components from 29 of the 75 federal agencies we surveyed said they received income of $20.5 billion in 1994 from activities that were not concessions. As shown in appendix VII, the activities varied and included the sale of hydroelectric power, audiovisual products, coins, medals, and commemorative items; tours of the Hoover Dam; operation of gift shops and reproduction services; and admission to presidential libraries. Agencies reported that most of the $20.5 billion was to be either deposited in Treasury's special account for the agency's use or retained by the agency for its use. According to agency officials, because of such issues as security and privacy concerns, most of the activities were not conducive to concessions operations. They estimated that activities generating $175 million, or about 1 percent of the $20.5 billion in income, could be converted into concessions operations. These activities included the sale of hydroelectric power, tours of Hoover Dam, the sale of commemorative items and coins, and the collection of user or entrance fees.
Agency Comments and Our Evaluation
On March 21, 1996, we provided copies of a draft of this report to the heads of the departments of the six land management agencies for comment. We did not ask for comments from all 75 agencies in our survey because to do so would have been impractical.
The six agencies accounted for over 92 percent of total concessions. On March 25, 1996, we discussed the draft report with officials designated by the departments, including the Forest Service's Director of Recreation, Heritage, and Wilderness Resources; the National Park Service's Acting Chief of the Concessions Program Division; the Bureau of Land Management's Special Assistant to the Assistant Director for Resource Use and Protection; the Fish and Wildlife Service's Branch Chief for Visitor Services and Information Management; and the Bureau of Reclamation's Natural Resources Specialist. The officials said they generally agreed with the facts as presented in the draft report. Officials from four of the agencies (the National Park Service, the Fish and Wildlife Service, and the Bureaus of Land Management and Reclamation) reiterated the statement in our report that the primary purpose of the land management agencies' concessions programs is to provide a service to the public, not to maximize the rate of return. Officials from the Forest Service and the Bureau of Reclamation noted that the high investments made by some concessioners also affect the rate of return that the government receives, which our report recognizes. The National Park Service official said the report highlighted two factors required by legislation, the preferential right of contract renewal for the existing contractor and the granting of possessory interest to concessioners, that affect the agency's rate of return. He added that three other factors also affect the rate of return: (1) the National Park Service's periodic operational reviews of concessioners, which may increase concessioners' maintenance costs; (2) the legislatively required control of the rates concessioners charge for goods and services; and (3) the expense of the financial audits that the National Park Service requires of concessioners grossing over $1 million annually.
Our review was not designed to measure what impact, if any, operational reviews or rate controls have on a concessioner's profitability or whether all concessioners had financial audits. We would expect, however, that economic market forces for large dollar value concessions would be similar for the National Park Service's and other agencies' concessions. It is likely that all larger concessioners would incur the costs of financial and routine maintenance audits, regardless of the agencies' requirements. Also, with respect to prices, the legislative requirement calls for National Park Service concessioners' prices to be comparable to those of similar services and facilities under similar circumstances. Therefore, nothing seems to suggest that National Park Service concessioners have been directed to set prices below those one would normally expect to find in the surrounding localities. The Department of the Army Corps of Engineers said it had no comments on the draft report.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees, the agencies included in our review, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. The major contributors to this report are listed in appendix IX. Please contact me on (202) 512-8387 if you have any questions concerning this report.

List of Agencies Surveyed

Department of the Army Corps of Engineers
Federal Mediation and Conciliation Service
Federal Mine Safety and Health Review Commission
Federal Retirement Thrift Investment Board
Department of Health and Human Services
Department of Housing and Urban Development
The John F. Kennedy Center for the Performing Arts
National Aeronautics and Space Administration
National Archives and Records Administration
National Endowment for the Arts
National Endowment for the Humanities
Occupational Safety and Health Review Commission
Thrift Deposit Protection Oversight Board
U.S. Trade and Development Agency

Extent of Concessions in the Federal Government

Twenty-seven of the 75 departments and agencies we surveyed reported that they had concessions operations during fiscal year 1994. Table II.1 contains overall information reported by the 27 departments and agencies. Forty-two components responded because some agencies had more than one component managing concessions. As indicated in the table, data on all concessioners' revenues were not available. Some agencies reported that gross revenue data were not available because concessioners paid a flat concessions fee and the agency had no requirement to track gross revenues. As a consequence, total concessions revenues and fees in this table cannot be compared to determine the rate of return the government received from concessioners' revenues. [Table II.1, which reports the value of concessioners' nonfee compensation, is not reproduced here.] The Central Intelligence Agency did not provide details on its concessions agreements.
The agency responded that concessions fees were sometimes waived for the following reasons: (1) reduced prices on vended items; (2) difficulty in obtaining contractors because the building is surrounded by many food establishments; (3) the need for a more attractive procurement because of a lack of offerors; (4) a limited profit rate for concessioners, with any overage to be either returned to the government or put back into the food service operation; and (5) concessioners make it possible to market Native American arts and crafts—an activity the agency could not do itself.

The agency responded that some revenue data were not available mainly because concessioners were not required to report revenues for certain concessions where they generally paid a flat concessions fee. According to the Bureau of Reclamation, the 37 concessioners represent only a portion of the agency's concessioners. Survey information was not available for the 225 concessions agreements that are managed by state agencies.

Rate of Return on Concessions Agreements Either Initiated or Extended During Fiscal Year 1994

[Table data, including totals of fees plus special accounts, not reproduced here.]

Rate of Return by Primary Concessions Services on Concessions Agreements Either Initiated or Extended During Fiscal Year 1994

[Table of rates of return (percent) by service, including transportation (ferry, cruise, tourmobile), not reproduced here.]

Comparison of the Federal Rate of Return With Other Governments' Rates—Fiscal Year 1994

Food service operations, lodging, campgrounds, vending machines, retail sales, river running, big game hunting, marinas, ski resorts, transportation, cruise boats, boat docks, coin-operated copiers, and others.
Retail sales, marinas, beaches, and golf courses
Food service operations, vending machines, optical viewing machines, water sports equipment, campgrounds, and cruise boats
Food service operations, retail sales, campgrounds, stables, bicycle and boat rentals, rifle ranges, and vending machines
Lodging, food service operations, marinas, retail sales, pools, horseback riding, and firewood sales
Lodging, food service operations, swimming pools, snack bars, marinas, boat docks, horseback riding, and golf courses
Retail sales, recreation equipment rentals, food service operations, marinas, golf courses, tennis courts, theaters, and office and special purpose space

Tennessee did not track concessioners' revenues. It charged a flat concessions fee.

Rate of Return on Concessions Agreements Initiated During Fiscal Year 1994 With or Without Competition

Agencies' Nonconcession Income-Generating Activities in Fiscal Year 1994

Application fees and the sale of patent information
Fees for rule reviews, designations, Freedom of Information Act (FOIA) requests, leverage audits, registration, reparations, photocopying, and publications
User fees for the disposal of high-level radioactive waste; timber sales; public hunting; recycling; procurement seminars and procurement solicitation fees; occasional rights-of-way, easement, and grazing fees; and the sale of crude oil and natural gas
Sale of wholesale power to customers who redistribute to retail customers
Sale of hydroelectric power from 21 multipurpose water resource projects of the Army Corps of Engineers and 9 of the Bureau of Reclamation, plus power from nonfederal generating plants
Sale of more than 10,000 megawatts of power (electricity) from 54 hydropower plants
Sale of power and energy from 24 hydroelectric power plants operated by the Army Corps of Engineers
Sale of power generated at 22 U.S. Army Corps of Engineers projects located in a 10-state southeastern region
Revenue and fees obtained from contractor parking, fitness dues, interest earned on investments, telecommunications income from billings, earned assessments, provision for assessment credit, exit/entrance fees, recoverable expenses from Uniform Bank Performance Report (UBPR) collections, miscellaneous income from seminars, rents, and others
Collection of civil penalties, the sale of federal campaign disclosure reports and records, and revenues from FOIA requests
User fee collections, fines and penalty payments, and Davis Law receipts
Filing fees from persons acquiring voting securities or assets who are required to file premerger notifications (15 U.S.C. 18a)
Entrance fees; commodity revenues from grazing, oil and gas, and sand and gravel; and other special use fees
Tours of Hoover Dam; site rentals of cabins, trailers, and camping and group-use sites; and land-use fees
Public information products, the review and approval of pipeline rights-of-way, Cenozoic Publication, transfer of rights-of-way titles, and assignment and lease transactions
User fees from applicants for licenses to engage in interstate commerce and from parties in rail authority proceedings and complaint and complaint-type declaratory order proceedings (49 C.F.R. 1002.3)
Registration fees from handlers of authorized drugs (doctors, pharmacies, and others)
Collection of initial and supplemental registrations for foreign principals; generation of copies of registration statements, supplements, amendments, exhibits, dissemination reports, political propaganda, and other materials contained in public files; execution of information searches; and preparation and execution of written advisory opinions
Sale of utilities (electricity, steam, water, and sewage treatment) to Federal Prison Industries, Inc. (trade name UNICOR); sale of meal tickets; rental income from staff housing located at various federal prisons; sale of farm by-products; and fees from the care and custody of state prisoners from various states
Gift shop sales, tickets, and tours
National Archives and Records Administration: reproduction services, the sale of reference material, over-the-counter sales (museum and presidential library shops), publication sales, audiovisual sales and rentals, and presidential library admissions
File search and copying services in association with FOIA requests and requests made to the Research Division
Duplicating costs under FOIA and the sale of Board publications and reports
License and inspection fees, annual fees, and other regulatory costs
Rebates for volume discounts on governmentwide total quality training costs
Fees for over 80 types of applications, statements, and reports filed pursuant to each of the statutes the Commission administers
User fees charged to cover the costs of materials, brochures, and space rental for seminars, workshops, business award events, and others
Ad valorem fees for cargo on vessels going into U.S. ports; interest on Minority Bank investments; fees for observation decks and viewing machines at Eisenhower Lock; fees for vessel service, damage repairs, and violations; rental of office space; and pleasure craft and noncommercial tolls for use of the seaway
Civil penalties, the sale of test tires and vehicles, royalties, FOIA requests, Corporate Average Fuel Economy (CAFE) penalties, and user fees
User fees for processing applications to sell ships, operating the computer-aided Operations Research Facility at the U.S. Merchant Marine Academy, and making copies of agency rulings, orders, and economic data; and filing and investigation fees
Assessments of all federally chartered national banks, corporate applications, examinations, and security filings; the sale of publications; and investment income (interest earned from the investment of operating funds in U.S. Treasury securities)
Harbor maintenance fees; commissions on pay telephone stations; charges for testing, inspecting, and grading services; and fees and other charges for miscellaneous services and Consolidated Omnibus Budget Reconciliation Act (COBRA) fees

Objectives, Scope, and Methodology

The objectives of our review were to determine (1) the extent of concessions operations in the federal government, (2) the rate of return the federal government received from concessions and the factors that affected the rate of return, (3) how the federal government's rate of return compared to other governments' rates of return, and (4) the extent of agencies' nonconcessions activities that generated income in fiscal year 1994 and whether they offered opportunities to be handled as concessions. To accomplish objectives one, two, and four, we used three questionnaires to request data from 75 federal executive departments and agencies listed in the 1993/94 U.S. Government Manual. The first questionnaire requested summary information on all concessions agreements in effect during fiscal year 1994, such as the total number of agreements, concessioners' revenues, and concessions fees. The second questionnaire asked for detailed agreement-specific information on each concessions agreement either initiated or extended during fiscal year 1994. Details included the amount of revenues and fees, information on whether competition was used to select the concessioner, whether fees were one of the factors considered during competition, how competed agreements were advertised, and the terms of agreements. We also requested copies of pertinent agency policies and of each agreement that was either issued or extended in fiscal year 1994. The third questionnaire asked for information on agencies' income-generating activities that were not concessions.
We pretested the questionnaires at six federal agencies or agency components: the Department of Agriculture's Forest Service, the Department of the Army Corps of Engineers, the General Services Administration, the Smithsonian Institution, and the Department of the Interior's National Park Service and Fish and Wildlife Service. These agencies—the land management agencies in particular—are responsible for most federal concessions. We revised the questionnaires on the basis of their detailed feedback. For the purpose of this assignment, we defined "concessions" as arrangements under which private or public entities use federally owned or leased property under a government permit, contract, or other similar agreement to provide recreation, food, or other services to either the general public or specific individuals. Concession services included, but were not limited to, food operations, vending machines, retail shops, public pay telephones, barber/beauty shops, transportation, lodging, marinas, and campgrounds. We excluded day care centers, employee association stores, and services provided by the visually impaired under the Randolph-Sheppard Act. State governments manage Randolph-Sheppard concessions that are on federal property. Further, if concessions services in an agency were provided under an agreement with GSA, we requested agencies not to include these operations in their responses; GSA agreed to include these concessions in its own response. All 75 agencies responded to our request. Twenty-seven of the agencies said they had at least one concessions agreement. Forty-two respondents provided concessions information because some agencies, such as the Department of the Interior, had more than one component managing concessions (see app. II). Fifteen of the 27 agencies either initiated or extended at least one concessions agreement during fiscal year 1994. The Central Intelligence Agency provided an oral briefing on its concessions program and did not provide any details on its concessions agreements.
In response to our questionnaires, we received information on 5,000 concessions agreements. Our information about the agreements comes solely from the agencies' questionnaire responses. However, to check whether the questionnaires were filled out completely and accurately, we (1) checked selected responses against copies of the concessions agreements that agencies sent to us; (2) checked agency totals for concessions revenues and fees against prior GAO reports; (3) followed up with agency staff in selected cases to clarify their responses; (4) manually reviewed all pages of each form; (5) had specially trained staff convert the data to computer-readable format and verify their entries; (6) manually checked the computerized data against the original forms, including all data on concessions revenues and fees; and (7) conducted computerized checks for data consistency. We analyzed the information using standard software for tabulating and analyzing data. To calculate the rate of return from concessions, we used questionnaire financial data for concessions agreements either initiated or extended during fiscal year 1994. From this reported information, we calculated the rate of return by dividing the sum of concessions fees and the amounts deposited in special accounts by gross revenues. For the rate of return analyses, we excluded questionnaires that did not contain both gross revenue and concessions fee data. In addition to the questionnaire data, we also (1) interviewed federal concessions management staff at both headquarters and field levels and officials of the National Parks and Conservation Association and the National Park Hospitality Association; and (2) reviewed our previous work in this area; Inspector General reports; and the laws, regulations, and policies for each federal entity's concessions operations.
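The rate-of-return calculation described above can be sketched in a few lines of code. The figures in the example are hypothetical, chosen only to illustrate the arithmetic; the report's actual results were computed from agency questionnaire data.

```python
# Illustrative sketch of the report's rate-of-return calculation:
# (concessions fees + special-account deposits) / gross revenues.
# All dollar amounts below are hypothetical.

def rate_of_return(gross_revenues, concessions_fees, special_accounts):
    """Return the rate of return as a percentage of gross revenues.

    Questionnaires lacking either gross revenue or concessions fee data
    would be excluded by the caller, mirroring the report's methodology.
    """
    if gross_revenues <= 0:
        raise ValueError("gross revenues must be positive")
    return 100.0 * (concessions_fees + special_accounts) / gross_revenues

# Hypothetical agreement: $1,000,000 in gross revenue, $30,000 in
# concessions fees, and $6,000 deposited in a special account for
# facility repair and improvement.
print(round(rate_of_return(1_000_000, 30_000, 6_000), 1))  # 3.6
```

Applied to the governmentwide totals the report cites ($82.5 million in fees and special-account deposits on roughly $2.2 billion in gross revenues), the same arithmetic yields the reported return rate of about 3.6 percent.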
To determine how the federal government's rate of return compared with those of other governments, we used the data we obtained for objectives one, two, and four and sent a questionnaire to five state governments and Canada. We selected the five states—California, Maryland, Michigan, Missouri, and Tennessee—on the basis of information we received from the National Parks and Conservation Association. The information showed that these five states had relatively high rates of return. We visited two of the states—Maryland and Tennessee—and met with their key concessions managers. We selected Canada to obtain information on another country's experience. We did our review from January 1995 to November 1995 in accordance with generally accepted government auditing standards. Because it was impractical for us to obtain comments from all 75 agencies, we provided copies of a draft of this report to the heads of the departments of the six land management agencies for comment. The six agencies accounted for over 92 percent of the concessions. On March 25, 1996, we discussed the draft report with officials designated by the departments. Their comments are discussed on pages 16 and 17.

Major Contributors to This Report

General Government Division, Washington, D.C.
Office of the General Counsel, Washington, D.C.

Related GAO Products

Federal Lands: Views on Reform of Recreation Concessioners (GAO/T-RCED-95-250, July 25, 1995).
Federal Lands: Improvements Needed in Managing Short-Term Concessioners (GAO/RCED-93-177, Sep. 14, 1993).
Federal Land: Little Progress Made in Improving Oversight of Concessioners (GAO/T-RCED-93-42, May 27, 1993).
Forest Service: Little Assurance That Fair Market Value Fees Are Collected From Ski Areas (GAO/RCED-93-107, Apr. 16, 1993).
National Park Service: Policies and Practices for Determining Concessioners' Building Use Fees (GAO/T-RCED-92-66, May 21, 1992).
Federal Lands: Oversight of Long-Term Concessioners (GAO/RCED-92-128BR, Mar. 20, 1992).
Federal Lands: Improvements Needed in Managing Concessioners (GAO/RCED-91-163, June 11, 1991).
Forest Service: Difficult Choices Face the Future of the Recreation Program (GAO/RCED-91-115, Apr. 15, 1991).
Recreation Concessioners Operating on Federal Lands (GAO/T-RCED-91-16, Mar. 21, 1991).
Changes Needed in the Forest Service's Recreation Program (GAO/T-RCED-91-10, Feb. 27, 1991).
Parks and Recreation: Maintenance and Reconstruction Backlog on National Forest Trails (GAO/RCED-89-182, Sep. 22, 1989).
Parks and Recreation: Problems with Fee System for Resorts Operating on Forest Service Lands (GAO/RCED-88-94, May 16, 1988).
Parks and Recreation: Interior Did Not Comply With Legal Requirements for the Outdoors Commission (GAO/RCED-88-65, Mar. 25, 1988).
Parks and Recreation: Park Service Managers Report Shortfalls in Maintenance Funding (GAO/RCED-88-91BR, Mar. 21, 1988).
Maintenance Needs of the National Park Service (GAO/T-RCED-88-27, Mar. 23, 1988).
Parks and Recreation: Limited Progress Made in Documenting and Mitigating Threats to the Parks (GAO/RCED-87-36, Feb. 9, 1987).
Parks and Recreation: Recreational Fee Authorizations, Prohibitions, and Limitations (GAO/RCED-86-149, May 8, 1986).
Corps of Engineers and Bureau of Reclamation's Recreation and Construction Backlogs (GAO/RCED-84-54, Nov. 25, 1984).
The National Park Service Has Improved Facilities at 12 Park Service Areas (GAO/RCED-83-65, Dec. 17, 1983).
Information Regarding U.S. Army Corps of Engineers Management of Recreation Areas (GAO/RCED-83-65, Dec. 17, 1983).
National Parks' Health and Safety Problems Given Priority: Cost Estimates and Safety Management Could Be Improved (GAO/RCED-83-59, Apr. 25, 1983).
Increasing Entrance Fees: National Park Service (GAO/RCED-82-84, Aug. 4, 1982).
Facilities in Many National Parks and Forests Do Not Meet Health and Safety Standards (GAO/CED-80-115, Oct. 10, 1980).
Better Management of National Park Concessions Can Improve Services Provided to the Public (GAO/CED-80-102, July 31, 1980).
Concession Operations in the National Parks—Improvements Needed in Administration (GAO/RED-76-1, July 21, 1975).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on government concessions contracting in 1994, focusing on: (1) the extent of government concessions operations; (2) the amount of and factors that affect federal concessions returns; (3) how federal concessions return rates compare with other governments' return rates; and (4) whether agencies' nonconcessions income-generating operations could be converted into concessions. GAO noted that: (1) 27 of the 75 federal departments and agencies surveyed had entered into concessions agreements; (2) 42 of these agencies and agency components managed a total of 11,263 concessions agreements; (3) 92 percent of these concessions agreements were held by 6 land management agencies; (4) gross revenue from the concessions totaled $2.2 billion, but agencies lacked sufficient data to determine the gross revenues of all concessions operations; (5) concessioners paid the government $82.5 million, including $64.6 million in concessions fees and $17.9 million for facility repair and improvement; (6) concessioners provided the government with an additional $4.7 million in nonfee compensation by maintaining government property; (7) the government's return rate on concessioners' gross revenues was 3.6 percent; (8) other governments surveyed had average concessions return rates of 12.7 percent; (9) competitively awarded concessions agreements in which fees were considered in the competition had higher return rates than those that were not competed; (10) although 96 percent of non-land-management-agency concessions were awarded competitively, only 8.6 percent of land management agency concessions were awarded competitively; (11) the agencies able to retain more than 50 percent of their concessions fees had higher return rates than the agencies required to deposit their returns into Treasury funds; and (12) 29 of the 75 agencies had income-generating operations other than concessions, but many of these activities could not be contracted out as concessions because of security and privacy concerns.
Background

In the 1990s, Congress put in place a statutory framework to address the long-standing weaknesses in federal operations, improve federal management practices, and provide greater accountability for achieving results. This framework included as its essential elements the Results Act and key financial management and information technology reform legislation: the Chief Financial Officers (CFO) Act of 1990—as expanded by the Government Management Reform Act of 1994 (GMRA)—and the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996, respectively. Taken together, these legislative initiatives seek to respond to a need for accurate, reliable, and integrated budget, financial, and program information for congressional and executive branch decisionmaking, information that much of our prior work has shown to be badly lacking. The goal-setting and performance measurement and improvement system envisioned by the Results Act is the centerpiece of this framework and starts with the requirement that each executive agency develop and periodically update a strategic plan to lay out its mission, long-term goals and objectives, and strategies for achieving those goals and objectives. Under the Results Act, the first of these plans were due by September 30, 1997. Next, each agency is to develop an annual performance plan, beginning with the agency's plan for fiscal year 1999, which covers each program activity set forth in the agency's budget. Agencies were to submit their fiscal year 1999 annual performance plans to OMB in the fall of 1997 with their fiscal year 1999 budget requests and are to submit those plans to Congress after the President's fiscal year 1999 budget is provided to Congress in February 1998. Among other things, agencies' annual performance plans are to contain their programs' goals and measures for fiscal year 1999.
Finally, each agency is to report publicly on its programs' performance, specifically on the degree to which the goals that are laid out in the agency's annual performance plan are being met and on actions it plans to take to achieve unmet goals. The first of these reports, on programs' performance for fiscal year 1999, is due by March 31, 2000; subsequent reports are due by March 31 for the years that follow. (For a detailed description of the Results Act's requirements, see app. I.) The CFO Act and GMRA are intended to strengthen the reliability of agencies' financial and programmatic performance information and the reporting of such information by, among other things, having agencies develop better performance measures and cost information and design results-oriented reports that integrate budget, accounting, and program information. Finally, the Paperwork Reduction Act and Clinger-Cohen Act seek to help agencies address long-standing weaknesses in their use of information technology. Under this legislation, each agency is to better link its technology plan and information technology use to achieving the agency's desired results. In addition, long-standing concerns about the program and financial management of credit programs have prompted Congress to enact important budget and credit management reform initiatives over the last 15 years. These initiatives include the Debt Collection Act of 1982 (DCA) and amendments, the Debt Collection Improvement Act of 1996 (DCIA), and the Federal Credit Reform Act of 1990 (FCRA). DCA and DCIA are significant pieces of credit management legislation designed to, among other things, facilitate federal efforts to decrease delinquencies and increase collections. Under DCA, agencies were to annually report to the Director of OMB and the Secretary of the Treasury on the status of their debt collection activities.
Under DCIA, agencies are now to annually report on those activities only to the Secretary of the Treasury, who in turn is to annually report to Congress, beginning no later than 1999, on such activities governmentwide. FCRA changed the budget treatment of direct loans and loan guarantees made on or after October 1, 1991, to (1) facilitate more accurate reporting by credit agencies on the full cost to the government in the budget for the year in which the programs made or guaranteed the loans so that executive branch and congressional decisionmakers might consider such costs when making budget decisions and (2) permit appropriate cost comparisons between direct and guaranteed credit programs and between credit and noncredit programs intended to achieve similar purposes. Guided by these legislative initiatives, since 1992, OMB has encouraged federal credit agencies to improve their credit programs’ financial and programmatic performance measures and to adopt a set of common performance measures for those programs. According to OMB and the Federal Credit Policy Working Group, common measures should help credit program managers and other decisionmakers assess how similar functions are performed and promote an atmosphere of cooperation and a sharing of ideas among agency officials on how to improve the performance of credit programs. In response to the increasing importance being placed on agencies’ integration of performance measurement with budgeting, management improvement, and overall agency accountability, the Working Group has focused since July 1995 on developing common performance measures for credit programs. To do so, the Working Group established a task force to develop measures for credit programs consistent with the Results Act and credit and financial management reform initiatives. 
In August 1997 we reported that agencies' annual planning under the Results Act could be used as a vehicle for developing, where appropriate, common performance measures for permitting future comparisons of similar programs' results and the methods those programs used to achieve those results.

Scope and Methodology

To meet our first objective—to identify goals and measures established by the selected credit programs that related to the programs' intended purposes and determine whether the programs had set target levels of performance for assessing their progress in achieving their desired results—we compared the programs' goals and measures to their respective intended purposes, as identified by the programs or their respective agencies. We interviewed agency officials about the programs' intended purposes and asked those officials to comment on the relationships we identified between the programs' goals and measures and intended purposes. To determine whether the programs had set target levels of performance for assessing their progress in achieving their desired results, we identified those measures for which the programs had either (1) identified fiscal year 1998 targets or (2) reported prior year baseline data for those measures and indicated how performance on those measures was to change (i.e., increase or decrease) in fiscal year 1998 relative to the baseline. The programs' goals and performance measures that we used for our assessment were those established by the programs as of May 1, 1997, which, according to agency officials, were generally the same ones the programs submitted with their respective agencies' fiscal year 1998 budget presentations to OMB and Congress. The programs also were proposing to include these goals and measures in their agencies' fiscal year 1999 annual performance plans under the Results Act.
Although we make observations about differences in goals and measures among the programs, our review did not address the reasonableness of the processes or methods the programs used to determine how to assess progress or establish target levels toward achieving the programs’ intended purposes; determine whether other, more appropriate measures existed; or evaluate the feasibility of the targets the programs established. To meet our second objective—to identify the challenges agency officials cited in developing performance information, including goals and measures, for the selected programs and any approaches those programs were taking to address those challenges—we asked agency officials responsible for and involved in the development of goals and measures for those programs to rate how great a challenge it was to perform each of 49 activities we identified as associated with developing performance information. Examples of these activities included “determining a realistic target level of performance for annual performance goals” and “developing measures for assessing the net effect of the program compared with what would have occurred in the absence of the program.” To identify these activities, we referred to key steps and practices identified in our Executive Guide: Effectively Implementing the Government Performance and Results Act and other work that we had under way assessing the challenges agencies were facing in implementing performance measurement. We asked officials to use a five-point scale to rate the 49 activities, ranging from “little to no” challenge to a “very great” challenge for the selected credit programs in their agencies. For purposes of this report, we refer to activities that any of the agency officials rated as a “great” or “very great” challenge as significantly challenging activities. 
We then interviewed agency officials about why they rated certain activities associated with developing performance information as significantly challenging. We analyzed their responses and related documentation to identify general challenges that led agency officials to report those activities as significantly challenging for the selected programs. To identify any approaches those programs were taking to address the challenges they have been facing, we talked with agency officials about such approaches and analyzed agency documentation. We also considered prior and ongoing work we have done on the efforts of VA, Education, USDA, and other credit agencies to implement various credit and financial management reforms. To meet our third objective—to describe the status of the Working Group’s effort to develop common performance measures for federal credit programs—we reviewed various documentation from OMB and members of the Working Group that described this effort and agency officials’ views about those measures. We also talked to OMB officials and members of the Working Group, including agency officials at those agencies administering the programs we selected for our review, to obtain their views of the common performance measures proposed. Our review did not address the reasonableness of the processes or methods the Working Group’s task force used in determining how to assess progress for federal credit programs or determine whether other measures existed that may be more appropriate. (For a more detailed discussion on our objectives, scope, and methodology, see app. II.) We did our work from September 1996 to December 1997, in Washington, D.C., in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of OMB, the Secretaries of Education and Agriculture, and the Acting Secretary of VA or their designees. 
Comments were provided orally by designees of the four agencies, and those comments are discussed at the end of this report. Selected Credit Programs Established Goals, Measures, and Targets to Monitor Their Progress In their efforts to implement the Results Act, the five credit programs established goals and performance measures that appeared to be generally related to the programs’ intended purposes. In some cases, the programs established goals or measures that addressed intermediate results that the programs expected to lead to their intended purposes. Also, some of the selected programs established a range of measures that should provide a more complete picture of particular aspects of their performance related to the programs’ intended purposes. Finally, the selected programs also set fiscal year 1998 targets for most of their measures. Goals and Performance Measures Generally Related to the Programs’ Intended Purposes In their fiscal year 1998 budget presentations, each of the five credit programs established goals and performance measures that appeared to be generally related to the programs’ intended purposes. For example, to monitor its performance in achieving one of its intended purposes, namely helping veterans retain their homes, VA’s Loan Guaranty Program established the Foreclosure Avoidance Through Servicing (FATS) Ratio measure. According to VA, the FATS ratio is to provide data on the extent to which foreclosures would have been greater had VA not pursued alternatives to foreclosure, such as intervening with the holder of the loan on behalf of the borrower to set up a repayment plan. For each of the selected credit programs, appendix III shows the intended purposes identified by those programs and their respective agencies and examples of related goals and performance measures the programs established in their fiscal year 1998 budget presentations. 
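As a rough illustration of how a surrogate measure like the FATS ratio works, the sketch below computes the share of potential foreclosures that were avoided through servicing alternatives. The formula and figures are hypothetical assumptions for illustration only; this report does not specify VA’s actual computation.

```python
def fats_ratio(avoided: int, completed: int) -> float:
    """Hypothetical FATS-style ratio: the fraction by which foreclosures
    would have been greater had no alternatives to foreclosure (e.g.,
    repayment plans) been pursued. NOT VA's actual formula, which this
    report does not give."""
    potential_foreclosures = avoided + completed
    if potential_foreclosures == 0:
        return 0.0
    return avoided / potential_foreclosures

# Invented figures: 2,000 foreclosures avoided through servicing,
# 8,000 foreclosures completed
print(fats_ratio(2000, 8000))  # 0.2
```

Under this hypothetical definition, a higher ratio indicates that a larger share of potential foreclosures was averted through intervention.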
In some cases, the programs’ goals or performance measures addressed intermediate results that the programs expected to lead to an intended purpose. For example, the performance measure that VA’s Loan Guaranty Program established to monitor its performance in achieving its intended purpose of helping veterans purchase homes was the percentage of respondents to VA’s Lender Customer Satisfaction Survey who say they are satisfied with their overall interaction with VA. According to VA, “maximizing lender satisfaction with their dealings with VA employees . . . will encourage lenders to participate in the program, expanding financing opportunities for veterans.” In this way, the intermediate result of increasing lender satisfaction could be expected to contribute to helping veterans and active duty personnel purchase homes. Similarly, Education’s direct student loan program established as performance measures the rate of (1) institutional (i.e., school) participation; (2) overall satisfaction of schools with the direct student loan program; and (3) institutional retention in the program. The program uses schools as the vehicles for providing loans to students and their families. By providing a streamlined loan delivery system, the program expects to attract schools to participate in the program. Further, by satisfying participating schools, the program expects to encourage those schools to stay as participants in the program. Thus, increases in schools’ participation, satisfaction, and retention in the program are intermediate results that the program expects will lead to broader student access to capital for postsecondary education. Some of the selected programs established a range of measures to provide a more complete picture of particular aspects of their performance related to their intended purposes. 
For example, VA’s Loan Guaranty Program established two performance measures for monitoring the timeliness of issuing a certificate of eligibility, which is related to its intended purpose of treating all veterans in a timely manner. These measures were (1) the percentage of veterans responding to a VA veteran survey who say they are satisfied with the time it takes VA to certify veterans’ eligibility for a home loan; and (2) the average time VA calculated it took to issue a certificate of eligibility, which is to supplement the survey data. Similarly, Education established several performance measures for assessing its guaranteed and direct student loan programs’ performance toward successfully managing the programs in a cost-effective manner. These measures included the programs’ lifetime gross dollar default rates; lifetime net default rates (i.e., loss rates); annual delinquency rates; per unit administrative costs; and annual collection rates. The Credit Programs Set Fiscal Year 1998 Targets for Most of Their Measures The Results Act defines a performance goal as the target level of performance expressed as a tangible, measurable objective against which achievement is to be compared. Thus, annual performance goals should consist of two parts: (1) the performance measure that represents the specific characteristic of the program that the program uses to gauge its performance and (2) the annual target level of performance to be achieved during a given fiscal year for the measure. The Results Act also requires each agency to report to the President and Congress annually, beginning for fiscal year 1999, on the degree to which the agency is meeting its annual performance goals. Thus, under the Act, an agency is to monitor and report on its actual performance during the year compared to the targets it had established for its performance measures for that year. 
As shown in table 1, we found that the selected programs set fiscal year 1998 targets for most of their respective measures. These included measures for which the programs either (1) set fiscal year 1998 target levels of performance; or (2) reported prior year baseline data for those measures and indicated how performance on those measures was to change (i.e., increase or decrease) in fiscal year 1998 relative to their baselines. Thus, if the selected programs collect accurate corresponding data on their actual performance, they should be able to monitor their progress in achieving desired results on those measures and have fiscal year 1998 baseline data to use in setting future targets for those measures. For example, one of the fiscal year 1998 performance goals set by Education’s guaranteed student loan program is that the “Level of [overall school] satisfaction will meet or exceed the level of school satisfaction measured last year, 67 percent of the schools reported satisfaction.” However, for other measures, the programs did not set targets that could be used to monitor their progress. For example, Education’s guaranteed student loan program did not establish a fiscal year 1998 target for its annual delinquency rate measure. The program reported that this measure will provide information on the dollar amount of loans “past due” as a percentage of dollars in repayment and that baseline data for the measure will be developed as the definition of “past due” is finalized. 
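The two ways a measure counted as having a target in our analysis (an explicit fiscal year 1998 level, or prior-year baseline data plus an intended direction of change) can be sketched as a simple monitoring check. The dictionary keys and the figures below are hypothetical, not drawn from the programs’ actual data systems:

```python
def met_target(measure: dict):
    """Check actual performance against a target, under the two cases
    this report counts as 'having a target': an explicit target level
    (met if actual meets or exceeds it), or a baseline plus an intended
    direction of change. Returns None when neither basis exists."""
    actual = measure["actual"]
    if "target" in measure:
        return actual >= measure["target"]
    if "baseline" in measure and "direction" in measure:
        if measure["direction"] == "increase":
            return actual > measure["baseline"]
        return actual < measure["baseline"]
    return None  # no informed basis for monitoring progress

# Illustrative: a satisfaction goal of meeting or exceeding last year's
# 67 percent, and a rate meant to decrease from a 6.0 baseline
print(met_target({"actual": 69, "target": 67}))                          # True
print(met_target({"actual": 5.2, "baseline": 6.0, "direction": "decrease"}))  # True
print(met_target({"actual": 3.1}))                                       # None
```

A measure that falls into the final case is one for which, as with Education’s annual delinquency rate, progress cannot yet be monitored.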
The Credit Programs Have Been Facing Three General Challenges to Developing Performance Information On the basis of agency officials’ responses to questions on developing performance information for the selected programs, we identified three general challenges those programs have been facing: (1) a struggle in reaching consensus among stakeholders on the programs’ intended purposes, performance measures, and target levels of performance; (2) difficulty in separating the effects of external forces from program influences on results; and (3) a lack of relevant program performance and financial baseline data. Agency officials also described some approaches they were taking to address the challenges they have been facing. A Struggle in Reaching Consensus Among Stakeholders on the Programs’ Intended Purposes, Performance Measures, and Target Levels of Performance As we have noted in a previous report, because the interests of a program’s stakeholders can and often do differ significantly, full agreement among those stakeholders on all aspects of the program’s performance is relatively uncommon. However, our past and current work also has shown that although it is difficult to get stakeholders to reach agreement, stakeholder involvement can help an agency identify results-oriented performance measures and set realistic target levels. For example, VA officials said that they had difficulty reaching consensus with internal program stakeholders at the agency’s field offices. This difficulty concerned how the activities of the field offices would be linked to achieving the intended results of VA’s Loan Guaranty Program as established by headquarters staff. To address this difficulty, VA headquarters and field office staff worked together to develop the program’s performance measures. 
They said the program then brought together key VA headquarters and field managers to reach agreement on target performance levels for linking field office activities to the intended results of the program. VA’s Loan Guaranty Program also struggled with trying to reach consensus with OMB. According to a VA official, OMB suggested that the program establish “outcome-oriented” performance measures, where feasible, which could provide data on the extent to which the program is helping veterans achieve a higher rate of homeownership. However, VA officials said that the program is an entitlement program (i.e., veterans receive the benefit regardless of need as a reward for their service); and it is not clear whether increasing homeownership among veterans is a primary intended purpose of the program. According to the officials, the more appropriate performance measures for assessing the program’s performance and holding it accountable are those for monitoring how well VA is delivering the benefit (e.g., satisfying veterans and keeping program costs down). To address OMB’s suggestion, the program established a homeownership assistance measure in its fiscal year 1998 budget presentation, which is to provide data on the percentage of veterans surveyed who said they would not have been able to purchase any home or would have had to purchase a less expensive home without a VA guaranteed loan. However, the program did not identify the homeownership assistance measure as a performance measure. Instead, the program identified this measure as a measure of workload and other program data, which a VA official said was to provide “contextual program information,” rather than information for gauging the program’s progress in achieving its intended purposes and holding the program accountable. The VA official added that the homeownership assistance measure is an “imperfect measure” because of its reliance on self-reporting by veterans. 
Similarly, at the time of our interviews with USDA officials, they said internal stakeholders were grappling with what the appropriate results for their SFH loan programs were. According to those officials, the struggle of trying to reach consensus among those stakeholders contributed to why they rated as a significantly challenging activity “developing measures for assessing the net effect of the program.” One agency official said that the programs’ intended purposes were “putting people [who are unable to get credit from other sources] in homes”; thus, to the extent that they put such borrowers in homes, the programs are having a net impact. However, another agency official suggested that the programs’ intended purposes also should include “improving the quality of life among rural residents” and “improving housing conditions and the economy in a given community or state.” He said that measures for gauging the programs’ progress toward such purposes would attempt to collect data on, for example, the extent to which putting people in homes is improving the quality of life among rural residents. USDA officials also said it is difficult to balance stakeholders’ interests. They said that they are expected to increase program service while also reducing program costs and minimizing default rates. However, they said this is difficult because the SFH programs were designed to offer credit to a population that the private sector would consider high risk. Specifically, to be eligible for a SFH direct loan, a borrower must have a family income that is “very low” to “low” (i.e., a family income under 80 percent of the median income in the area); and the borrower must be unable to get credit from any other source. Therefore, the program’s target population may be more likely to default. 
At the time of our review, USDA, VA, and Education were working with stakeholders in and outside of the selected programs, including OMB and Congress, to reach consensus on the most appropriate goals and measures for the programs. Difficulty in Separating the Effects of External Forces From Program Influences on Results The efforts of federal agencies often are but one factor among many that may influence whether, and the degree to which, their programs achieve their intended results. Our past and current work has found that many agencies have been challenged to separate out the influence that their program activities have had on the achievement of program results when those results also could have been influenced by external forces. Agency officials from all five of the selected credit programs reported as a significantly challenging activity “separating the impact of the credit programs’ activities from the impact of other factors external to those programs but contributing to the results achieved.” They generally cited economic trends; the role of third parties in helping the programs provide loans; and the existence of other federal financial aid programs (e.g., grants) as examples of forces external to their programs that can affect program results. For example, the foreclosure rate could be viewed as a measure related to the VA Loan Guaranty Program’s intended purpose of helping veterans and active duty personnel retain homes. However, VA reported that external forces, such as interest rates, unemployment, and the general state of the economy, can influence the foreclosure rate. A VA official said that because of such external forces, it has been difficult to confidently attribute a change in the foreclosure rate to the program’s activities and thus view it as a valid measure of the program’s performance. VA officials said that the program attempted to address this problem in 1993 but was not successful. 
Specifically, a VA program official said that at the encouragement of OMB, the program attempted to develop a model to help the program estimate its foreclosure rate and monitor its performance. However, he said that when the program implemented the model, it significantly overestimated the number of VA foreclosures and thus was not an adequate model for determining the external forces that could affect the rate. The VA official said that because of the many external forces that could affect the number of foreclosures, it was unclear if the model could be adjusted to help it adequately predict foreclosures and whether the value of doing so would be worth the cost of potentially making many adjustments to the model. Therefore, the program took another approach. The program established a surrogate measure, the FATS ratio (which, as mentioned earlier, is to provide data on the extent to which foreclosures would have been greater had VA not pursued alternatives to foreclosure), for monitoring its performance in assisting veterans to avoid foreclosures. The program views the FATS ratio as a more valid measure than the foreclosure rate for assessing the program’s performance in helping veterans and active duty personnel retain homes. USDA officials also said that separating the impact of their SFH guaranteed and direct loan programs’ activities from the impact of other external forces on, for example, the quality of life for rural residents is exceedingly difficult. These officials said that the quality of rural residents’ lives could be affected negatively by other, unrelated events, such as borrowers’ incurring health problems or financial hardship, or the closing of a military base eliminating jobs in the area. 
They said that although providing single-family housing loans may help to improve the quality of life among those rural residents, such improvement could also be due to other external forces, such as home loans being provided by other federal housing loan programs (e.g., programs administered by VA or the Department of Housing and Urban Development). Similarly, Education officials told us that loans that are issued or guaranteed through their agency’s direct or guaranteed student loan programs are among several types of financial aid that Education offers to help ensure access to postsecondary education. They said that students’ college participation and completion rates can be affected by borrowers’ eligibility for loans through the direct or guaranteed loan programs as well as by such external forces as the eligibility of borrowers for the other types of financial aid assistance provided by Education (e.g., grants); the extent of parental support for the borrowers attending school; and the borrowers having to financially support a family. Although Education established student participation and completion rates as performance measures, it did so to monitor the combined performance of its financial aid programs instead of the specific performance of either the direct or guaranteed student loan programs on those results. Education officials said that an approach the agency was taking to better understand the determinants of college enrollment—including financial aid obtained through direct loans, guaranteed loans, grants, or other financial aid programs—was contracting for a study of the effects of financial aid, including aid provided by these programs, and various external forces on this result. According to an Education official, this study is expected to be completed in early 1998. 
We believe the findings from this study may help to inform any future program evaluations assessing the impact of Education’s financial aid programs on results compared to the impact of external forces. In a prior report, we discussed how impact evaluations can help an agency confidently attribute the achievement of intended results to one or more of its programs by providing information on the extent to which those programs contributed to the results achieved relative to the impact of external forces. Lack of Relevant Baseline Data As we reported in June 1997, our prior work has shown that baseline and trend data on past performance can help agencies set realistic target levels of performance for their programs given the past performance of those programs. However, we also noted that because agencies often did not focus on having results-oriented performance information in the past, they generally have not collected such data. Thus, they have not had all of the baseline and trend data they believed they needed to set goals. Further, credit agencies, including VA, USDA, and Education, generally have had difficulty producing reliable performance data, particularly financial data, which executive and legislative branch decisionmakers need to make well-informed decisions. Agency officials from all five of the credit programs said that a lack of baseline data was why they rated one or more of the following activities as significantly challenging: “developing objective, quantifiable, and measurable annual program performance goals”; “determining a realistic target level of performance”; “developing unit cost information for the programs’ outputs”; and “developing unit cost information for the programs’ outcomes.” For example, VA officials said that some of their performance measures for their Loan Guaranty Program were new, and baseline data were thus not available on those measures. 
Consequently, VA did not have data on past performance to use in setting some of the program’s fiscal year 1998 target levels of performance and reported that those targets were “to be determined.” Education officials also attributed the challenges they had in determining realistic target levels of performance for Education’s direct student loan program to a lack of baseline data. According to those officials, the program had not been in existence long enough to have historical data on many of the program’s measures to use in setting fiscal year 1998 targets. They said that a lack of historical data for the direct loan program was a particular problem in terms of predicting borrower repayment behavior, since few borrowers had yet entered the repayment phase. Thus, to set the target for the program’s default rate measure, Education used historical data for the same measure established for Education’s guaranteed student loan program. Similarly, USDA officials said the agency’s SFH programs did not have information systems to collect data on some performance measures, such as the number of loans made in targeted geographic areas. Thus, according to those officials, the programs did not have an informed basis on which to set fiscal year 1998 target levels of performance for those measures and did not include them in their fiscal year 1998 budget submission to OMB. Rather, the programs included substitute measures for which they had information systems in place to collect the data (e.g., the number of rural families with improved or more suitable housing conditions). Lack of Reliable Financial Data Our prior and ongoing work and that of agencies’ internal or independent auditors have found that some credit agencies still have difficulty, despite numerous years of experience, in producing reliable financial data, such as credit programs’ subsidy rates—the estimated cost to the government from direct loans and loan guarantees. 
For example, USDA and Education received a disclaimer of an opinion from internal and independent auditors, respectively, on their fiscal year 1996 financial statements. In part, this was due to those agencies not being able to provide the data needed to (1) accurately reflect the cost to the government and (2) permit appropriate cost comparisons between credit and noncredit programs. VA received an unqualified opinion on its fiscal year 1996 financial statements from VA’s Office of Inspector General (OIG). However, the OIG audit, which included a review of the Loan Guaranty Program, found that the program did not reliably accumulate the financial information needed to comply with federal financial accounting standards, identified significant errors that required financial statement adjustments, and identified other errors where data compiled manually did not always reconcile with original source amounts. If successfully implemented, the CFO Act will help credit agencies resolve long-standing problems with data reliability. Further, in passing the Results Act, Congress emphasized that the usefulness of agencies’ performance data depends, to a large degree, on the reliability of those data. Therefore, the Results Act requires agencies to describe in their annual performance plans the means to be used to verify and validate performance data. We have suggested in prior reports that such information, including information about the reliability of credit agencies’ performance data, could be equally important for those agencies to disclose in their reports to assure report users of the quality of that data. One area in particular need of attention is the development of reliable financial information on the full cost and unit cost of a program, which is an integral part of measuring that program’s efficiency and cost effectiveness. An essential step in developing such information is the identification of individual program costs, such as direct labor. 
In that regard, unit cost information can be particularly useful in identifying trends and determining key cost drivers of the program. Agency officials from all five of the credit programs we reviewed said they have been challenged to develop unit cost information for the programs’ outputs and outcomes. They generally cited difficulties in allocating basic cost data to specific programs as a reason for this challenge. For example, USDA officials explained that its field offices are involved in administering the SFH direct and guaranteed loan programs as well as other USDA programs. They said difficulties in separating data on labor costs for the various programs, for example, have contributed to the challenge they have faced in developing unit costs for the SFH programs. Similarly, VA officials said that developing meaningful unit cost information for the Loan Guaranty Program has been a significantly challenging activity due to the lack of an adequate methodology, including VA’s inability to separate the data on the actual costs for the program, such as labor costs, from the costs for the several other programs that VA field offices administer. A VA official said that because of the difficulty it has had in isolating program cost data, the program uses a “very loose” process to calculate unit costs, which involves dividing the various resource levels authorized for the program, such as staffing levels (i.e., authorized full-time equivalents) by the activity level during the year (e.g., number of loans guaranteed). New accounting standards developed by the Federal Accounting Standards Advisory Board (FASAB) require federal agencies subject to the CFO Act and Results Act to collect relevant and reliable data on the full costs of carrying out a mission or producing products or services. 
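The “very loose” unit-cost process the VA official described, dividing an authorized resource level by the year’s activity level, amounts to a one-line calculation. The sketch below illustrates it with invented figures; the function name and inputs are ours, not VA’s:

```python
def loose_unit_cost(authorized_resources: float, activity_count: int) -> float:
    """Sketch of the 'very loose' unit-cost process described in this
    report: dividing a resource level authorized for the program (e.g.,
    a dollar amount, or full-time equivalents) by the activity level
    during the year (e.g., number of loans guaranteed). This ignores
    the allocation of shared field-office costs across programs, which
    is precisely the difficulty the agencies cited."""
    return authorized_resources / activity_count

# Invented figures: $50 million in authorized resources spread over
# 250,000 guaranteed loans
print(loose_unit_cost(50_000_000, 250_000))  # 200.0 (dollars per loan)
```

Because shared costs such as field-office labor are not isolated by program, a figure computed this way overstates or understates the true unit cost to the extent that authorized resources also support other programs.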
Although these standards were scheduled to be effective for all federal programs beginning with fiscal year 1997, because of serious shortfalls in agencies’ cost accounting systems, FASAB extended the date by 1 year to fiscal year 1998. The standards took effect on October 1, 1997. Credit agency officials said their respective agencies were establishing information systems to collect needed cost data. Specifically, VA officials said their agency was looking at the use of activity-based costing (ABC) to develop more meaningful unit cost information. According to an Education official, his agency also is developing a system that should be able to provide at least some data on unit costs by 1999. USDA reported in its fiscal year 1998 budget presentation that it was working to develop the data for its SFH direct and guaranteed loan programs’ measure on the “cost of housing a family per recipient household.” Progress in Developing Common Performance Measures Has Been Limited According to OMB and the Working Group, comparing results using common measures across credit programs allows program managers and other decisionmakers to identify best practices among those programs that have the potential for improving other credit programs’ performance. In addition, the Working Group anticipated that agencies that administer credit programs could include such measures in their annual performance plans and reports under the Results Act. By September 1996, a Working Group task force had proposed a set of common financial and programmatic performance measures for federal credit programs. Some of the proposed measures—namely, financial measures for meeting the annual budgeting and reporting requirements of DCIA and FCRA—have been adopted by major credit agencies. However, two general problems have limited the Working Group’s progress in developing common performance measures for credit programs. 
First, as previously noted, several credit programs lack relevant program performance and financial baseline data. Second, the Working Group has been unable to reach consensus on the appropriateness of decisionmakers using some of the task force’s proposed measures to assess the performance of individual credit programs or to compare that performance against the performance of other credit programs. In addition, OMB does not intend to require credit programs to adopt common performance measures when consensus about the appropriateness of such measures has not been achieved. Common Performance Measures for Credit Programs Have Been Proposed, and Some Have Been Adopted In July 1995, the Working Group established a task force to develop common performance measures that could help agencies and other decisionmakers make relevant comparisons of the results of credit programs. By September 1996, the task force had proposed a set of common financial and programmatic measures in the following four areas: (1) Financial performance. Measures in this area include total receivables; total delinquent debt; default rates; actual versus projected subsidy rates; and administrative costs, such as the costs of extending credit and servicing loans. (2) Program performance in achieving desired loan characteristics. Measures in this area include the percentage of loans going to borrowers who would otherwise not have access to private credit and the percentage of borrowers pleased with the timeliness and quality of credit program service. (3) Program effects on society. Measures in this area include (1) intended effects, such as “supporting investment important to the economy” as monitored by, for example, the amount and quality of low-income housing financed (home loan programs) and business investment financed (business loan programs); and (2) unintended effects, such as borrowers accumulating excessive debt burden. 
(4) Program “additionality.” Measures in this area indicate the results achieved by the program by providing financial assistance to borrowers that private markets will not serve. An example of a measure in this area is the net increase in homeownership as a result of the program supplementing versus substituting for private financing. OMB and the Working Group’s task force also have encouraged credit programs to establish, where appropriate, program-specific measures and explanatory data to help ensure a complete assessment of their programs’ performance and to explain the results to users of performance information. In its June 1997 report, 1997 Federal Financial Management Status Report and Five-Year Plan, OMB and the CFO Council reported that major credit agencies have adopted the proposed common financial measures for assessing their debt collection activities. For example, credit agencies are to collect data on total receivables, total delinquent debt, and total collections for their credit programs. According to an OMB official, these and other common measures that the task force proposed in the area of financial performance are measures that major credit agencies adopted for meeting the annual budgeting and reporting requirements of DCIA and FCRA. However, he said that those agencies have not yet adopted measures that the task force proposed under the other three areas—measures for monitoring and improving a credit program’s (1) performance in achieving desired loan characteristics, (2) effects on society, and (3) additionality. Two General Problems Have Limited Progress in Developing Common Performance Measures Although the common financial measures proposed by the task force have been adopted, two general problems have limited the Working Group’s progress in developing common performance measures. First, as previously noted, several credit programs lack relevant program performance and financial baseline data. 
Second, the Working Group has been unable to reach consensus on the appropriateness of decisionmakers using some of the task force’s proposed measures to assess the performance of individual credit programs or to compare that performance against the performance of other credit programs. For example, some programs give credit only to persons who are unable to obtain credit from other sources; other programs give credit to anyone who is entitled to the programs’ benefits, regardless of his or her access to credit. Thus, officials in some credit agencies questioned whether it would be appropriate to make comparisons among the default rates of credit programs, because the financial characteristics of borrowers for each program may be different.

Relevant Data on Several of the Common Measures Proposed Are Lacking

Agency officials told the task force that for several of the proposed measures, their agencies’ credit programs generally did not collect data relevant to those measures, collected incomplete data, or did not routinely collect such data. Such measures included the percentage of borrowers who were pleased with the timeliness and quality of credit program service and measures for monitoring the effects of credit programs on society and program additionality. For example, common measures proposed by the task force for monitoring the performance of home loan programs and business loan programs in “supporting investment important to the economy”—one of four intended effects on society the task force proposed—included the amount and quality of low-income housing financed and business investment financed, respectively. Officials from VA, the Department of Housing and Urban Development (HUD), and the Small Business Administration told the task force that the credit programs at their respective agencies generally did not collect such data.
Moreover, even when data existed, our prior and ongoing work and audits by credit agencies’ inspectors general and others have consistently disclosed serious weaknesses in agencies’ systems, which have affected the reliability of data that are used to account for and manage credit programs.

The Working Group Has Been Unable to Reach Consensus on the Appropriateness of Some of the Proposed Measures

Consistent with the views of OMB and the Working Group, our work over the last few years has recognized that common performance measures for similar programs can provide important information for permitting comparisons of the results of those programs and the methods used to achieve those results. Such information could help program managers identify credit program performance gaps; set improvement goals; improve credit program processes; and inform other decisionmakers, such as OMB and Congress. However, we found that members of the Working Group have been unable to reach consensus on the appropriateness of some of the common performance measures proposed by the task force. Specifically, officials in some agencies questioned whether data collected from some of the proposed common measures would be meaningful for assessing the performance of their agencies’ credit programs. For example, VA and Education officials said that their agencies’ Loan Guaranty Program and direct and guaranteed student loan programs, respectively, are entitlement programs in which the government is obligated to give credit to anyone who qualifies for the programs’ benefits, regardless of his or her access to credit. Thus, these officials questioned whether a common programmatic measure proposed by the task force, “percent of loans or guarantees originated going to borrowers who would otherwise not have access to private credit,” was meaningful for their programs.
As mentioned earlier, to be eligible for a USDA SFH direct loan, a borrower must have a family income that is “very low” to “low,” and the borrower must be unable to get credit from any other source. A USDA official said that the target and result for this proposed measure would always be 100 percent; thus, he also questioned whether the measure was meaningful for that program. In addition, for certain measures, officials in some credit agencies questioned whether the data collected would be appropriate to use in making comparisons among different credit programs’ performance results, such as the programs’ default rate, because the financial characteristics of borrowers for each program may be different. For example, USDA officials said that because the borrower must be unable to obtain private credit, the program’s target population may be more likely to default. Thus, according to USDA officials, it may be inappropriate to compare, for example, the actual default rate of the SFH direct loan program with the actual default rate of the VA Loan Guaranty Program, which is an entitlement program. We agree that common measures need to be carefully explained to help ensure that significant program differences are properly interpreted. However, it is not clear that such differences outweigh the potential usefulness of common measures. Credit agencies have generally agreed with our suggestion that they provide explanatory information where necessary in agency reports to portray program differences more appropriately and help users of performance information understand the reported performance. Also related to the problem of comparable data, we recently reported that agencies’ use of inconsistent definitions for their programs’ measures could hamper decisionmakers’ use of data collected from those measures in planning, comparing performance, and reporting on performance achieved. 
Our June 1997 report noted that some credit agencies differ in how they classify previously delinquent debt on which borrowers are currently making payments. Some reclassify such debt as “current,” but others keep it in a delinquent category regardless of the current payment status. Although such classification practices may be suitable within an agency, they make it difficult to compare agency performance or aggregate data for similar programs. For example, VA loans maintain their delinquent status until the delinquency is repaid or written off. Conversely, a home loan program within HUD reclassified single-family delinquent loans to a current repayment status when borrowers complied with forbearance terms, which typically included making partial mortgage payments for up to 3 years. In our June report, we recommended that the Department of the Treasury’s Financial Management Service work with major credit agencies and OMB to help those agencies consistently report on delinquent debt or disclose their inconsistencies. Treasury, OMB, and the major credit agencies generally agreed with that recommendation, and the agencies commented that consistent application of governmentwide debt collection reporting criteria is essential. Similarly, we observed that when consistent definitions do not exist among credit programs, it is difficult to know whether results reported would be readily comparable. For example, one of the common programmatic measures proposed by the task force for monitoring loan characteristics was the “percent of borrowers who are pleased with the timeliness and quality of credit program service.” However, for this proposed task force measure, it is unclear what is meant by “timeliness” and “quality” of service, because each credit program may have a different interpretation. 
Education’s guaranteed and direct student loan programs, for example, established a broad measure of “overall borrower satisfaction”; VA’s Loan Guaranty Program established more specific borrower satisfaction measures—the percent of respondents surveyed who will say (1) they are satisfied with their contact with VA; (2) they are satisfied with the time it took to obtain a Certificate of Eligibility; and (3) their loan did not take longer to process than expected as a result of a delay blamed on VA (which is to help the program monitor when program staff may need to work with the lender in identifying reasons for the delay in loan processing time, toward improving performance on this measure). Thus, it is unclear whether the data collected would be comparable if this proposed measure of the task force were used by all credit programs. The Working Group anticipated that agencies that administer credit programs could include common financial and programmatic measures in their annual performance plans and reports under the Results Act. According to OMB’s Senior Advisor for Cash and Credit Management, at an upcoming meeting of the Working Group OMB intends to ask members whether they want to continue to address their concerns about adopting additional common performance measures for credit programs. However, he said that OMB is not taking a prescriptive role in directing agencies’ performance measurement activities. Rather, he said the administration wants agencies to take the initiative on such activities. According to this official, if members choose not to address those concerns, OMB does not intend to require credit programs to adopt common performance measures when consensus about the appropriateness of such measures has not been achieved. Thus, it is unclear when, or if, agencies that administer credit programs will include common measures for those programs in their annual performance plans under the Results Act. 
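Both comparability problems discussed in this section, borrower pools that differ across programs and delinquency definitions that differ across agencies, can be illustrated with a brief sketch. All numbers, field names, and rules below are hypothetical for illustration only; they are not actual agency data or systems.

```python
# 1) Different borrower pools: identical servicing, different default rates.
def default_rate(loans):
    """Share of loans in default (True counts as 1, False as 0)."""
    return sum(loan["defaulted"] for loan in loans) / len(loans)

# A means-tested program serves borrowers private lenders turned away;
# an entitlement program serves anyone who qualifies for benefits.
means_tested = [{"defaulted": True}] * 8 + [{"defaulted": False}] * 92
entitlement = [{"defaulted": True}] * 3 + [{"defaulted": False}] * 97
gap = default_rate(means_tested) - default_rate(entitlement)  # about 0.05

# 2) Different delinquency definitions: the same loan counts differently.
loan = {"ever_delinquent": True, "arrears_repaid": False,
        "complying_with_forbearance": True}

def status_strict(l):
    """Stays delinquent until arrears are repaid (the VA-style practice
    the report describes)."""
    if l["ever_delinquent"] and not l["arrears_repaid"]:
        return "delinquent"
    return "current"

def status_forbearance(l):
    """Reclassified as current once forbearance terms are met (the
    HUD-style practice the report describes)."""
    if (l["ever_delinquent"] and not l["arrears_repaid"]
            and not l["complying_with_forbearance"]):
        return "delinquent"
    return "current"

print(status_strict(loan))       # delinquent
print(status_forbearance(loan))  # current
```

In the first part, the 5-percentage-point gap says nothing about program management; it reflects the target populations. In the second, the same loan is "delinquent" under one agency's rule and "current" under another's, so aggregating or comparing the two agencies' delinquency figures without explanatory information would mislead.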
Conclusions

The credit programs we reviewed have established goals and performance measures that appeared to be generally related to the programs’ intended purposes and set target levels of performance for most of their respective measures. In addition, our prior work suggests that the development of common measures, where appropriate, can provide important information for permitting comparisons of similar programs’ results and the methods they used to achieve those results. We have suggested that agencies’ annual performance plans and reports under the Results Act can serve as suitable vehicles for developing such measures and providing such information.

The Working Group has focused on developing common financial and programmatic performance measures for credit programs and anticipated that agencies that administer credit programs could include such measures in their annual performance plans and reports under the Results Act. However, two general problems have limited the Working Group’s progress in developing common performance measures for credit programs: (1) the lack of relevant program performance and financial baseline data for several credit programs and (2) the inability among members of the Working Group to reach consensus on the appropriateness of some of the proposed measures.

OMB does not intend to require credit agencies to adopt common performance measures when consensus about the appropriateness of such measures has not been achieved. We agree that OMB should not force the use of common measures when concerns about their appropriateness exist. However, at the time of our review, the Working Group had not resolved those concerns and had yet to decide how and when those concerns would be addressed; thus, it is unclear whether OMB and the credit agencies will maintain their current level of attention to developing common measures.
Also unclear is the extent to which agencies that administer credit programs will include common measures for those programs in their annual performance plans that could provide useful information to decisionmakers interested in making performance and cost comparisons. We recognize the difficulty the Working Group is facing in reaching consensus on common performance measures and that significant differences in program characteristics may limit the usefulness of some measures for broad, cross-program comparisons. However, we believe the potential benefits that could be realized from developing common performance measures, where appropriate, underscore the importance of OMB and the credit agencies continuing their efforts to develop and reach consensus on such measures. In addition, because the development and use of performance measures, especially common measures that can be used across programs and agencies, are in the early stages of implementation, it will be especially useful for decisionmakers to evaluate early experiences to identify successful, as well as unsuccessful, approaches. To that end, documented information on the measures considered, how they are used and should be interpreted, and how they can be improved will be helpful to agencies in further achieving the purposes of the Results Act.

Recommendations to the Director of OMB

Building on recommendations and suggestions we have made in prior reports, we recommend that the Director of OMB sustain OMB’s efforts to work with major credit agencies to use annual performance planning under the Results Act as a vehicle for developing common performance measures across credit programs, where appropriate.
In doing so, we recommend that beginning with those agencies’ fiscal year 2000 annual performance plans, the Director of OMB require each agency that administers credit programs to identify in its plans (1) performance measures the agency is using for its credit program(s) that are the same as those used by other credit programs and the strengths and limitations of using those measures to make performance and cost comparisons among those programs; and (2) what actions, if any, are being taken or could be taken to refine the agency’s performance measurement efforts to address the identified limitations to using existing measures to make performance and cost comparisons across credit programs. These or some comparable requirements would serve a twofold purpose. First, they would help ensure that the search for common measures continues. And second, they would document the results of those efforts in a way that would (1) permit further analysis directed at the identification of additional common measures and (2) facilitate an understanding of any limitations of using existing common measures to compare results across credit programs.

Agency Comments and Our Evaluation

On December 11, 1997, we requested comments on a draft of this report from the Secretaries of Education and Agriculture, the Acting Secretary of VA, and the Director of OMB or their designees. On December 17, 1997, the liaison to GAO from USDA’s Rural Development mission area, which administers the SFH direct and guaranteed loan programs, said the Department generally agreed with the draft report’s factual material, conclusions, and recommendations. A rural development official later provided minor technical suggestions, which we included in the report as appropriate.
Similarly, on December 19, 1997, the liaison to GAO from the Department of Education also said the Department concurred with the draft report’s general findings and later provided minor technical suggestions, which we included in the report as appropriate. On January 8, 1998, the VA liaison to GAO said the Department also generally agreed with the draft report’s factual material, conclusions, and recommendations, except for that part of the recommendation requiring credit agencies to include a discussion about common measures in their fiscal year 1999 annual performance plans. This comment was consistent with the comment provided to us on January 6, 1998, by a representative of VA’s Performance Analysis Service within VA’s Office of Budget. Specifically, that representative told us that VA would not likely have time to provide a meaningful discussion about common measures in its fiscal year 1999 plan because VA (1) had yet to achieve consensus among major stakeholders on the common measures, (2) had not done the analysis necessary to provide the discussion about common performance measures that our draft report recommended, and (3) is to submit the agency’s fiscal year 1999 performance plan to Congress within the next few weeks. OMB officials had a similar comment on the feasibility of requiring a discussion about common measures in credit agencies’ fiscal year 1999 performance plans, which we discuss later in this section. In response to this comment from both VA and OMB officials, we removed from the recommendation our suggestion that such a discussion be included in credit agencies’ fiscal year 1999 annual performance plans. However, we retained the recommendation that the Director of OMB require credit agencies to include a discussion about common measures in those agencies’ fiscal year 2000 annual performance plans and subsequent performance plans. 
We continue to believe that using annual performance planning under the Results Act as a vehicle for developing and discussing common performance measures across credit programs, where appropriate, could provide decisionmakers with important information and help agencies further achieve the purposes of the Results Act. Such information could be useful to decisionmakers in comparing similar programs’ results and the methods they used to achieve those results and for understanding how such measures are used, should be interpreted, and could be improved. Further, in a conversation with us on December 17, 1997, a representative of VA’s Loan Guaranty Service commented that developing information systems to collect data from common performance measures may be costly and that before VA developed systems to collect such data, the costs of doing so should be weighed against the benefits. We agree that collecting needed data from common measures is a problem that credit agencies face, as discussed in the draft report. We also have suggested that annual performance plans provide agencies with the opportunity to alert Congress to the problems they have had or anticipate having in collecting needed data, including the cost and data quality trade-offs associated with various collection strategies. The representative of VA’s Loan Guaranty Service also provided minor technical suggestions, which we included in this report where appropriate. On January 7, 1998, we met with the OMB Senior Advisor for Cash and Credit Management and the Senior Advisor to the Deputy for Management, who said the report would serve as a valuable tool and resource in OMB’s continuing efforts to encourage and work with major credit agencies to effectively implement the Results Act. 
However, those officials cautioned that based on OMB’s experiences in working with major credit agencies to draft their fiscal year 1999 performance plans and to identify appropriate, results-oriented common performance measures for credit programs, developing and reaching consensus on credit program performance measures will continue to be difficult, time-consuming, and iterative. They said that those experiences and OMB’s review of major credit agencies’ fiscal year 1999 performance plans suggested to them that the priority of those agencies at this point in the implementation of the Results Act needs to be on ensuring the quality of the performance goals and measures for their individual credit programs. According to these OMB officials, in developing program-specific as well as common performance measures for credit programs, major credit agencies need to continue working to reach consensus among the key stakeholders of the agencies’ credit programs and to develop information systems for collecting needed performance data, which will be challenging. Moreover, the OMB officials said that agencies are concluding the preparation of their fiscal year 1999 annual performance plans that will accompany the President’s budget submission to Congress in February 1998. The officials said that those plans reflect budget, policy, and programmatic decisions already made in the course of preparing the budget. Thus, the officials believed it would not be feasible for the Director of OMB to direct credit agencies to include in their fiscal year 1999 plans the discussion about common measures that our draft report recommended. OMB officials said that given the challenges of developing appropriate, results-oriented common performance measures for credit programs, it is unclear when or if such measures could be adopted by major credit agencies. 
OMB’s Senior Advisor for Cash and Credit Management suggested that as the Working Group’s efforts advance, two common results-oriented performance measures that may be considered appropriate for agencies that administer credit programs are (1) the number of loans a program made that are repaid successfully and (2) the percentage of customers satisfied with the program. However, he said that developing measures that would help isolate a credit program’s contribution to achieving a particular common result from the contribution of external factors may not be possible for all credit programs. The OMB officials also noted that performance information is just one key factor among many that will go into decisionmaking on management and budget policy issues. The OMB officials told us that OMB will ensure in calendar year 1998 that developing common results-oriented performance measures across credit programs is a priority agenda item for discussions among the Working Group members. They said that as part of these discussions, major credit agencies could share with one another their experiences in developing their individual fiscal year 1999 performance plans and their congressional committees’ reactions to those plans. On the basis of agencies’ experiences and congressional reactions, the OMB officials said they believe that OMB and the major credit agencies would have a better foundation from which to discuss common performance measures for the agencies’ credit programs. According to the OMB officials, this experience will provide OMB and the agencies with the basis for determining the feasibility of incorporating common measures and a discussion about such measures into future annual performance plans, where appropriate. We believe that OMB’s planned approach to use major credit agencies’ fiscal year 1999 performance planning efforts as the foundation for discussions among these agencies on common performance measures is responsive to the intent of our recommendation. 
In this regard, in response to OMB officials’ comments about the feasibility of the Director of OMB requiring credit agencies to include in their fiscal year 1999 annual performance plans the discussion about common measures that our draft report recommended, as mentioned earlier, we removed the reference to those agencies including such a discussion in their fiscal year 1999 plans. However, as also mentioned earlier, we retained the recommendation for those agencies to include such a discussion in their fiscal year 2000 annual performance plans and subsequent performance plans. Such discussions can serve as vehicles for highlighting many of the other cautionary notes that the OMB officials raised, such as the difficulties in developing measures that seek to isolate a credit program’s unique contributions to a particular result.

We are sending copies of this report to the Majority Leader, House of Representatives; the Chairman and Ranking Minority Member, Committee on Government Reform and Oversight, House of Representatives; the Chairman and Ranking Minority Member, Committee on Governmental Affairs, United States Senate; the Secretaries of Education and Agriculture; the Acting Secretary of VA; and the heads of agencies that administer credit programs and are represented on the Federal Credit Policy Working Group. We also will make copies available to others on request. The major contributors to this report are listed in appendix IV. Please contact me on (202) 512-8676 if you have any questions.

Overview of the Results Act

The Results Act is the primary legislative framework through which agencies will be required to set strategic goals, measure performance, and report on the degree to which goals were met.
It starts by requiring each federal agency to develop a strategic plan that covers a period of at least 5 years and includes the agency’s mission statement; identifies the agency’s long-term strategic goals; and describes how the agency intends to achieve those goals through its activities and through its human, capital, information, and other resources. The first strategic plans that the Act required agencies to develop were to be completed by September 30, 1997.

Also, the Act requires each agency to submit to the Office of Management and Budget (OMB), beginning for fiscal year 1999, an annual performance plan. The first annual performance plans were to be submitted to OMB in the fall of 1997. The annual performance plan is to provide the direct linkage between the strategic goals outlined in the agency’s strategic plan and what managers and employees do day to day. In essence, this plan is to contain the annual performance goals the agency will use to gauge its progress toward accomplishing its strategic goals and identify the performance measures the agency will use to assess its progress. Also, OMB will use individual agencies’ performance plans to develop an overall federal government performance plan that OMB is to submit annually to Congress with the President’s budget, beginning for fiscal year 1999.

The Results Act also requires that each agency submit an annual report to the President and to the appropriate authorization and appropriations committees of Congress on program performance for the previous fiscal year (copies are to be provided to other congressional committees and to the public upon request). The first of these reports, on program performance for fiscal year 1999, is due by March 31, 2000; and subsequent reports are due by March 31 for the years that follow. However, for fiscal years 2000 and 2001, agencies’ reports are to include performance data beginning with fiscal year 1999.
For each subsequent year, agencies are to include performance data for the year covered by the report and 3 prior years. Finally, in crafting the Results Act, Congress recognized that managerial accountability for results is linked to managers having sufficient flexibility, discretion, and authority to accomplish desired results. The Act authorizes agencies to apply for managerial flexibility waivers in their annual performance plans beginning with fiscal year 1999. The authority of agencies to request waivers of administrative procedural requirements and controls is intended to provide federal managers with more flexibility to structure agency systems to better support program goals. The nonstatutory requirements that OMB can waive under the Results Act generally involve the allocation and use of resources, such as restrictions on shifting funds among items within a budget account. Agencies must report in their annual performance reports on the use and effectiveness of any managerial flexibility waivers that they receive.

Objectives, Scope, and Methodology

Our first objective was to identify goals and measures established by the selected credit programs that related to the programs’ intended purposes and determine whether the programs had set target levels of performance for assessing their progress in achieving their desired results. Under the Results Act, target levels of performance are to enable a comparison of planned versus actual results achieved for a given year. Our second objective was to identify the challenges agency officials cited in developing performance information, including goals and measures, for the selected programs and any approaches those programs were taking to address those challenges. Our third objective was to describe the status of the Working Group’s effort to develop common performance measures for federal credit programs.

For our review, we selected a nonrandom, purposive sample of five federal credit programs at three agencies.
These programs were the Department of Veterans Affairs’ Loan Guaranty Program; the Department of Education’s William D. Ford Direct Loan Program and its Federal Family Education Loan Program (referred to in this report as Education’s direct and guaranteed student loan programs, respectively); and the Department of Agriculture’s Single-Family Housing (SFH) direct loan program and guaranteed loan program. We selected for our review credit programs that varied in terms of type of program (e.g., housing and education loans); mode of credit delivery (e.g., direct and guaranteed loans); and program size as measured by the amount of outstanding loans. According to data reported by OMB in the fiscal year 1998 budget and agency data, of the total amount of federal credit outstanding in fiscal year 1996 for guaranteed loans ($805 billion) and direct loans ($165 billion), these five programs represented about 32 and 18 percent, respectively. The smallest and largest of the guaranteed loan programs accounted for slightly less than 1/2 percent and 19 percent, respectively, of the fiscal year 1996 credit outstanding in loan guarantees governmentwide. Similarly, the smallest and largest of the direct loan programs held 7 percent and 11 percent, respectively, of the fiscal year 1996 credit outstanding in direct loans governmentwide. Because of the small and nonrandom nature of our sample, our observations and analyses are not generalizable to other federal credit programs. To address the first part of our first objective (i.e., to identify goals and measures established by the selected credit programs that related to the programs’ intended purposes), we compared goals and measures established by the programs as of May 1, 1997, to the programs’ intended purposes as identified by those programs or their respective agencies. 
According to agency officials, these goals and measures were generally the same ones the programs submitted to OMB and Congress with their respective agencies’ fiscal year 1998 budget presentations. The programs also were proposing to include these goals and measures for their agencies’ fiscal year 1999 annual performance plans under the Results Act. To determine whether a goal or measure was related to a program’s intended purposes, we reviewed available agency and credit program documentation for (1) a description of the program’s intended purposes and (2) a discussion that reasonably related that particular goal or measure to the program’s intended purposes. When agency documentation did not contain such a discussion, we examined the wording and considered the meaning of each program’s goals and measures and compared them to the program’s intended purposes to identify relationships between them. We also interviewed agency officials about the programs’ intended purposes and asked those officials to comment on the relationships we identified between the programs’ goals and measures and intended purposes. To address the second part of our first objective (i.e., to determine whether the selected programs set fiscal year 1998 target levels of performance for assessing their progress in achieving desired results), we identified those measures for which the programs had either (1) identified fiscal year 1998 targets or (2) reported prior year baseline data on those measures and indicated how performance on those measures was to change (i.e., increase or decrease) in fiscal year 1998 relative to the baseline. Our review did not address the reasonableness of the processes or methods the programs used to determine how to assess progress or establish target levels toward achieving the programs’ intended purposes; determine whether other, more appropriate, measures existed; or evaluate the feasibility of the targets the programs established.
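The screening rule just described can be sketched as follows. The measure names, field names, and values are hypothetical; the rule itself mirrors the two-part test the report describes: a measure counts as having a target level if it has an explicit fiscal year 1998 target, or has prior-year baseline data plus an indicated direction of change.

```python
# Hypothetical measure records; values are invented for illustration.
measures = [
    {"name": "default rate", "fy1998_target": 0.05,
     "baseline": None, "direction": None},
    {"name": "borrower satisfaction", "fy1998_target": None,
     "baseline": 0.82, "direction": "increase"},
    {"name": "loans originated", "fy1998_target": None,
     "baseline": None, "direction": None},
]

def has_target_level(m):
    """True if the measure has an explicit FY 1998 target, or baseline
    data plus an indicated direction of change relative to it."""
    return (m["fy1998_target"] is not None or
            (m["baseline"] is not None and m["direction"] is not None))

with_targets = [m["name"] for m in measures if has_target_level(m)]
print(with_targets)  # ['default rate', 'borrower satisfaction']
```

Under this rule, the third hypothetical measure would be counted as lacking a target level, since it has neither an explicit target nor a baseline with a stated direction of change.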
To address the first part of our second objective (i.e., to identify the challenges agency officials said they faced in developing performance information for the selected programs), we developed a data collection instrument that listed 49 activities that we identified as being associated with developing performance information. Examples of these activities included “determining a realistic target level of performance for annual performance goals” and “developing measures for assessing the net effect of the program compared with what would have occurred in the absence of the program.” To identify these activities, we referred to key steps and practices identified in our Executive Guide: Effectively Implementing the Government Performance and Results Act and other work that we had under way assessing the challenges agencies were facing in implementing performance measurement. We sent the instrument to agency officials responsible for and involved in the development of goals and measures for the selected programs and asked those officials to rate, using a five-point scale from “little or no” challenge to a “very great” challenge, how great a challenge each of the 49 activities was to perform for those programs. Officials also could indicate that they had not engaged in a particular activity. For purposes of this report, we refer to activities that any of the agency officials rated as a “great” or “very great” challenge as significantly challenging activities. We then interviewed those officials to discuss why they rated certain activities as significantly challenging. We analyzed their responses and related documentation to identify general challenges that led officials to report those activities as significantly challenging for the selected programs.
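The rating convention above can be sketched with hypothetical data (the activity names and scores below are invented; the actual instrument listed 49 activities): an activity counts as significantly challenging if any responding official rated it 4 ("great") or 5 ("very great") on the five-point scale.

```python
# Hypothetical ratings, one list of official responses per activity
# (1 = "little or no" challenge ... 5 = "very great" challenge).
ratings = {
    "determining realistic targets": [3, 4, 2],
    "measuring net program effect":  [5, 5, 4],
    "collecting baseline data":      [2, 3, 3],
}

# An activity is "significantly challenging" if ANY official rated it
# "great" (4) or "very great" (5).
significant = [activity for activity, scores in ratings.items()
               if any(score >= 4 for score in scores)]
print(significant)
```

With these invented scores, the first two activities would be flagged and the third would not. Note that this "any rater" rule is sensitive to a single high rating, which is why the report pairs the flagged activities with follow-up interviews rather than treating the counts alone as findings.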
To address the second part of our second objective (i.e., to identify any approaches that the selected programs were taking to address the challenges they identified to developing performance information), we interviewed the agency officials who responded to our data collection instrument to obtain information on the approaches they were taking to address those activities they had identified as significantly challenging and analyzed agency documentation. We also asked these officials to comment on the way in which we described the approaches the programs were taking to address the challenges in developing performance information that they had identified. We also considered prior and ongoing work we have done on the efforts of VA, Education, USDA, and other credit agencies to implement various credit and financial management reforms. To address our third objective (i.e., to describe the status of the Working Group’s effort to develop common performance measures for federal credit programs), we reviewed various documentation from OMB and members of the Working Group that described its effort to develop common performance measures for credit programs and the views of agency officials on those measures. We also talked to OMB officials and members of the Working Group, including agency officials at those agencies administering the programs we selected for our review, to obtain their views on the common performance measures proposed. Our review did not address the reasonableness of the processes or methods the Working Group’s task force used in determining how to assess progress for federal credit programs or determine whether other measures existed that may be more appropriate. Selected Credit Programs’ Intended Purposes and Examples of Goals and Measures They Presented in Their Fiscal Year 1998 Budgets Help veterans and active duty personnel purchase and retain homes in recognition of their service to the nation. 
Treat all veterans and other participants in the program in a courteous, responsive, and timely manner. Examples of general goals and performance measures related to the program’s intended purposes: Assist veterans in obtaining home mortgage loans. Percent of respondents to the Lender Customer Satisfaction Survey who say they are satisfied with their overall interaction with VA. The general goal defines, in part, the intended purpose of helping veterans purchase homes. VA identified this goal and measure as related on the basis of the following rationale: Lender satisfaction addresses an intermediate result expected to lead to achieving this goal and intended purpose. Specifically, maximizing lenders’ satisfaction with their dealings with VA employees is expected to encourage lenders to participate in the program, which is expected to expand financing opportunities (i.e., available mortgage loans) for helping veterans purchase homes. Assist veterans in avoiding foreclosures. Foreclosure Avoidance Through Servicing (FATS) Ratio. The general goal defines this intended purpose, and VA identified this goal and measure as related. The FATS Ratio is to provide data on the extent to which foreclosures would have been greater had VA not pursued alternatives to foreclosure, such as intervening with the holder of the loan on behalf of the borrower to set up a repayment plan. Assist veterans in obtaining home mortgage loans. Percent of respondents to the Veteran Customer Satisfaction Survey who say they are satisfied with their overall contact with VA. VA identified this goal and measure as related. This measure defines, in part, “assisting” veterans in obtaining homes because it addresses, in summary form, treating veterans in a courteous and responsive manner. Assist veterans in obtaining home mortgage loans. Percent of respondents to the Veteran Customer Satisfaction Survey who say they are satisfied with the time it took to obtain a Certificate of Eligibility. 
VA identified this goal and measure as related. This measure defines, in part, “assisting” veterans in obtaining homes because it addresses treating veterans in a timely manner. Before a lender issues a VA guaranteed mortgage loan, VA must certify to the lender that the borrower is a veteran who is eligible for a loan guaranteed by VA. VA’s Loan Guaranty Program (continued): Operate in the most efficient manner possible to minimize costs and ensure the best use of the taxpayer’s dollar. Provide homeownership loans to: — very-low-income and low-income families (i.e., families who have incomes under 80% of median) who do not own adequate housing and cannot obtain credit from other sources; — eligible farm owners for housing for themselves or for farm laborers. Provide “supervised credit” to many rural borrowers to help them maintain their homes in times of financial crises through credit counseling, workout agreements, and moratoriums. Provide homeownership opportunities to moderate-income rural residents (i.e., between 80% and 115% of median). Utilize private lenders to provide mortgages to borrowers who would be unable to obtain credit without the guarantee. Examples of general goals and performance measures related to the program’s intended purposes: Efficient credit and program management. Percent of early defaults of all loans originated. The goal describes the strategy and addresses intermediate results expected to lead to minimizing costs and ensuring the best use of the taxpayer’s dollar. On the basis of the following rationale, the measure, which VA identified as related to this goal, addresses this intended result: Once a loan is put into default status, collection activities are initiated, which can ultimately include the foreclosure of the borrower’s home. Such activities are costly to the government and, thus, the taxpayer. 
Early defaults (i.e., defaults within 6 months of origination) are more likely than a later default to be due to a deficiency in the underwriting of the loans. Thus, efficient credit and program management is expected to reduce early defaults and, therefore, minimize costs and ensure the best use of the taxpayer’s dollar. No general goal for this intended purpose was identified by the program or agency. Number of rural families with improved or more suitable housing conditions. This measure includes the number of direct loans provided, or issued, to eligible borrowers. Thus, the rationale for the relationship is that the program provides loans, or “housing opportunities,” for improved or more suitable housing to very-low- to low-income families and farm owners who do not own adequate housing and who cannot obtain credit from other sources. No general goal for this intended purpose was identified by the program or agency. Percentage of borrowers current. The rationale for the relationship between this measure and helping rural borrowers maintain their homes follows: Rural borrowers encountering financial crises are likely to miss scheduled payments and thus are not likely to be counted as “current” in making scheduled loan payments. When a borrower is viewed as not current, the borrower’s loan is delinquent and put into default status. Once a loan is put into default status, collection activities are initiated, which can ultimately include the foreclosure of the borrower’s home. No general goal for this intended purpose was identified by the program or agency. Number of rural families with improved or more suitable housing conditions. This measure includes the number of guaranteed loans provided to eligible borrowers. 
Thus, the rationale for this relationship is that the program utilizes private lenders to provide mortgages, or homeownership opportunities, for improved or more suitable housing to moderate-income rural residents who would be unable to obtain credit without the guarantee. Provide students and their families with federally sponsored loans—using a streamlined student loan system that simplifies loan access and allows for flexible repayment—to help borrowers meet increasing postsecondary education costs and to reduce taxpayer costs. Examples of general goals and performance measures related to the program’s intended purposes: Maintain a high level of borrower satisfaction. Rate of borrowers’ overall satisfaction with the program during the first year. The rationale for the relationship between this goal and measure and helping borrowers meet increasing postsecondary education costs is that: — providing loans using a streamlined system will simplify borrowers’ access to postsecondary education loans, thereby helping to satisfy those borrowers in simplifying their ability to meet increasing postsecondary education costs; and — using a streamlined system that allows for flexible repayment will help ease borrowers’ debt burden, thereby also helping to satisfy those borrowers in helping them meet increasing postsecondary education costs. Provide flexible repayment options so that debt burden is eased and defaults are minimized. Cohort default rate. The general goal defines the program’s strategy and intended purpose, and Education identified this goal and measure as related. The rationale for the relationship between the measure and intended purpose is that borrowers whose loans are placed into a default status are likely encountering debt burden and thus having difficulty meeting postsecondary costs. Once a loan is put into default status, debt collection activities are initiated, which are costly to the government and thus the taxpayer. 
In addition, by using a streamlined system that allows for flexible repayment, the program will help ease borrowers’ debt burden, thereby helping those borrowers meet increasing postsecondary education costs. Education’s direct student loan program (continued): Ensure access to capital for postsecondary education. Successfully implement and manage the direct student loan program. Provide students and their families with federally sponsored loans to help meet increasing postsecondary education costs. Ensure access to capital for postsecondary education. Examples of general goals and performance measures related to the program’s intended purposes: Continue to provide a streamlined loan delivery system to attract schools to participate. Institutional direct loan program participation rate. This goal describes the strategy for achieving this intended purpose. In addition, this goal and measure describe an intermediate result the program expects to lead to achieving this intended purpose. Specifically, the program uses schools as the vehicles for providing loans to students and their families. By providing a streamlined loan delivery system, the program expects to increase schools’ participation, which is an intermediate result the program expects will lead to broader student access to capital for postsecondary education. Maintain a high level of school satisfaction. Rate of overall satisfaction with the direct student loan program. Institutional retention rate. This goal and these measures describe intermediate results that the program also expects to lead to achieving this intended purpose. As previously noted, the program uses schools as the vehicles for providing loans to students and their families. 
Increasing participating schools’ satisfaction is an intermediate result that the program expects will lead to encouraging those schools to stay as participants in the program; and increasing schools’ retention in the program is an intermediate result that the program expects will lead to broader student access to capital for postsecondary education. Continue to provide strong fiscal management of the program. Number of internal control program weaknesses identified in Education’s financial statement audit. The goal defines the intended purpose, and Education identified this goal and measure as related. Maintain a high level of borrower satisfaction from the time of loan origination through the end of the repayment period. Overall rate of borrower satisfaction with the guaranteed student loan program. The rationale for the relationship between this goal and measure and the intended purpose is that by providing students and their families with loans (through the guaranteeing of loans made by private lenders), the program will be helping to satisfy those borrowers in their ability to meet increasing postsecondary education costs. Ensure access to guaranteed loans in a changing marketplace. Number of borrower complaints. The goal defines the intended purpose, and Education identified the goal and measure as related. The number of borrower complaints is to include data on the number of borrowers who complain to Education about being denied access to a guaranteed postsecondary education loan. Education’s guaranteed student loan program (continued): Successfully deliver and manage the guaranteed student loan program in an efficient and cost-effective manner to help students and their parents meet postsecondary education costs. Examples of general goals and performance measures related to the program’s intended purposes: Provide a program that is cost-effective for the taxpayer. Annual delinquency rate. Annual collection rate. Per unit administrative costs. 
The goal defines the intended purpose, and Education identified this goal and these measures as related. Monitoring the program’s performance in, for example, managing the program’s loan portfolio debt and the program’s administrative costs addresses providing a program that is cost-effective for the taxpayer, or successfully delivering and managing the program in an efficient and cost-effective manner. Major Contributors General Government Division Acknowledgements In addition to those named above, the following individuals made notable contributions to this report: From the Accounting and Information Management Division: Jeff Steinhoff, Director of Planning and Reporting; Dan Blair, Mary Ellen Chervenic, Michael J. Curro, Julie S. Tessauro, and McCoy Williams, Assistant Directors; and Rita A. Grieco and Carolyn Litsinger, Senior Evaluators. From the General Government Division: Joseph S. Wholey, Senior Advisor for Evaluation Methodology; and Stephanie Shipman, Assistant Director. From the Health, Education and Human Services Division: Joseph J. Eglin, Jr., Assistant Director; Paula N. Denman, Senior Evaluator; and Sara E. Edmondson, Senior Social Science Analyst. From the Office of General Counsel: Alan Belkin, Assistant General Counsel; and James M. Rebbe, Attorney. From the Resources, Community, and Economic Development Division: Robert S. Procaccini, Assistant Director. Related GAO Products Managing for Results: Agencies’ Annual Performance Plans Can Help Address Strategic Planning Challenges (GAO/GGD-98-44, Jan. 30, 1998). Managing For Results: Using the Results Act to Address Mission Fragmentation and Program Overlap (GAO/AIMD-97-146, Aug. 29, 1997). The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997). Debt Collection: Improved Reporting Needed on Billions of Dollars in Delinquent Debt and Agency Collection Performance (GAO/AIMD-97-48, June 2, 1997). 
Managing For Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997). Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996). GPRA Performance Reports (GAO/GGD-96-66R, Feb. 14, 1996). Financial Management: Continued Momentum Essential to Achieve CFO Act Goals (GAO/T-AIMD-96-10, Dec. 14, 1995). Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology—Learning From Leading Organizations (GAO/AIMD-94-115, May 1994). Credit Reform: Case-by-Case Assessment Advisable in Evaluating Coverage and Compliance (GAO/AIMD-94-57, July 28, 1994). Federal Credit Programs: Agencies Had Serious Problems Meeting Credit Reform Accounting Requirements (GAO/AFMD-93-17, Jan. 6, 1993).
GAO reviewed major credit agencies' efforts to implement the Government Performance and Results Act of 1993 (GPRA), focusing on: (1) goals and measures established by the selected credit programs that related to the programs' intended purposes; (2) whether the programs had set target levels of performance for assessing their progress in achieving their desired results; (3) the challenges agency officials cited in developing performance information, including goals and measures, for the selected programs and any approaches those programs were taking to address those challenges; and (4) the status of the Federal Credit Policy Working Group's effort to develop common performance measures for federal credit programs. GAO noted that: (1) in their efforts to implement GPRA, the five credit programs established goals and performance measures that appeared to be generally related to the programs' intended purposes; (2) if the selected programs collect accurate corresponding data on their actual performance, they should be able to monitor their progress in achieving desired results on those measures and have fiscal year (FY) 1998 baseline data to use in setting future targets for those measures; (3) although the selected programs have established goals, measures, and targets in their efforts to implement GPRA, GAO identified three general challenges the programs have been facing in developing performance information; (4) according to the Office of Management and Budget (OMB) and the Working Group, comparing results using common measures across credit programs allows program managers and other decisionmakers to identify best practices among those programs that have the potential for improving other credit programs' performance; (5) two general problems have limited the Working Group's progress in developing common performance measures for credit programs; (6) the Working Group anticipated that agencies that administer credit programs could include common financial and 
programmatic measures in their annual performance plans and reports under the Results Act; (7) however, OMB does not intend to require credit agencies to adopt common performance measures when consensus about the appropriateness of such measures has not been achieved; (8) GAO agreed that OMB should not force the use of common measures when concerns about their appropriateness exist, but the Working Group had not resolved those concerns and had not decided how and when those concerns would be addressed; (9) thus, it is unclear whether OMB and the credit agencies will maintain their current level of attention to developing common measures; (10) also unclear is the extent to which agencies that administer credit programs will include common measures for those programs in their annual performance plans that could provide useful information to decisionmakers interested in making performance and cost comparisons; and (11) GAO believes the potential benefits that could be realized from developing common performance measures, where appropriate, underscore the importance of OMB and the credit agencies continuing their efforts to develop and reach consensus on such measures.
Background Medicare, administered by HCFA within the Department of Health and Human Services (HHS), is a health insurance program that covers almost all Americans 65 years old and older and certain individuals under 65 years old who are disabled or have chronic kidney disease. The program, authorized under title XVIII of the Social Security Act, provides protection under two parts. Part A, the hospital insurance program, covers inpatient hospital services, posthospital care in skilled nursing homes, and care in patients’ homes. Part B, the supplementary medical insurance program, covers primarily physician services but also home health care for beneficiaries not covered under part A. Coverage Criteria To qualify for Medicare home health care, a person must be confined to his or her residence (homebound); under a physician’s care; and need part-time or intermittent skilled nursing care and/or physical therapy or speech therapy. The services must be furnished under a plan of care prescribed and periodically reviewed by a physician. Home health benefits covered by Medicare include part-time or intermittent nursing care provided by or under the supervision of a registered nurse; physical, occupational, and speech therapy; medical social services related to the patients’ health problems; and part-time or intermittent home health aide services when provided as an adjunct to skilled nursing or therapy care. Medicare beneficiaries may receive home health care as long as it is reasonable and necessary for the treatment of illness or injury; no limits exist on the number of visits or length of coverage. Medicare does not require copayments or deductibles for home health care. Medicare home health services must be furnished by Medicare-certified HHAs or by others under arrangement with such an agency. Agencies participating in the program must meet specific requirements of the Social Security Act. 
HHAs are reimbursed for the reasonable costs incurred in providing covered visits to eligible beneficiaries up to specified cost limits established for each area of the country. Medicare-certified HHAs are classified into one of three ownership categories. Proprietary HHAs are private, for-profit agencies. Voluntary agencies are private (nongovernmental), nonprofit agencies that are exempt from federal income taxation; for example, Visiting Nurse Associations and Easter Seal Societies. Government agencies are operated by a state or local government. Program Administration HCFA currently administers the home health care program through nine regional home health intermediaries (RHHIs)—eight Blue Cross plans and the Aetna Life and Casualty Insurance Company. These intermediaries serve as a communication channel between HHAs and HCFA, make payments to HHAs for covered services provided to Medicare beneficiaries, and establish and apply payment safeguards to prevent program abuse. Changes in Eligibility Criteria Key to Home Health Growth Changes in the legal and regulatory provisions governing the home health benefit together with changes in HCFA’s policies have played a key role in the increase in the benefit’s use. At Medicare’s inception in 1966, the home health benefit under part A provided limited posthospital care of up to 100 visits per year that required a prior hospitalization of at least 3 days. In addition, the services could only be provided within 1 year after the patient’s discharge and had to be for the same illness. These restrictions were eliminated by the Omnibus Budget Reconciliation Act of 1980. With the implementation of the Medicare inpatient prospective payment system in 1983, the utilization of the home health benefit was expected to grow as patients were discharged from the hospital earlier in their recovery period. However, expenditures changed little over the next 5 years (see fig. 1). 
The Deficit Reduction Act of 1984 reduced the number of intermediaries processing home health claims, and HCFA intensified education of the home health intermediaries to promote more consistency in claims review. Additionally, HCFA instructed the intermediaries to increase the number of claims receiving medical review before payment. This increased review in addition to a requirement for more detailed documentation contributed to an increased claim denial rate—from 3.4 percent in 1985 to 7.9 percent in 1987. A lawsuit was filed in 1988 (Duggan v. Bowen) that struck down HCFA’s interpretation of benefit coverage requirements. As a result of the suit, HCFA revised the Medicare Home Health Agency and Medicare Intermediary manuals in 1989 so that the criteria for coverage of home health visits would be consistent with “part-time or intermittent care,” as required by statute, rather than “part-time and intermittent care,” as HCFA had been interpreting it. This change enabled HHAs to increase the frequency of visits because they no longer had to be intermittent. The requirements were also changed so that patients now qualify for skilled observation by a nurse or therapist if a reasonable potential for complications or possible need to change treatment existed. Further, the benefit now allows maintenance therapy where therapy services are required to simply maintain function rather than the previous criteria that patients show improvement from such services. The 1989 Medicare Home Health Agency Manual changes also required that intermediaries, in order to deny claims on the basis of medical necessity, determine that each denied visit was not medically necessary at the time services were ordered. Before this change, intermediaries were denying all visits beyond what the intermediary judged necessary; for example, denying 10 visits out of 50 visits claimed, if the intermediary could determine that the beneficiary could be adequately treated with 40 visits. 
The intermediary did not need to review each visit. This change has made it more costly for intermediaries to determine whether services are medically necessary and, therefore, fewer claims are denied. The effect of changes in Medicare law, regulations, and policy has been that home health care is now available to more beneficiaries, for less acute conditions, and for longer periods of time. For example, in 1992, approximately one-third of home health beneficiaries entered the program without a hospital stay at some time during the year. Of those who had been hospitalized, only half had a hospital stay in the 30 days before starting home health care. Medicare Beneficiaries Receiving More Home Health Services Since the Medicare Home Health Agency Manual and Medicare Intermediary Manual changes of 1989, the percentage of Medicare beneficiaries receiving home health services and the number of home health visits received per year per home health user have increased significantly. In 1989, 1.7 million beneficiaries (5.6 percent of the Medicare population) received home health care. In 1993, the number of beneficiaries receiving such care increased to 2.8 million (8.8 percent of the Medicare population). Beneficiaries receiving home health services are typically female and over 75 years old; however, the number of disabled beneficiaries under 65 years old receiving services has been growing. (See table II.1 in app. II.) The average number of visits received per home health beneficiary has also increased dramatically since 1989. From 1989 through 1993, the average number of visits received per year more than doubled, from 26 to 57 visits. Over the same period, the median number of visits almost doubled, from 13 to 24 visits (see fig. 2). 
Most of the increase in visits has resulted from an increased use of skilled nursing (average visits increased from 15 per year in 1989 to 26 visits per year in 1993) and home health aide visits (average visits increased from 25 visits per year for beneficiaries who received any aide visits in 1989 to 56 visits per year in 1993). The distribution of visits across home health beneficiaries has become increasingly skewed toward heavy users (see fig. 3). From 1989 to 1993, the percentage of users having more than 60 visits in a year increased from 10.6 percent to 25.7 percent. While beneficiaries who had 60 or fewer visits in 1993 averaged only 20 home health visits (with a median of 15 visits), those with more than 60 visits averaged 163 visits (with a median of 125 visits). The percentage of beneficiaries receiving more than 210 visits in 1 year has also increased, from fewer than 1.0 percent in 1989 to 5.8 percent in 1993. Home Health Industry Expanding Rapidly The home health industry has experienced rapid growth since 1989. The number of Medicare-certified HHAs increased from 5,692 in 1989 to 7,864 at the end of 1994. Growth has occurred mainly in HCFA’s Dallas, San Francisco, and Chicago regions. (See fig. II.1 in app. II for individual state growth data.) Recent HHA growth has primarily taken place in proprietary agencies, while the percentage of more traditional nonprofit home health providers—visiting nurse associations and government agencies—has declined (table 1). In 1989, approximately 35 percent of all Medicare-certified HHAs were proprietary. In 1994, close to 50 percent of all HHAs were in this category. (See fig. II.1 in app. II for state breakdowns.) Growth in proprietary agencies accounted for 83 percent of the increase in the number of HHAs between 1989 and 1994. 
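The widening gap between the average and the median reflects how a small share of heavy users pulls the mean upward while leaving the median largely unchanged. A minimal sketch of that arithmetic, using hypothetical visit counts rather than the report's underlying claims data:

```python
from statistics import mean, median

# Hypothetical annual visit counts (illustrative only, not GAO claims data):
# 90 beneficiaries with modest use, 10 heavy users at 160 visits each.
visits = [10] * 70 + [25] * 20 + [160] * 10

avg = mean(visits)    # pulled upward by the heavy users
mid = median(visits)  # insensitive to how extreme the heavy users are

heavy_share = sum(v for v in visits if v > 60) / sum(visits)
print(f"mean={avg:.1f}, median={mid}, heavy users' share of all visits={heavy_share:.0%}")
```

Here 10 percent of users account for well over half of all visits, so the mean sits far above the median, mirroring the skew the report describes.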
Utilization Varies by Geographic Area and Type of HHA A comparison of average visits per beneficiary receiving home health services in 1993 indicates that beneficiaries in certain HCFA regions—most notably in the Atlanta, Boston, and Dallas regions—receive considerably more services on average than beneficiaries in other areas (see table 2). (Refer to fig. II.3 in app. II for data on total home health visits per Medicare beneficiary by state.) A further breakdown of these figures by ownership category indicates that in all regions, proprietary HHAs provide many more services per case than voluntary or government-run agencies. (See fig. II.2 in app. II for state breakdowns.) On the national level, proprietary agencies have provided a significantly higher number of average visits per home health beneficiary since 1989 (see fig. 4). A recent study noted that some of the regional variation in services may reflect differences in the availability of substitute services. Additionally, the study reported some regional differences in patient characteristics; however, these differences did not seem to have a clear pattern that might partially explain variations in utilization. Another study indicated that regional variation could in part be explained by patient characteristics. For instance, the study found that compared with Medicare home health users nationally, beneficiaries in the East South Central region were more likely to be frail, chronically ill, and in poorer health. The study also noted that home health care in the East South Central region tended to be delivered outside large metropolitan counties and in counties that had unusually high percentages of elderly persons living in poverty (both characteristics associated with higher than average home health use). 
While evidence might suggest that the availability of substitute services and beneficiary case-mix may explain some of the regional variation in utilization of home health services, why proprietary agencies consistently provide more visits in all regions is not clear. To learn more about the differences between care provided by proprietary and other types of HHAs, we conducted an episode-of-care analysis for four diagnoses: diabetes, heart failure, hypertension, and hip fracture. (See app. I for our methodology and app. III for detailed results.) For these diagnoses, proprietary agencies, on average, provided care for the longest period of time and provided the most visits per episode during the period studied (see table 3). Although government-run agencies provided care for similar lengths of time as proprietary agencies, government-run HHAs provided 32 to 45 percent fewer visits to beneficiaries with these four diagnoses. Voluntary agencies, in general, provided care for the shortest period of time for all four diagnoses, but they provided slightly more visits per episode than government-run agencies. Variations in utilization between the different types of HHAs were most notable in cases of diabetes, which is regarded as a chronic problem, and less notable in cases of hip fracture, which is more of an acute problem. Several HCFA and intermediary officials expressed concern that the growing number of proprietary agencies may be generating increased utilization of home health services. They believe that because the beneficiary incurs no cost and little data exist on the effectiveness of different plans of care, HHAs primarily compete by offering greater numbers of services to beneficiaries. Some HHS Office of Inspector General and intermediary officials further believe that the nonprofit HHAs are being forced to offer increasingly more services in order to stay in business. 
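The "percent fewer visits" comparisons above are relative differences measured against proprietary agencies' per-episode averages. A small sketch of that calculation, using hypothetical figures rather than the values in table 3:

```python
def percent_fewer(reference_visits: float, comparison_visits: float) -> float:
    """Percent fewer visits per episode, relative to the reference agency type."""
    return (reference_visits - comparison_visits) / reference_visits * 100

# Hypothetical per-episode averages: proprietary agencies at 50 visits,
# government-run agencies at 30 visits for the same diagnosis.
print(f"{percent_fewer(50, 30):.0f} percent fewer visits")
```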
Benefit Controls Weakened as Utilization Expands

In two reports issued in 1981 and 1986, respectively, we criticized HCFA’s administration of the Medicare home health benefit. We reported that about 27 percent of the visits reviewed at 37 agencies and paid for under the benefit were questionable or improper. We attributed those problems to the vagueness of the coverage criteria (particularly uncertainty over the exact meaning of terms such as homebound and intermittent care), insufficient information being submitted with the claims upon which to base a coverage decision, and poor performance of the intermediaries in reviewing claims. We also noted that other control problems were adversely affecting proper utilization of the home health benefit, including insufficient physician involvement and inadequate monitoring of beneficiary status. In revisiting these issues, we found that while controls had improved during the mid- and late 1980s, they have largely deteriorated since then.

Considerable Room for Interpretation of Coverage Criteria Remains

Homebound Status

The Social Security Act requires that a beneficiary be “confined to the home” (homebound) to be eligible for Medicare home health care. In our 1981 report, we found that determining whether beneficiaries are homebound is difficult due to the inadequacy of the definition provided by HCFA. The report recommended that HCFA’s criteria for determining homebound status be clarified and made more specific. The Omnibus Reconciliation Act of 1987 added a definition of homebound to the Social Security Act using the same wording as the HCFA Home Health Agency Manual definition. Therefore, the definition of homebound remains essentially unchanged and considerable discretion remains in interpreting and applying the homebound definition. The definition provides that “the condition of these patients should be such that there exists a normal inability to leave home and, consequently, leaving their homes would require a considerable and taxing effort.”
The definition adds that “if the patient does in fact leave the home, the patient may nevertheless be considered homebound if the absences from the home are infrequent or for periods of relatively short duration, or are attributable to the need to receive medical treatment.” (See app. IV for a full definition.) Several HCFA and intermediary officials said that few denials are made on the basis that the beneficiary was not homebound. One intermediary official said that the RHHI made fewer than 10 denials a year based on the homebound criteria. An HCFA official further noted that although the RHHIs tend to interpret the homebound criteria fairly consistently, the criteria are so broad that very few claims are denied on the basis that the coverage criteria have not been met. Finally, even if intermediaries do make a denial based on the homebound criteria, so much room for interpretation still exists in the infrequent or short duration requirements that such denials may end up being reversed at the reconsideration or appeals level. A recent study conducted by one of the RHHIs identified some of the types of abuses that are difficult for the intermediary to prevent because of the range of interpretations possible for the homebound criterion. For example, the study identified an instance where a physician called the RHHI to complain that some of his patients were being told by an HHA that they were homebound because they did not own a car. The study also revealed an example of a home health beneficiary who would put her home health care on hold so that she could go fishing for a week or two. She would then come back and resume her care.

Intermittent Care

The Medicare Home Health Manual sets the parameters of the term intermittent in two ways.
The first pertains to beneficiary eligibility requirements; to meet the requirement for intermittent skilled nursing care, an individual must have a “medically predictable recurring need for skilled nursing services.” In most instances, the definition will be met if a patient requires a skilled nursing service at least once every 60 days. In contrast, a person expected to need more or less full-time skilled nursing care over an extended period of time would usually not qualify for home health benefits because he or she needs a higher level of care. The second parameter of intermittent pertains to the frequency of visits allowed by Medicare in a given time frame and is usually used together with the term part-time. According to the Medicare Home Health Agency Manual, intermittent care is defined as

- up to and including 28 hours per week of skilled nursing and home health aide services combined, provided on a less than daily basis;

- up to 35 hours per week of skilled nursing and home health aide services combined that are provided on a less than daily basis, subject to review by fiscal intermediaries on a case-by-case basis, and determined on the basis of documentation justifying the need for and reasonableness of such additional care; or

- up to and including full-time (that is, 8 hours per day) skilled nursing and home health aide services combined that are provided and needed 7 days per week for temporary but not indefinite periods of time of up to 21 days, with allowances for extensions in exceptional circumstances where the need for care in excess of 21 days is finite and predictable.

Because a range of interpretations is possible for intermittent, the requirement is difficult to enforce. For example, individuals can be provided intermittent services (for example, blood tests or periodic skilled observation) every 60 days simply to qualify for aide services on a long-term basis.
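The weekly-hour parameters above amount to a simple screening rule. The sketch below expresses that rule in Python; the function name and return strings are ours, and it is only a first-pass screen, since actual determinations rest on intermediary judgment and supporting documentation.

```python
def intermittent_screen(hours_per_week: float, daily: bool) -> str:
    """Screen combined skilled nursing and home health aide hours against the
    part-time/intermittent parameters of the Medicare Home Health Agency
    Manual (illustrative sketch only)."""
    if not daily:
        if hours_per_week <= 28:
            return "within the standard weekly limit"
        if hours_per_week <= 35:
            return "allowable subject to case-by-case intermediary review"
        return "exceeds the part-time/intermittent parameters"
    # Daily care (7 days a week) up to full-time (8 hours a day, 56 hours a
    # week) is allowable only for temporary periods of up to 21 days, with
    # extensions in exceptional, finite, and predictable circumstances.
    if hours_per_week <= 56:
        return "allowable only for a temporary period (up to 21 days)"
    return "exceeds the part-time/intermittent parameters"

print(intermittent_screen(20, daily=False))
print(intermittent_screen(32, daily=False))
print(intermittent_screen(56, daily=True))
```

The enforcement difficulty discussed above lies outside any such rule: a claim can satisfy these hour thresholds on paper while the underlying services were never intermittent in substance.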
Under the part-time or intermittent coverage rules, determining whether someone who needs daily care for an extended period meets the intermittent requirement or might require institutional care is difficult. Moreover, without further review, it is not possible to determine whether daily care itself is really necessary. During our recent investigation of a large home health organization, for example, employees alleged instances where managers instructed nurses to visit new patients daily for the first 14 or 21 days of care regardless of condition—intermediaries usually do not question daily visits during the first 21 days of care.

Less Information Is Available to Intermediaries for Making Coverage Decisions

In August 1985, HCFA implemented standardized medical information forms for HHAs to use in requesting payment from intermediaries. These plan-of-care and update forms, which were to be submitted with the initial claim and the claim closest to the recertification date 60 days later, gave medical reviewers more detailed information on each beneficiary’s general physiological condition, homebound status, functional limitations, nutritional requirements, services prescribed, and services received. The additional information was intended to increase the accuracy and consistency of coverage decisions. Under HCFA’s current instructions, however, “These forms are no longer submitted routinely with the initial claim or other subsequent claim. The completed HCFA-485, signed by the physician, is retained in the HHA files and a copy of the signed form is submitted when requested for medical review. The HCFA-486 is completed only when required for medical review.” Reviewers, moreover, are instructed to “assume that the type and frequency of services ordered are reasonable and necessary unless objective clinical evidence clearly indicates otherwise, or there is a lack of clinical evidence to support coverage.” Because the current billing form alone does not supply adequate information to make this type of determination, most bills are paid without question.
Little Medical Review Is Done

The regional home health intermediaries are responsible for procedures to assure that they only make payments for home health services that are covered by Medicare and avoid paying for services that are (1) provided to beneficiaries who do not meet Medicare home health criteria, (2) not reasonable or medically necessary, or (3) in excess of the services called for by the approved plan of treatment. Currently, the RHHI’s primary procedure for detecting noncovered services is medical review of claims.

Prepayment Reviews

The Consolidated Omnibus Budget Reconciliation Act of 1985 more than doubled the funds available for medical review and audit of home health and other Medicare claims. This allowed intermediaries to increase the number of medical reviews performed; they conducted medical reviews on 62 percent of home health claims processed in fiscal years 1986 and 1987. The increased number of claims subjected to medical review resulted in more denials and higher denial rates even though the percentage of claims being denied during medical review did not increase significantly. For example, in both 1985 and 1987, intermediaries denied about 10 percent of the claims subjected to medical review. However, because over twice as many claims were subjected to medical review in 1987, there were over twice as many denials. As a result, the HCFA-reported denial rate was 7.9 percent in 1987 compared with 3.4 percent in 1985. Due to budget cuts since 1989, however, intermediaries are now required to conduct medical reviews (pre- and postpayment) on a target of 3.2 percent of all claims, including home health claims. At the same time, home health claims volume increased from 5.5 million claims in 1989 to 16.6 million claims in 1994.
Of the 3.7 percent of home health claims denied in fiscal year 1994, only 0.6 percent were denied because the services were determined, through medical review, to not be medically necessary or because the beneficiary did not meet the qualifying coverage criteria. As a result of decreased review, HHAs are less likely to be caught if they abuse the home health benefit. An HCFA official noted that HHAs are aware that the intermediary only reviews a small number of claims and, therefore, can take chances billing for noncovered services. As long as they do not trigger the criteria that would cause the claim to be flagged, HHAs can submit abusive claims that will never be reviewed. Besides covering so few claims, prepayment medical review is limited in its ability to detect noncovered care in that it is simply a paper review done at the offices of the RHHI. According to HCFA and intermediary officials, it is often not possible to obtain enough information from a paper review alone, no matter how complete the medical records submitted, to determine whether a provider is abusing the benefit or committing fraud. If the codes are valid, the forms filled out properly, and no unusual patterns are identified during the focused medical review (FMR) process, the claim goes through. For example, our investigation of a large home health organization turned up allegations that staff were directed to alter or falsify medical records to ensure continued or prolonged visits, including recording visits that were never made or noting that patients were homebound even after they were no longer confined to the home. To further illustrate, an intermediary official noted that sometimes the wrong diagnosis is put on the claim form to make beneficiaries appear sicker than they really are and, thus, in need of more care.
Postpayment Review

Postpayment utilization review differs from prepayment review in that its principal focus is on identifying HHAs that are providing significant amounts of noncovered care rather than on identifying services provided to specific beneficiaries. In 1982, HCFA implemented a selective postpayment utilization review program that has cost-effectively identified extensive noncovered services paid for by Medicare. The essential component of postpayment review, comprehensive medical review (CMR), is a thorough postpayment evaluation of claims and medical documentation that may involve an audit at the provider’s site. On-site audits give the reviewer access to the information in the provider’s records, including plans of care and documentation of visits. According to records obtained from HCFA, only 51 on-site audits were conducted by the nine RHHIs combined in fiscal year 1994. Thus, fewer than 1 percent of all Medicare-certified HHAs were audited. Intermediaries are required to perform 10 on-site CMRs each year for all provider types, including, for example, outpatient, skilled nursing, and rehabilitation facilities. An HCFA representative noted that CMRs are so resource intensive that they may be done only in instances where a high level of return is expected. Because HHA claims may comprise a relatively small portion of an intermediary’s total claims volume, the intermediary may not do any home health CMRs. One of the best ways to verify information provided by the HHA is to visit beneficiaries at home. Beginning in 1984, intermediaries were required to make visits to a sample of five beneficiaries at targeted agencies to assess coverage status; however, this requirement was subsequently dropped due to cuts in contractor funding. In March 1995, HCFA revised the Medicare Intermediary Manual to say that intermediaries may perform visits to selected beneficiary homes but they are not required to do so.
According to officials at the intermediaries visited, only one of the three was doing any beneficiary visits as part of its CMRs. A proposed sampling procedure for CMRs involves selecting a valid statistical sample of claims from agencies suspected of abusive practices and extrapolating the denial rate (and therefore payment recovery rate) in the sample to similar claim types during the same period. In our 1986 report, we suggested that by using statistically valid sampling techniques, such as those being used to estimate physician overpayments under Medicare part B, overpayments to HHAs for noncovered services could be projected to all claims submitted by the agency during the sampling period and could result in millions of dollars in additional recoveries. In addition, we recommended that HCFA require intermediaries to use such procedures. However, RHHIs are currently not required to use a projectable sample of home health visits to extend recoveries—recoveries are, therefore, limited to the cost of actual services reviewed and denied. HCFA is circulating a new draft sampling plan that delineates the methodology for selecting a representative sample. However, previous attempts to implement statistically valid postpayment sampling have not been successful, primarily due to opposition from the home health industry and other health care providers.

Physicians Not Actively Involved in Monitoring Patient Care

With the enactment of the Medicare program, it was expected that the physician would play an important role in determining utilization of services. Medicare law and regulations, therefore, require that home health items and services must be furnished under a plan of treatment established and periodically reviewed by a physician. HCFA requires that the plan be reviewed and recertified in writing by the attending physician at least every 62 days. The physician is expected but not required to see the patient.
Few data exist about the current nature of physician involvement in home health care. Concerns have been raised, based on audits of certain HHAs and anecdotal reports, that physicians are not appropriately involved in planning and coordinating home health services. For example, both HCFA and intermediary officials expressed concern that HHAs were preparing the plans of treatment and the physicians were signing them with little or no review. A recent report issued by HHS’ Office of the Inspector General (OIG) that was based on a survey of physicians and HHAs around the country found that physicians generally have a relationship with patients for whom they sign plans of care. Physicians usually reported initiating referrals for home care and reviewing the plans of care that they sign; however, most do not prepare the plans of care themselves. The report also found that physicians were most involved when caring for patients with complex medical problems and were less involved when caring for patients with chronic or less complex conditions. Thus, physicians frequently are not aware of the ongoing HHA services being provided to patients and billed to the Medicare program. HHS’ Inspector General pointed out the importance of recognizing that physicians usually do not make home visits themselves to monitor the HHA services provided and do not directly manage the care that a patient receives from an HHA. An intermediary official noted that some physicians feel that because they are ordering nonmedical services, which will generally not harm the patient, not much review is required. The 1993 Aetna of Florida pilot study revealed examples of different levels of physician involvement. In one instance, a physician wrote that he took every Friday off to spend the whole day reviewing home health plans of care. Another physician, who received 100 plans of care a week, wrote a letter to his intermediary reprimanding it for asking him to read the plans of care. 
Our investigation of a large home health organization found that physicians typically rely on nurses’ verbal recommendations, written recommendations, or both. We also noted allegations that physicians’ signatures were forged and plans of care were altered after certification without the physicians’ knowledge. To compensate physicians for the time spent on preparing and reviewing home health plan-of-care forms, HCFA issued a new regulation in 1994 providing separate payment for physician care plan oversight services. As of January 1995, HCFA began allowing participating physicians to be paid for oversight requiring at least 30 minutes. Currently, the payment rate is approximately $81 per patient.

Physicians and Beneficiaries Not Aware of Services Billed

Neither the beneficiaries receiving nor the physicians ordering home health services are sent information about which services Medicare has paid for. Beneficiaries do not receive an explanation of benefits because they are not billed for in-home services. Therefore, neither the physician nor the beneficiary has any way of knowing whether Medicare is paying the HHA for services not rendered or whether the home health services are provided according to the plan of care.

Denied Claims Likely to Be Paid Under Waiver of Liability

Under the waiver-of-liability provision of the Social Security Act (§ 1879), Medicare will pay for denied services if the beneficiaries and providers did not know and had no reason to know that the services were not medically reasonable and necessary or were based on the need for custodial rather than skilled care. In implementing this provision, HCFA generally presumed that HHAs did not know services were not covered as long as their number of denials did not exceed 2.5 percent of total visits billed. When a provider exceeded the 2.5-percent rate in a calendar quarter, Medicare would not reimburse the provider for denied services, usually for the next 3-month period.
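The waiver presumption described above amounts to a simple quarterly rate test. The sketch below expresses it in Python with illustrative figures; the function name and the example numbers are ours, not HCFA's.

```python
def waiver_presumed(denied_visits: int, total_visits_billed: int) -> bool:
    """Under HCFA's presumption, an HHA is assumed not to have known that
    services were noncovered while its denial rate for the calendar quarter
    stays at or below 2.5 percent of total visits billed (sketch only)."""
    return denied_visits <= 0.025 * total_visits_billed

# Illustrative quarters for a high-volume agency billing 10,000 visits:
print(waiver_presumed(200, 10_000))  # 2.0 percent: denied services still paid under waiver
print(waiver_presumed(300, 10_000))  # 3.0 percent: waiver presumption lost for the next quarter
```

As the report notes, because so few claims are reviewed, a high-volume agency's denominator dwarfs its denial count, so this test almost never trips.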
According to statistics obtained from HCFA, in fiscal year 1994 approximately half of all claims denied for lack of medical necessity or for not meeting the coverage criteria were eligible for waiver. Of those eligible for waiver, 73 percent were ultimately paid. In fiscal year 1994, the total amount reimbursed under waiver was approximately $45.5 million. Because so few claims are reviewed and so few technical and medical necessity denials are made, most providers, especially those who submit large numbers of claims, would never exceed the 2.5-percent rate threshold. In an earlier report, we noted that savings could be realized by changing the waiver-of-liability rules and recommended that HCFA establish more stringent eligibility requirements for the application of waiver of liability for health care providers under part A of Medicare.

HCFA Striving to Address Problems

In response to the changing climate surrounding home health care, the Administrator of HCFA convened an internal task force in the spring of 1994 (the Medicare Home Health Initiative) to examine the home health benefit from both a policy and an operations perspective. As of September 1995, the task force has held four open workgroup meetings at which HCFA officials solicited ideas and suggestions for benefit improvement from physician organizations, representatives of beneficiary groups, the home health industry, state governments and their Medicaid agencies, and others. The task force has also issued a draft revision of the conditions of participation, developed a pamphlet to better inform beneficiaries of what services are covered, and developed draft sampling instructions for postpayment utilization review.
Further, the task force has implemented a four-state pilot program to investigate providing home health beneficiaries with claims information, begun pilots of team on-site medical review of HHAs, revised the Medicare Intermediary Manual to allow unannounced on-site audits, and implemented a two-state pilot program involving training state surveyors to assess patient eligibility as a part of HHA annual surveys. These efforts by HCFA are commendable and should help somewhat in gaining control of the use of the home health benefit. However, as discussed earlier in this report, HCFA cannot address many of the major problems, such as the changes in the manuals made in response to a court decision that make it harder to control use of services, and the shortage of funds to perform program safeguard activities.

Conclusions

The Medicare home health program is judged by HCFA as being very difficult to control. While quantifying how much of the recent growth in home health care is due to abuse of the benefit is not possible, lax benefit controls leave the door open for abuses such as overutilization to occur. While HCFA has made some notable attempts to remedy several specific problems, a number of fundamental issues remain. For example:

- In response to a court decision, HCFA revised its requirements for determining Medicare home health eligibility. The revisions made it possible for more beneficiaries to qualify for Medicare home health services and more HHAs to receive payment for higher numbers of visits and for longer periods of care.

- Historically, part A of Medicare’s home health benefit was directed at acute conditions after hospitalization. While many beneficiaries still use the benefit in this way, an increasing number of beneficiaries are receiving visits that are more directed at long-term care for chronic conditions.
- Physicians tend to depend on HHAs to design plans of care, especially for less complex cases, and agencies as a rule have incentives to furnish as many visits as possible. This combination can lead to the overprovision of services.

- Medicare has reduced on-site audits and reviews so that HHAs have less incentive to follow Medicare rules. The percentage of claims that are reviewed has decreased from over 60 percent in 1987 to approximately 3 percent in 1994. We have testified on a number of occasions that program safeguard activities are cost effective, returning close to $14 in savings for each $1 invested in 1994, and cuts in payment safeguard areas translate into increased program losses from fraud, waste, and abuse.

- When claims volume increases and medical review of claims declines, intermediaries’ ability to detect and prevent erroneous payments is substantially lessened. Further, even when claims are denied, they are often paid because the HHA qualified for a waiver of liability.

- It is nearly impossible for intermediaries to assess from paper review alone whether a beneficiary meets the eligibility criteria, whether the services received are appropriate given the beneficiary’s current condition, and whether the beneficiary is actually receiving the services billed to Medicare. Coverage criteria, such as confined to the home or intermittent, are not meaningful when the HHAs are in effect the only ones monitoring beneficiaries.

Matters for Consideration by the Congress

The emphasis of Medicare’s home health benefit program has recently shifted from primarily posthospital acute care to more long-term care. At the same time, HCFA’s ability to manage the program has been severely weakened by coverage changes mandated by court decisions and a decrease in the funds available to review HHAs and the care they provide.
The Congress may wish to consider whether the Medicare home health benefit should continue to become more of a long-term care benefit or if it should be limited primarily to a posthospital acute care benefit. The Congress should also consider providing additional resources so that controls against abuse of the home health benefit can be better enforced.

Agency Comments

We provided HHS an opportunity to comment on our draft report, but it did not provide comments in time for them to be included in the final report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after its issue date. At that time, we will send copies to the Secretary of HHS, the Administrator of HCFA, interested congressional committees, officials who assisted our investigation, and other interested parties. Copies also will be made available to others on request. Please contact me on (202) 512-6808 if you have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix V.

Scope and Methodology

Our work was done primarily at HCFA headquarters. We also visited three regional home health intermediaries (in Chicago, Illinois; Milwaukee, Wisconsin; and Clearwater, Florida) and two HCFA regional offices (Chicago and Atlanta) to obtain workload and performance data, information concerning RHHI claims review operations, and an update on HCFA’s implemented and planned program changes. We also interviewed officials at HHS’ Office of the Inspector General in Baltimore and Atlanta. We reviewed pertinent laws, regulations, court decisions, and HCFA policies to identify changes in eligibility determination and medical review practices. And we reviewed studies related to home health benefit utilization and control issues.
To identify home health growth patterns and variations in utilization, we analyzed data from Medicare’s Provider of Service and Home Health National Claims History files. These data include information on all paid claims for the period 1989 through 1993. We used data from the Provider of Service file to determine agency growth through time and across geographic regions and to identify provider ownership type. And we used the Medicare claims data to calculate mean and median home health visits, by total and by each type of service, broken out by geographic area and HHA ownership types. While the average visits per year provides a general indication of variations in utilization of home health services, it does not indicate the length of each individual’s episode of care nor does it provide a picture of the intensity of services provided during this time. To obtain a more in-depth look at variations in practice patterns, both across regions and among various types of HHAs, we conducted an episode-of-care analysis for four diagnoses: diabetes, heart failure, hypertension, and hip fracture. The first three diagnoses were selected because they are among the most common primary diagnoses associated with home health care. Hip fracture was selected because it is generally regarded as having a more predictable pattern of treatment with a more finite end point. We selected beneficiaries with one of the above primary diagnoses who began receiving home health services in 1992. We then tracked beneficiaries’ visits up to 210 days after their episode start date. The principal sources of our automated data were Medicare paid claims data systems, which are subject to periodic HCFA reviews and examinations. HCFA relies on the data obtained from these systems as evidence of Medicare-covered services and expenditures and to support its management and budgetary decisions.
For this reason, we did not independently examine the internal and automatic data processing controls for automated systems from which we obtained data used in our analyses. With this exception, we conducted our work in accordance with generally accepted government auditing standards between July 1994 and December 1995.

Detailed Data Tables

[State-level data tables not reproduced here; see figs. II.2 and II.3 in app. II for average and median visits per beneficiary by state and ownership type.]
Figure II.2: Average and Median Number of Visits per Beneficiary per Year, 1989 and 1993

Figure II.3: Home Health Visits per Medicare Beneficiary, 1993, in Descending Order

Episode-of-Care Analysis

The following figures present tables (figs. III.1 to III.4) that show the average length of episode and the average number of visits per episode for patients with a primary diagnosis of diabetes, heart failure, hypertension, and hip fracture for the different types of HHAs. Length of episode refers to the average period of time during which a beneficiary receives care, and visits per episode refers to the average number of home health services a beneficiary receives during that time. We examined episodes of care beginning during 1992. For these episodes we tracked care throughout 1992 and 1993. Much variation in both lengths of episode and average number of visits per episode can be seen among the different types of agencies for these four diagnoses. For example, on a national level, proprietary agencies provided an average of 53 visits to beneficiaries with diabetes over an average period of 64 days. Government agencies, on the other hand, provided an average of 29 visits to diabetic beneficiaries over a similar period of time. The variation in utilization between the different types of agencies is less pronounced in cases of hip fracture, which may be regarded as an acute condition, than in cases of diabetes, heart failure, and hypertension, which may be regarded as more chronic conditions. Variations in utilization are also seen across geographic regions. For example, beneficiaries diagnosed with hypertension receiving care in the Atlanta or Dallas regions received more care for longer periods of time than beneficiaries in other regions with the same diagnosis. (See fig. III.3.)
HHAs in these two regions, on average, consistently provided more care for cases of diabetes, heart failure, and hypertension, while HHAs in the Boston region provided the most care to beneficiaries with hip fracture. Some of the variation between regions may be explained by case-mix differences and the availability of alternative sources of care, and some of the differences are probably due to geographic variations in practice patterns. Table III.5 shows the average number of two types of visits provided to beneficiaries: skilled nursing visits and home health aide visits. Again, proprietary agencies provided more of these types of services for all diagnoses. For example, in cases of hypertension, proprietary agencies provided almost twice as many skilled nursing visits as voluntary agencies during a beneficiary's episode of care.

Figure III.2: Average Episode Length and Visits per Episode—Heart Failure, 1992-93

Figure III.3: Average Episode Length and Visits per Episode—Hypertension, 1992-93

Figure III.4: Average Episode Length and Visits per Episode—Hip Fracture, 1992-93

HCFA Definition of Confined to the Home

§ 204.1: Medicare Home Health Intermediary Manual (HCFA Publication 11)

A. Patient Confined to His Home--In order for a beneficiary to be eligible to receive covered home health services under both Part A and Part B, the law requires that a physician certify in all cases that the beneficiary is confined to his home. (See § 240.1.) An individual does not have to be bedridden to be considered as confined to his home. However, the condition of these patients should be such that there exists a normal inability to leave home and, consequently, leaving their homes would require a considerable and taxing effort. If the patient does in fact leave the home, the patient may nevertheless be considered homebound if the absences from the home are infrequent or for periods of relatively short duration, or are attributable to the need to receive medical treatment. Absences attributable to the need to receive medical treatment include attendance at adult day centers to receive medical care, ongoing receipt of outpatient kidney dialysis, and the receipt of outpatient chemotherapy or radiation therapy. It is expected that in most instances absences from the home which occur will be for the purpose of receiving medical treatment. However, occasional absences from the home for nonmedical purposes, e.g., an occasional trip to the barber, a walk around the block or a drive, would not necessitate a finding that the individual is not homebound so long as the absences are undertaken on an infrequent basis or are of relatively short duration and do not indicate that the patient has the capacity to obtain the health care provided outside rather than in the home. Generally speaking, a beneficiary will be considered to be homebound if he has a condition due to an illness or injury which restricts his ability to leave his place of residence except with the aid of supportive devices such as crutches, canes, wheelchairs, and walkers, the use of special transportation, or the assistance of another person, or if he has a condition which is such that leaving his home is medically contraindicated.
Some examples of homebound patients which are illustrative of the factors to be taken into account in determining whether a homebound condition exists would be: (1) a beneficiary paralyzed from a stroke who is confined to a wheelchair or who requires the aid of crutches in order to walk; (2) a beneficiary who is blind or senile and requires the assistance of another person in leaving his place of residence; (3) a beneficiary who has lost the use of his upper extremities and, therefore, is unable to open doors, use handrails on stairways, etc., and, therefore, requires the assistance of another individual in leaving his place of residence; (4) a patient who has just returned from a hospital stay involving surgery, who may be suffering from resultant weakness and pain and whose actions may therefore be restricted by his physician to certain specified and limited activities such as getting out of bed only for a specified period of time, walking stairs only once a day, etc.; (5) a patient with arteriosclerotic heart disease of such severity that he must avoid all stress and physical activity; and (6) a patient with a psychiatric problem if his illness is manifested in part by a refusal to leave his home environment or is of such a nature that it would not be considered safe for him to leave his home unattended, even if he has no physical limitations. The aged person who does not often travel from his home because of feebleness and insecurity brought on by advanced age would not be considered confined to his home for purposes of receiving home health services unless he meets one of the above conditions. A patient who requires speech therapy but does not require physical therapy or nursing services must also meet one of the above conditions in order to be considered as confined to his home.
Although a patient must be confined to his home to be eligible for covered home health services, some services cannot be provided at the patient's residence because equipment is required which cannot be made available there. If the services required by an individual involve the use of such equipment, the home health agency may make arrangements with a hospital, a [skilled nursing facility], or a rehabilitation center to provide these services on an outpatient basis. (See § 200.2 and § 206.5.) However, even in these situations, for the services to be covered as home health services the patient must be considered as confined to his home; and to receive such outpatient services it may be expected that a homebound patient will generally require the use of supportive devices, special transportation, or the assistance of another person to travel to the appropriate facility. If for any reason a question is raised as to whether an individual is confined to his home, the agency will be requested to furnish the intermediary with the information necessary to establish that the beneficiary is homebound as defined above.

B. Patient's Place of Residence--A patient's residence is wherever he makes his home. This may be his own dwelling, an apartment, a relative's home, a home for the aged, or some other type of institution. However, an institution may not be considered a patient's residence if it:

1. Meets at least the basic requirement in the definition of a hospital, i.e., it is primarily engaged in providing, by or under the supervision of physicians, to inpatients, diagnostic and therapeutic services for medical diagnosis, treatment, and care of injured, disabled, or sick persons, or rehabilitation services for the rehabilitation of injured, disabled, or sick persons; or 2.
Meets at least the basic requirement in the definition of a [skilled nursing facility], i.e., it is primarily engaged in providing to inpatients skilled nursing care and related services for patients who require medical or nursing care, or rehabilitation services for the rehabilitation of injured, disabled, or sick persons. All nursing homes that participate in Medicare and/or Medicaid as skilled nursing facilities, and most facilities that participate in Medicaid as intermediate care facilities, meet this basic requirement. In addition, many nursing homes which do not choose to participate in Medicare or Medicaid meet this test. Check with your fiscal intermediary or Medicare regional office before serving nursing home patients. Thus, if an individual is a patient in an institution or distinct part of an institution which provides the services described in (1) or (2) above, he is not entitled to have payment made for home health services under either Part A or Part B since such an institution may not be considered his residence. When a patient remains in a participating [skilled nursing facility] following his discharge from active care, the facility may not be considered his residence for purposes of home health coverage.

GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

The following team members also contributed to this report: Adrienne S. Friedman, Senior Evaluator; MaryEllen Fleischman, Computer Specialist; and Mary W. Reich, Attorney Advisor.

Related GAO Products

Medicare: Allegations Against ABC Home Health Care (GAO/OSI-95-17, July 19, 1995).

Medicare: Increased Denials of Home Health Claims During 1986 and 1987 (GAO/HRD-90-14BR, Jan. 24, 1990).

Medicare: Need to Strengthen Home Health Care Payment Controls and Address Unmet Needs (GAO/HRD-87-9, Dec. 2, 1986).

Savings Possible by Modifying Medicare's Waiver of Liability Rules (GAO/HRD-83-38, Mar. 4, 1983).
The Elderly Should Benefit From Expanded Home Health Care but Increasing These Services Will Not Insure Cost Reductions (GAO/IPE-83-1, Dec. 7, 1982).

Response to the Senate Permanent Subcommittee on Investigations' Queries on Abuses in the Home Health Care Industry (GAO/HRD-81-84, Apr. 24, 1981).

Medicare Home Health Services: A Difficult Program to Control (GAO/HRD-81-155, Sept. 25, 1981).

Home Health Care Services—Tighter Fiscal Controls Needed (GAO/HRD-79-17, May 15, 1979).
Pursuant to a congressional request, GAO examined the growth in the use of Medicare home health benefits, focusing on the: (1) changes in the home health industry; (2) composition of Medicare home health users; (3) differences in utilization of home health benefits across geographic areas; (4) incentives to overuse Medicare home health benefits; and (5) effectiveness of payment controls in preventing payment for services not covered by Medicare. GAO noted that: (1) the growth in Medicare's home health benefits resulted from less restrictive Health Care Financing Administration (HCFA) guidelines issued in 1989; (2) 2.8 million Medicare beneficiaries received home health services in 1993, up from 1.7 million in 1989; (3) during the same period, the average number of home health care visits doubled from 26 visits per year in 1989 to 57 visits per year in 1993; (4) more than 25 percent of home health beneficiaries received at least 60 visits per year; (5) between 1989 and 1994, the number of Medicare-certified home health agencies grew from 5,692 to 7,864; (6) proprietary home health agencies provided beneficiaries with 78 visits per year, while voluntary and government agencies provided beneficiaries with 46 visits per year; (7) home health beneficiaries with the same diagnosis received more visits from proprietary agencies than from nonprofit agencies; and (8) Medicare's home health services could be better controlled by subjecting claims to medical review and audit, having intermediaries and physicians visit beneficiaries, and determining whether beneficiaries are qualified for such services and actually need and receive the services billed to Medicare.
Background

The Berry Amendment generally prohibits DOD from using appropriated or other available funds for the procurement of certain items that have not been grown, reprocessed, reused, or produced in the United States. Enacted in a 1941 defense appropriations act, the restriction initially ensured that American troops wore uniforms and ate food grown or produced in the United States. For more than 50 years, the Berry Amendment consistently appeared in annual appropriations acts. The scope of the restriction has changed over time to include additional items and exceptions. The current version, codified in 2001, restricts DOD purchases of food, clothing, certain fabrics, specialty metals, and certain tools. The specialty metals requirement was added to the Berry Amendment in 1972. DOD implemented the specialty metals requirement by applying it to all contracts where the specialty metal is purchased directly by the government or the prime contractor and to all subcontract tiers for six major classes of programs: aircraft, missiles, ships, tank-automotive, weapons, and ammunition. For these programs, the prime contractor must include a clause that requires all subcontractors to comply with the Berry Amendment's specialty metals requirement. In addition, the Defense Federal Acquisition Regulation Supplement (DFARS) identifies the metals considered to be specialty metals, including titanium, certain types of steel, and other assorted metals and alloys. In 1996, Congress made clear that the Berry Amendment applies to all commercial item purchases. The Berry Amendment includes a number of exceptions to the requirement to buy certain domestically produced articles. For example, the requirement does not apply to the extent that the Secretary of Defense or the Secretary of the military department concerned determines that satisfactory quality and sufficient quantity of an item cannot be procured as and when needed at U.S. market prices.
Since May 2001, DOD policy has specified that the authority to approve a Berry Amendment waiver is not delegable below the Secretarial level and that the waiver is to include an analysis of alternatives and a certification as to why such alternatives are unacceptable. Additional exceptions to the Berry Amendment are allowed for items already determined to be unavailable in the United States and for specialty metals purchased from a qualifying country, i.e., one that has signed a memorandum of understanding with the United States. The Berry Amendment does not include explicit criteria to be used or requirements to be met to support and document a waiver. The Air Force's internal policy, the Air Force Federal Acquisition Regulation Supplement, provides instruction as to what information Air Force decision makers would generally expect to be provided when asked to approve a Berry Amendment waiver. The Air Force policy calls for the contracting officer to conduct market research to determine if an article or suitable substitute is available from a domestic source. If the article or substitute is not available, the contracting officer contacts Air Force headquarters, which in turn confers with the Department of Commerce (Commerce) to request a list of possible domestic sources. If Air Force headquarters notifies the contracting officer that Commerce has not identified any domestic sources, Air Force policy specifies that the contracting officer shall submit a determination and finding in a specified format for the Secretary of the Air Force's approval. This format is to describe the market research performed; any alternatives or substitutes considered and why they are not satisfactory; the total estimated cost of the item(s) being acquired; the circumstances precluding the purchase of a domestic end item; and the impact if the waiver is not approved.
Air Force Waiver Lacked Thorough Analysis

The Air Force did not conduct a thorough analysis of opportunities for compliance with the Berry Amendment on a system-by-system basis in approving a broad, permanent waiver covering 23 commercial derivative aircraft systems. The Air Force initiated the waiver at the headquarters level after it became aware of problems with implementing the Berry Amendment. In supporting the waiver, the Air Force did not conduct market research as called for in its policy, thoroughly review alternatives, or explain why it believed that alternatives did not exist for each of the systems in the waiver. We identified several instances that highlight the Air Force's lack of thoroughness in its analysis to support the waiver.

Air Force Initiated Waiver after Identifying Problems with Berry Amendment Implementation

According to a senior Air Force official, the Deputy Assistant Secretary (Contracting) and several Air Force officials met with titanium industry representatives in November 2002 to discuss their concerns that some aircraft manufacturers were not meeting the Berry Amendment requirement for domestic specialty metals. Subsequently, the Air Force formed an Integrated Product Team in March 2003 to study the history and requirements of the Berry Amendment's specialty metals provision and to review the Air Force's compliance. This team reviewed Air Force Materiel Command contracts and uncovered a number of contracts that lacked the clause implementing the Berry Amendment. The Air Force buying commands attempted to negotiate with contractors to add the required contract clause to those contracts. However, many commercial derivative aircraft contractors refused to accept the specialty metals provision, which would require all contracts and subcontracts related to aircraft programs to comply with the Berry Amendment.
In the summer of 2003, the Air Force official who led the waiver effort told us he visited an aircraft manufacturer, two of its subcontractors (including a titanium producer), and an engine manufacturer to evaluate the difficulty of complying with the Berry Amendment specialty metals requirement. Following these visits, the Air Force official concluded that other contractors involved in the Air Force’s acquisition and support of commercial derivative aircraft systems would also have difficulty complying with the Berry Amendment. According to Air Force officials, they initiated the waiver process at the headquarters level instead of following the established procedure of receiving individual requests from field contracting officers involved in acquiring or supporting these systems. Officials stated that this method was intended to ensure a consistent and comprehensive approach to supporting the waiver. Air Force headquarters collected supporting documentation that included letters from contractors and memos from the military users of commercial derivative aircraft systems. These companies indicated it would be “commercially impracticable” or otherwise not possible to comply with the Berry Amendment. In addition, memos from representatives of the military users of the aircraft indicated that the alternatives presented to them were not feasible. In September 2003, the Secretary of the Air Force signed a temporary Berry Amendment waiver, effective through April 1, 2004, which covered future aircraft deliveries under current acquisition contracts, as well as current and future support contracts, for 19 commercial derivative aircraft systems. 
In doing so, the Secretary of the Air Force made several findings, including the following:

- Contractors stated they could not comply with the Berry Amendment's specialty metals restriction "without substantial changes to their manufacturing and supplier management processes," which would "cause substantial, largely unquantifiable, cost and schedule impacts."

- Pursuing Berry Amendment compliance could make contractors' commercial products less competitive in the worldwide market.

- The systems at issue are produced on the same production lines used to support the commercial marketplace and generally comprise a minute portion of the contractors' overall commercial business.

- Several contractors informed the Air Force they would no longer accept contracts if the provisions implementing the Berry Amendment were included.

On the basis of these findings, the Secretary of the Air Force determined that compliant commodities for certain commercial derivative aircraft systems could not be acquired as and when needed in satisfactory quality and sufficient quantity at U.S. market prices, that the waiver was needed to sustain ongoing operations of these systems and avoid major mission impacts, and that the waiver would be of limited duration while Congress considered changes to the Berry Amendment in the fiscal year 2004 legislative cycle. However, these legislative changes did not occur, and in April 2004 the Secretary signed a permanent waiver covering 23 commercial derivative aircraft systems, which included 4 additional systems, exempting all of them from the Berry Amendment requirements. The permanent waiver relied on the same findings as the temporary waiver.

Air Force Analysis Lacked Market Research and a Thorough Review of Alternatives

The Air Force policy identifies the need to conduct market research prior to proceeding with a Berry Amendment waiver.
According to the policy, the Air Force is to request a list of possible domestic sources from the Department of Commerce and draft a market research report indicating what companies were contacted. The Air Force acquisition official who drafted the policy told us that market research also includes advertising in official government sources for contracting opportunities. Officials from Commerce’s Bureau of Industry and Security and International Trade Administration informed us that there was no record of the Air Force requesting Commerce’s assistance in identifying domestic sources for the support of commercial derivative aircraft on the waiver. While this waiver encompasses 23 different aircraft systems and certain related acquisition and support contracts, the Air Force did not conduct market research on each system included in the waiver. A senior Air Force acquisition official told us that it was unnecessary to conduct market research for each system because Air Force officials were knowledgeable about the aerospace industry and did not need to contact the Department of Commerce for assistance. Another senior official who led the waiver effort indicated that the original aircraft manufacturer owned the technical data rights and, in some cases, was the primary supplier of these spare parts. Therefore, this official believed that in some instances it would be difficult and costly to purchase technical data rights so suppliers other than the original aircraft manufacturer and its subcontractors could produce the parts. Moreover, this same Air Force official became convinced that no company could provide compliant spare parts after site visits to an aircraft manufacturer, which accounted for 11 systems in the waiver, and two of its suppliers (including a titanium producer) as well as an engine manufacturer. However, these findings were not documented in the waiver. 
DOD and Air Force policies also specify the need to identify alternatives and explain why such alternatives are unacceptable. In May 2001, the Deputy Secretary of Defense directed that each military department’s Secretary ensure that alternatives that do not require a waiver under the Berry Amendment are presented to the relevant military users before requesting a waiver. The military users must certify in writing why such alternatives are unacceptable before the Secretary may approve a waiver. The Air Force policy calls for similar information. To address DOD and Air Force policy requirements, the Air Force included 13 memos from military user representatives in the waiver’s supporting documentation, representing 22 of the 23 aircraft systems on the waiver. As specified in Air Force policy, most of these memos address the impact on the system if the waiver is not approved and state that the compliant alternatives had been considered. Specifically, memos representing 18 aircraft systems state that they had considered compliant alternatives and rejected them as not feasible, without stating what those alternatives were. Memos for 3 aircraft systems make no reference to whether alternatives had been considered. Only 1 memo representing a single aircraft system contains an assessment of a potential alternative and the delay it would cause to the aircraft’s mission if selected, although other Air Force documentation indicated that the alternative would not satisfy the Berry Amendment requirement. Though most of the memos state that alternatives had been considered, we found that in several instances military users and their representatives who prepared the memos were not presented with alternatives. A senior Air Force official who led the waiver effort acknowledged that the military users’ memos contain boilerplate language about the consideration and rejection of alternatives that would be compliant with the Berry Amendment. 
Air Force program management officials, contracting officers, military users, and a senior acquisition official told us that the Air Force did not identify and pursue compliant alternatives because they did not believe any were available. For example, in many instances, contracting officers and program managers stated that the only realistic option was to pursue a Berry Amendment waiver. However, the waiver documentation lacked an explanation of why the Air Force did not believe any alternatives were available.

Air Force Did Not Consider Possible Compliant Options

The Air Force missed opportunities to assess possible compliant options. For instance, the Air Force and Boeing had entered into a contract, referred to as the Rights Guard agreement, that could allow the Air Force to order technical data for military derivatives of the Boeing 707, 727, 737, and 747 commercial aircraft and to use that data to facilitate the competitive procurement of replenishment spare parts. This contract was in effect at the time the waiver was being considered and covered 8 of the 23 systems on the waiver, representing 636 (or 51 percent) of the commercial derivative aircraft in the waiver. The senior Air Force official who led the waiver effort, and a field contracting official who oversaw support contracts for almost 90 percent of the aircraft on the Rights Guard agreement, told us they did not consider this contract as a means to acquire parts that would be compliant with the Berry Amendment. Further, this senior acquisition official was unaware that the contract applied to several Boeing models included in the waiver. While this contract would not have resolved the compliance issues for all of the aircraft systems listed on the waiver, this official acknowledged it might have allowed the Air Force to achieve compliance for a limited number of spare parts procurements for certain systems.
The Air Force also did not question the contractors' inability to be compliant on military unique spare parts. For example, we previously reported on the Air Force award of a $7.9 million contract to Boeing in September 2003 for 24 engine cowlings used on the E-3 Airborne Warning and Control System (AWACS), a Boeing 707 aircraft modified for military use. These engine cowlings were similar to those used on the commercial 707, but were modified to meet military requirements. Boeing proposed to manufacture these engine cowlings rather than subcontracting the work as it did in the original E-3 AWACS production contracts. This required the company to include in its contract proposal the cost of acquiring production equipment to manufacture these parts. The temporary waiver of the Berry Amendment that included the E-3 AWACS was issued at the same time that the Air Force awarded the engine cowlings contract. However, the Air Force did not question Boeing's inability to produce compliant cowlings in-house. The waiver documentation did not include any discussion or other indication that the Air Force questioned company assertions that it could not meet Berry Amendment requirements, specifically for military unique items. In addition, the Air Force did not fully evaluate the cost of bringing contractors into compliance. Although one company's representatives said that compliance would be costly, the Air Force did not validate what the actual costs would be and did not assess whether the cost of complying would be similar for the other manufacturers of commercial derivative aircraft. For example, Gulfstream officials said that they performed a high-level review, which was provided to Air Force contracting officers, showing that about 0.2 percent of the total value of aircraft parts on the C-37A originates in countries not exempt from the Berry Amendment.
However, the Air Force did not validate this estimate or determine the cost or effort necessary for Gulfstream or any other similarly situated contractor to achieve compliance. Finally, the Air Force did not consider its leverage as the primary customer of the T-6 aircraft, given that the U.S. government accounted for 364 of the 435 aircraft ordered as of August 2005, with planned purchases of an additional 782 aircraft through 2015. The Air Force will also need to purchase spare parts for the life of the aircraft system. According to Raytheon, the company selects and establishes a supplier base during the design, development, and testing of its commercial aircraft, resulting in suppliers being certified by the Federal Aviation Administration. However, the Air Force did not ask Raytheon what steps it would need to take, and what costs would be involved, to comply with the Berry Amendment requirement.

Air Force Did Not Recognize Some Systems Were Already Covered under Other Regulatory Exceptions

By not conducting a system-by-system review, the Air Force was unaware that some systems were already covered under other regulatory exceptions to the Berry Amendment. For example, one of the exceptions allows specialty metals to be procured from a qualifying country. The TG-15 support contract was already exempt from the Berry Amendment specialty metals restriction because this training glider was manufactured in Germany, a qualifying country. In another example, senior Air Force officials were not aware that the TG-10 and TG-14 support contracts were already covered under the regulatory exception for certain foreign manufactured equipment. This exception allows DOD to purchase spare and replacement parts for foreign manufactured equipment when domestic parts are deemed unavailable.
Air Force contracting officials in the field had previously determined that spare parts for these two training gliders were unavailable domestically, as the aircraft are manufactured in the Czech Republic and Brazil. The support contract for these systems was modified to cite this exception 6 months before they were added to the permanent waiver. Air Force officials did not consider any of these other regulatory exceptions prior to including the training gliders in the waiver. Only after we identified that the training gliders were already exempted did acquisition officials consult with the contracting officials at Oklahoma City Air Logistics Center, those responsible for managing the support contract, to determine whether these exceptions had ever been considered. Had the Air Force done so before finalizing the permanent waiver, it may have discovered that these training gliders were already covered by other regulatory exceptions. This illustrates the Air Force's lack of thoroughness in not coordinating the waiver with all of the appropriate contracting officials in the field.

Conclusions

The Berry Amendment was enacted to strengthen the industrial base to ensure that it could produce essential items for defense purposes. Although the Department of Defense relies on commercial products to satisfy some of its military requirements, it remains responsible for assessing opportunities to satisfy the requirements of the Berry Amendment. The Air Force's failure to follow established policies and its decision to combine 23 aircraft systems in one waiver diminished the persuasiveness of the waiver's support. By not thoroughly analyzing each system on the waiver, the Air Force treated all systems as if they had the same compliance problems, when in fact several of the systems had unique circumstances that should have been considered and documented before a waiver was approved.
Additionally, the Air Force did not fully document its position on the lack of alternatives and has limited the possibility of future review concerning these systems through the execution of a permanent waiver.

Recommendation for Executive Action

Because the Air Force did not thoroughly analyze each system on the waiver or fully document its position on the lack of alternatives, we are making two recommendations to DOD so that it can improve the waiver’s support or modify it as necessary. Specifically, we recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following two actions:

Conduct an analysis of each commercial derivative aircraft system included in the waiver to consider opportunities to achieve compliance with the Berry Amendment requirements or to document why such compliance is not possible. This should include conducting market research, including consultation with the Department of Commerce, and assessing alternatives such as obtaining technical data rights to manufacture compliant parts, identifying compliant suppliers for military unique parts, determining the cost or effort for bringing contractors into compliance, and considering if systems are already exempted under other regulatory exceptions.

Assess, on a periodic basis, whether changes have occurred in the supplier base for each aircraft system included in the waiver that would provide opportunities to procure domestically produced items as required by the Berry Amendment.

Agency Comments and Our Evaluation

In written comments on a draft of this report, DOD concurred with both of our recommendations. In response, DOD will direct the Air Force to conduct an analysis of each commercial derivative aircraft system included in the Berry Amendment waiver and to periodically assess whether changes have occurred in the supplier base that would provide opportunities to procure domestically produced items.
In addition, DOD provided comments from the Air Force, which indicated the Air Force’s concurrence with our recommendations and its intent to develop a plan to review the current waiver and rescind or modify it as appropriate. DOD and Air Force responses are reprinted in appendix II. We incorporated the Air Force’s technical comments in the report as appropriate. In its general comments, the Air Force stated that the waiver is reasonable and necessary and that the draft report fails to acknowledge the circumstances and rationale that compelled it to execute the waiver. The Air Force also indicated that the report did not clearly articulate the scope of the current waiver, which covers future spare parts purchases, but does not include future aircraft purchases. While the Air Force stated that the waiver is reasonable and necessary, our report shows that the Air Force did not follow established policy when it did not thoroughly analyze the opportunities for compliance on a system- by-system basis. Had it conducted market research and thoroughly reviewed alternatives for each system on the waiver, the Air Force could have strengthened the persuasiveness of the waiver’s support. We are encouraged that the Air Force has concurred with our recommendation to reevaluate the support for each of the systems on the waiver. The Air Force also stated that our report did not acknowledge the circumstances and rationale for the waiver. We disagree with this assertion. Our first finding discusses at length the reasons the Air Force considered a waiver necessary and outlines the waiver’s rationale based on the Air Force’s supporting documentation. While we agree that it was necessary for the Air Force to promptly address Berry Amendment compliance issues, this should not have precluded the Air Force from conducting a thorough analysis on how to achieve compliance on a system-by-system basis, especially during the 6-month period that the temporary waiver was in force. 
In addition, the Air Force indicated that the report did not clearly articulate the scope of the current waiver. Although the draft report correctly described the scope of the waiver, we made changes throughout the report to specify and emphasize that the scope of the waiver covers future aircraft deliveries under current acquisition contracts and current and future support contracts. The waiver does not apply to commercial derivative aircraft systems not listed on the waiver or future contracts for systems on the waiver entered into after the waiver’s effective date. We are sending copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Preston M. Geren, Acting Secretary of the Air Force; and interested congressional committees. We will also provide copies to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or calvaresibarra@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Commercial Derivative Aircraft Included in the Air Force Waiver

Appendix II: Comments from the Department of Defense and the Air Force

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, John Neumann, Assistant Director; Noah Bleicher; Greg Campbell; Jeffrey Hartnett; Robert Lee; Lillian Slodkowski; and Adam Vodraska made key contributions to this report.
In April 2004, the Secretary of the Air Force approved a permanent waiver of the requirements of the Berry Amendment for 23 commercial derivative aircraft systems, representing more than 1,200 aircraft in the Air Force's inventory. The Berry Amendment generally requires the Department of Defense (DOD) to purchase certain domestically grown or produced items, including specialty metals used in defense systems such as aircraft. Waivers to the Berry Amendment can be granted under certain circumstances. GAO was asked to evaluate the supporting evidence and analysis that the Air Force relied on to waive the Berry Amendment. GAO did not conduct a legal analysis of the waiver. The Air Force did not follow established policy when evaluating the need for a waiver of the Berry Amendment for 23 commercial derivative aircraft systems. Specifically, the Air Force did not thoroughly analyze the opportunities for compliance with the Berry Amendment on a system-by-system basis, thereby diminishing the persuasiveness of the waiver's support. The Air Force's review of its compliance with the Berry Amendment regarding these systems began in early 2003 when it became aware that some aircraft manufacturers could not meet the Berry Amendment requirements. Faced with this problem, a senior Air Force acquisition official visited an aircraft manufacturer, two of its subcontractors (including a titanium producer), and an engine manufacturer. The Air Force's conclusion, based on these visits and knowledge of the aerospace industry, was that other contractors involved in the Air Force's acquisition and support of commercial derivative aircraft systems would also have difficulty complying with the Berry Amendment. In September 2003, the Secretary of the Air Force signed a temporary waiver that was initiated at the headquarters level and covered 19 systems. That was followed in April 2004 with a permanent waiver of the Berry Amendment for these 19 systems plus another 4. 
Air Force policy calls for certain actions before issuing a waiver, including conducting market research and conducting an analysis of what alternatives are available and why they are not acceptable. In this instance, the Air Force did not conduct market research for each system, as it believed no company could produce compliant parts--a position not explained in the waiver's supporting documents. The Air Force documented an analysis of alternatives for only 1 aircraft system in the waiver. Memos representing 18 other aircraft systems state that alternatives to the waiver had been considered and rejected as not feasible but did not identify what the alternatives were, while memos for 3 additional aircraft systems make no reference to whether alternatives had been considered. The Air Force provided no documentation about its analysis of alternatives for the 1 remaining aircraft system in the waiver. After discussions with representatives for all 23 aircraft systems, GAO concluded that the Air Force did not document alternatives or thoroughly review possible options to achieve compliance with the Berry Amendment for many of the aircraft systems. GAO has identified several instances that highlight the Air Force's lack of thoroughness in its waiver process for the 23 aircraft systems. For example, the Air Force did not question contractors' inability to provide compliant spare parts when they were military unique and therefore not the same as the parts used in commercial aircraft. Also, the Air Force included some aircraft systems in the waiver that were already covered under other regulatory exceptions to the Berry Amendment.
Background

Medicare’s claims processing contractors pay health care providers and beneficiaries and are reimbursed for their administrative expenses incurred in performing the work. Over the years, the Health Care Financing Administration (HCFA) has consolidated some of Medicare’s operations, and the number of contractors has fallen from a peak of about 130 to about 70 in 1996. Generally, intermediaries are the contractors that handle part A claims submitted by “institutional providers” (hospitals, skilled nursing facilities, hospices, and home health agencies); carriers are those handling part B claims submitted by physicians, laboratories, equipment suppliers, and other practitioners. HCFA’s efforts to guard against inappropriate payments have been largely contractor-managed operations, leaving the fiscal intermediaries and carriers broad discretion over how to protect Medicare program dollars. As a result, there are significant variations in contractors’ implementation of Medicare’s payment safeguard policies. Medicare’s managed care program covers a growing number of beneficiaries—nearly 5 million in 1996—who have chosen to enroll in a health maintenance organization (HMO) to receive their medical care rather than purchasing services from individual providers. The managed care program, which is funded from both the part A and part B trust funds, consists mostly of risk contract HMOs and enrolled about 4 million Medicare beneficiaries in 1996. The HMOs are paid a monthly amount, fixed in advance, by Medicare for each beneficiary enrolled. In this sense, the HMO has a “risk” contract because regardless of what it spends for each enrollee’s care, the HMO assumes the financial risk of providing health care within a fixed budget. HMOs profit if their cost of providing services is lower than the predetermined payment but lose if their cost is higher than the payment.
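The capitation mechanics described above reduce to a simple payment-minus-cost calculation for each enrollee. A minimal sketch, assuming hypothetical dollar figures (the testimony gives no per-enrollee amounts, and the function name is invented for illustration):

```python
def hmo_monthly_result(capitation_payment, cost_of_care):
    """Under a Medicare risk contract, the HMO keeps the fixed monthly
    per-enrollee payment regardless of what it actually spends, so its
    gain or loss is simply payment minus cost (illustrative only)."""
    return capitation_payment - cost_of_care

# Hypothetical figures: a $400 monthly capitation payment.
gain = hmo_monthly_result(400.0, 350.0)   # HMO profits when care costs less
loss = hmo_monthly_result(400.0, 450.0)   # HMO absorbs the loss when it costs more
```

The asymmetry is the point of the "risk" contract: Medicare's outlay is fixed in advance, and the HMO, not the trust funds, bears the variance in care costs.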
Recent Funding, Other Initiatives Revitalize Waning Efforts to Review Claims, Deter Abuse

Over the last 7 years, HCFA and its claims processing contractors have struggled to carry out critical claims review and provider audit activities with a budget that, on a per-claim basis, was seriously declining. For example, between 1989 and 1996, the number of Medicare claims climbed 70 percent to 822 million, while during that same period, claims review resources grew less than 11 percent. Adjusting for inflation and claims growth, the amount contractors could spend on review shrank from 74 cents to 38 cents per claim.

Implications of Reduced Funding for Payment Safeguards

Consider the effect of inadequate funding on reviewing home health claims. After legislation in 1985 more than doubled claims review funding, contractors did medical necessity reviews for 62 percent of the home health claims processed in 1986 and 1987. By 1989, however, contractors’ claims review target had been lowered to 3.2 percent. One HCFA official noted that home health agencies are aware that their Medicare intermediary reviews only a small number of claims and, therefore, they can take chances billing for noncovered services. The plunge in the number of cost report audits has also weakened Medicare’s efforts to avoid paying excessive costs. Providers subject to these audits are those paid under Medicare’s cost-based reimbursement systems—such as hospital outpatient departments, skilled nursing facilities, and home health agencies. These providers are reimbursed on the basis of the actual costs of providing services, rather than on charges. Each year, cost-based providers submit reports that detail their operating costs throughout the preceding year and specify the share related to the provision of Medicare services.
Using this information, the intermediaries determine how much Medicare should reimburse the provider institutions, some of which have received interim Medicare payments throughout the year based on estimates of expected costs. Without an audit of the provider’s cost report, however, the intermediary can only reconcile the figures provided and cannot determine the appropriateness of the costs reported. In practice, only a fraction of providers is subject to audits. Between 1991 and 1996, the chances, on average, that an institutional provider would be audited fell from about 1 in 6 to about 1 in 13.

The Impact of Recent Legislation and Other Initiatives

With the passage of the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the cycle of declining funding for anti-fraud-and-abuse activities has been broken. For fiscal year 1997, the act boosts the contractors’ budget for program safeguard activities to 10 percent higher than it was in 1996; by 2003, the level will be 80 percent higher than in 1996, after which it remains constant. These additional amounts, however, essentially stabilize per-claim safeguard expenditures at about 1996’s level. For example, we project that payment safeguard spending for 2003 will be just over one-half the level of 1989 spending after adjusting for inflation. HIPAA also contains other important anti-fraud-and-abuse provisions, including: establishing a program run jointly by the Department of Justice and HHS to coordinate federal, state, and local law enforcement efforts against fraud in Medicare and other health care payers; establishing a national health care fraud data collection program; and enhancing penalties and establishing health care fraud as a separate criminal offense. Another important fraud-fighting effort is the 2-year, multiagency project called Operation Restore Trust. Participating agencies include the HHS Inspector General, HCFA, and the Administration on Aging, as well as the Department of Justice and various state and local agencies.
The project targets Medicare abuse and misuse in the areas of home health, nursing homes, and medical equipment and supplies. In its first year, Operation Restore Trust reported recovering $42.3 million in inappropriate payments: $38.6 million were returned to the Medicare trust fund and $3.7 million to the Treasury as a result of these efforts. It also resulted in 46 convictions, 42 fines, and the exclusion of 119 fraudulent providers from program participation. In addition, many of the targeted home health agencies were decertified. Operation Restore Trust is scheduled to be closed out as a demonstration project in May 1997. This effort, as well as HCFA’s progress in adopting fraud and abuse detection software and its development of a national provider tracking system, is discussed further in our high risk report.

Management Problems Also Affect Medicare Payments and Operations

Notwithstanding funding increases, several problems independent of adequate funding and related to HCFA’s oversight of Medicare have implications for curbing unnecessary spending and conducting program operations effectively. One chronic problem is that HCFA has not coordinated contractors’ payment safeguard activities. For example, as was anticipated when the program was set up, part B carriers establish their own medical policies and screens, which are the criteria used to identify claims that may not be eligible for payment. Certain policies and the screens used to enforce them have been highly effective in helping some Medicare carriers avoid making unnecessary or inappropriate payments. However, the potential savings from having these policies and screens used by all carriers have been lost, as HCFA has not adequately coordinated their use among carriers.
For example, for just six of Medicare’s top 200 most costly services in 1994, the use of certain carriers’ medical policy screens by all of Medicare’s carriers could have saved millions to hundreds of millions of dollars annually. However, HCFA’s leadership has been absent in this area, resulting in the loss of opportunity to avoid significant Medicare expenditures. In addition, several technical and management problems have hampered HCFA’s acquisition of the Medicare Transaction System (MTS), a major claims processing system that aims at consolidating the nine different claims processing systems Medicare currently uses. First, HCFA had not completely defined its requirements 2 years after awarding a systems development contract. Second, HCFA’s MTS development schedule has had significant overlap among the various system-development phases, increasing the risk that incompatibilities and delays will occur. Finally, HCFA has conducted the MTS project without adequate information about the system’s costs and benefits. Before MTS is completed, HCFA must oversee several essential information management transitions in the Medicare claims processing environment. One involves the shifting of claims processing workloads from contractors who leave the program to other remaining contractors. Similar workload shifts in the past have produced serious disruptions in processing claims promptly and accurately, delays in paying physicians, and the mishandling of some payment controls. A second issue involves HCFA’s plan to consolidate Medicare’s three part A and six part B systems into a single system for each part. This plan will require several major software conversions. A third issue involves the “millennium” problem—revising computerized systems to accommodate the year-digit change to 2000.
HCFA does not yet have plans for monitoring contractors’ progress in making their systems “millennium compliant.”

Medicare Managed Care Incurs Separate Risks

Risk contract HMOs, Medicare’s principal managed care option, bear their own set of risks for taxpayers and beneficiaries. These plans currently enroll about 10 percent of Medicare’s population and have shown rapid enrollment growth in recent years. Because HMOs have helped private sector payers contain health care costs and limit the excess utilization encouraged by fee-for-service reimbursement, these HMOs have cost-control appeal for Medicare, while offering potential advantages to beneficiaries. However, as we recently testified, a methodological flaw in HCFA’s approach to paying HMOs has produced excess payments for some plans. Moreover, because higher HMO enrollment produces higher excess payments, enrolling more beneficiaries in managed care could increase rather than lower Medicare spending unless the method of setting HMO rates is revised. A second problem, of particular concern to beneficiaries, is that HCFA has been lax in enforcing HMO compliance with program standards, while not keeping beneficiaries adequately informed of the benefits, costs, and performance of competing HMOs. In 1995, we reported that, despite efforts to improve its HMO monitoring, HCFA conducted only paper reviews of HMOs’ quality assurance plans, examining only the description rather than the implementation of HMOs’ quality assurance processes. Moreover, the agency was reluctant to take action against noncompliant HMOs, even when there was a history of abusive sales practices, delays in processing beneficiaries’ appeals of HMO decisions to deny coverage, or patterns of poor quality care. HCFA also misses the opportunity to supplement its HMO regulatory efforts by not keeping the Medicare beneficiary population well-informed about competing HMOs.
As we reported in 1996, HCFA has a wealth of data, collected for program administration and contract oversight purposes, that it does not package or disseminate for consumer use. For example, HCFA does not provide beneficiaries with any of the comparative consumer guides that the federal government and other employer-based health insurance programs routinely distribute to their employees and retirees. Instead, HCFA collects information only for its internal use—records of each HMO’s premium requirements and benefit offerings, enrollment and disenrollment data (monthly reports specifying for each HMO the number of beneficiaries that joined and left that month), records of enrollees’ complaints, and results of certification visits to HMOs. By not publishing disenrollment rates or other comparative performance measures, HCFA misses an opportunity to show beneficiaries which plans have a good record and hinders HMOs’ efforts to benchmark their own performance.

Initiatives Intended to Address Risk Contract Program Problems

HCFA acknowledges the problems we identified in Medicare’s risk contract program. To tackle the difficulties in setting HMO payment rates, HCFA has been conducting several demonstration projects that examine ways to modify or replace the current method of determining HMO payment rates. In addition, HIPAA gives HCFA more flexible sanction authority, such as suspending an HMO’s right to enroll Medicare beneficiaries until deficiencies are corrected, while providing HMOs the statutory right to develop and implement a corrective action plan before HCFA imposes a sanction. Providing the information in an electronic format rather than in print, however, may make it less accessible to the very individuals who would find it useful. The information, according to HCFA, will have to be “downloaded and customized for local consumption.” HCFA expects the primary users of this information to be beneficiary advocates and Medicare insurance counselors.
HCFA is also planning a survey to obtain beneficiaries’ perceptions of their managed care plans and does not expect preliminary results before the end of 1997. In another key initiative, HCFA is helping to develop a new version of the Health Plan Employer Data and Information Set (HEDIS 3.0) that will incorporate measures relevant to the elderly population. The measures will enable comparisons to be made among plans of the enrollees’ use of such prevention and screening services as flu shots, mammography, and eye exams for diabetics. As of January 1997, Medicare HMOs are required, from the time they renew their contract, to report on HEDIS 3.0 clinical effectiveness measures. HCFA intends to summarize the results and include them in comparability charts currently being developed.

Conclusion

Many of Medicare’s vulnerabilities are inherent in its size and mission, making it a perpetually attractive target for exploitation. That wrongdoers continue to find ways to dodge safeguards illustrates the dynamic nature of fraud and abuse and the need for constant vigilance and increasingly sophisticated ways to protect against gaming the system. Judicious changes in Medicare’s day-to-day operations involving HCFA’s improved oversight and leadership, its appropriate application of new anti-fraud-and-abuse funds, and the mitigation of MTS acquisition risks are necessary ingredients to reduce substantial future losses. Moreover, as Medicare’s managed care enrollment grows, HCFA must work to ensure that payments to HMOs better reflect the cost of beneficiaries’ care, that beneficiaries receive information about HMOs sufficient to make informed choices, and that the agency’s expanded authority to enforce HMO compliance with federal standards is used. To adequately safeguard the Medicare program, HCFA needs to meet these important challenges promptly. This concludes my statement. I am happy to take your questions.
Contributors

For more information on this testimony, please call Donald C. Snyder, Assistant Director, on (202) 512-7204. Other major contributors to this statement included Thomas Dowdal and Hannah F. Fein.

Related GAO Products

High Risk Series Reports on Medicare
Medicare (GAO/HR-97-10).
Medicare Claims (GAO/HR-95-8).
Medicare Claims (GAO/HR-93-6).

Medicare Fee-for-Service
Medicare: Home Health Utilization Expands While Program Controls Deteriorate (GAO/HEHS-96-16, Mar. 27, 1996).
Medicare: Millions Can Be Saved by Screening Claims for Overused Services (GAO/HEHS-96-49, Jan. 30, 1996).
Medicare Transaction System: Strengthened Management and Sound Development Approach Critical to Success (GAO/T-AIMD-96-12, Nov. 16, 1995).
Medicare: Allegations Against ABC Home Health Care (GAO/OSI-95-17, July 19, 1995).
Medicare: Commercial Technology Could Save Billions Lost to Billing Abuse (GAO/AIMD-95-135, May 5, 1995).
Medicare: New Claims Processing System Benefits and Acquisition Risks (GAO/HEHS/AIMD-94-79, Jan. 25, 1994).

Medicare Managed Care
Medicare HMOs: HCFA Could Promptly Reduce Excess Payments by Improving Accuracy of County Payment Rates (GAO/T-HEHS-97-78, Feb. 25, 1997).
Medicare: HCFA Should Release Data to Aid Consumers, Prompt Better HMO Performance (GAO/HEHS-97-23, Oct. 22, 1996).
GAO discussed efforts to fight fraud and abuse in the Medicare program. GAO noted that: (1) it is not surprising that because of the program's size, complexity, and rapid growth, Medicare is a charter member of GAO's high risk series; (2) in this year's report on Medicare, GAO is pleased to note that both the Congress and the Health Care Financing Administration, the Department of Health and Human Services' agency responsible for running Medicare, have made important legislative and administrative changes addressing chronic payment safeguard problems that GAO and others have identified; and (3) however, because of the significant amount of money at stake, GAO believes that the government will need to exercise constant vigilance and effective management to keep the program protected from financial exploitation.
Background

AIG is an international insurance organization serving customers in more than 130 countries. As of June 30, 2011, AIG reported assets of $616.8 billion and revenues of $34.1 billion for the preceding 6 months. AIG companies serve commercial, institutional, and individual customers through worldwide property/casualty networks. In addition, AIG companies provide life insurance and retirement services in the United States.

Regulation of the Company

Federal, state, and international authorities regulate AIG and its subsidiaries. Until March 2010, the Office of Thrift Supervision (OTS) was the consolidated supervisor of AIG, which was a thrift holding company by virtue of its ownership of the AIG Federal Savings Bank. As the consolidated supervisor, OTS was charged with identifying systemic issues or weaknesses and helping ensure compliance with regulations that govern permissible activities and transactions. The Federal Reserve System was not a direct supervisor of AIG. Its involvement with the company was through its responsibilities to maintain financial system stability and contain systemic risk that may arise in financial markets. AIG’s domestic life and property/casualty insurance companies are regulated by the state insurance regulators in the state in which these companies are domiciled. The primary state insurance regulators include New York, Pennsylvania, and Texas. These state agencies regulate the financial solvency and market conduct of these companies, and they have the authority to approve or disapprove certain transactions between an insurance company and its parent or its parent’s subsidiaries. These agencies also coordinate the monitoring of companies’ insurance lines among multiple state insurance regulators.
For AIG in particular, these regulators have reviewed reports on liquidity, investment income, and surrender and renewal statistics; evaluated potential sales of AIG’s domestic insurance companies; and investigated allegations of pricing disparities. Finally, AIG’s general insurance business and life insurance business that are conducted in foreign countries are regulated by the supervisors in those jurisdictions.

AIG’s Financial Difficulties

AIG’s financial difficulties stemmed primarily from two sources:

• Securities lending. Until 2008, AIG had maintained a large securities lending program operated by its insurance subsidiaries. The securities lending program allowed insurance companies, primarily AIG’s life insurance companies, to lend securities in return for cash collateral, which was then invested in assets such as residential mortgage-backed securities (RMBS).

• Credit default swaps. AIG had been active, through its AIG Financial Products Corporation (AIGFP) unit, in writing insurance-like protection called credit default swaps (CDS) that guaranteed the value of collateralized debt obligations (CDOs).

In September 2008, the Board of Governors of the Federal Reserve System (Federal Reserve Board), the Federal Reserve Bank of New York (FRBNY), and Treasury determined that market events could cause AIG to fail. According to officials from these entities, AIG’s failure would have posed systemic risk to financial markets. Consequently, the Federal Reserve System and Treasury took steps to help ensure that AIG obtained sufficient funds to continue to meet its obligations and could complete an orderly sale of operating assets and close its investment positions in its securities lending program and AIGFP. From July through early September 2008, AIG faced increasing liquidity pressure following a downgrade in its credit ratings in May 2008, which was due in part to losses from its RMBS investments.
The company was experiencing declines in the value and market liquidity of the RMBS assets that served as collateral for its securities lending operation, as well as declining values of CDOs against which AIGFP had written CDS protection. These losses in value forced AIG to use an estimated $9.3 billion of its cash reserves in July and August 2008 to provide capital to its domestic life insurers following losses in their RMBS portfolios and to post additional collateral required by the trading counterparties of AIGFP. AIG attempted to secure private financing in September 2008 but was unsuccessful. On September 15, 2008, credit rating agencies downgraded AIG’s debt rating, which resulted in the need for an additional $20 billion to fund its added collateral demands and transaction termination payments. Following the credit rating downgrade, an increasing number of counterparties refused to transact with AIG for fear that it would fail. Also around this time, the insurance regulators decided they would no longer allow AIG’s insurance subsidiaries to lend funds to the parent company under a credit facility that AIG maintained, and they demanded that any outstanding loans be repaid and that the facility be terminated. In September 2008, another large financial services firm—Lehman Brothers Holdings, Inc. (Lehman)—was on the brink of bankruptcy. As events surrounding AIG were developing over the weekend of September 13–14, 2008, Federal Reserve System officials were also addressing Lehman’s problems. On September 15—the day before the Federal Reserve Board voted to authorize FRBNY to make an emergency loan to AIG—Lehman filed for bankruptcy. Stock prices fell sharply, with the Dow Jones Industrial Average and the Nasdaq market losing 504 points and 81 points, respectively.

Federal Assistance to AIG

Because of concerns about the effect of an AIG failure, in 2008 and 2009, the Federal Reserve System and Treasury agreed to make $182.3 billion available to assist AIG.
First, on September 16, 2008, the Federal Reserve Board, with the support of Treasury, authorized FRBNY to lend AIG up to $85 billion through a secured revolving credit facility that AIG could use as a reserve to meet its obligations. This debt was subsequently restructured in November 2008 and March 2009 to decrease the amount available under the facility, reduce the interest charged, and extend the maturity date from 2 to 5 years, to September 2013. By January 2011, AIG had fully repaid the facility and it was closed. In October 2008, the Federal Reserve Board approved further assistance to AIG, authorizing FRBNY to borrow securities from certain AIG domestic insurance subsidiaries. Under the program, FRBNY was authorized to borrow up to $37.8 billion in investment-grade, fixed-income securities from AIG in return for cash collateral. These securities were previously lent by AIG’s insurance company subsidiaries to third parties. This assistance was designed to allow AIG to replenish liquidity used to settle securities lending transactions, while providing enhanced credit protection to FRBNY in the form of a security interest in the securities. This program was authorized for up to nearly 2 years but was terminated in December 2008. In late 2008, AIG’s mounting debt—the result of borrowing from the Revolving Credit Facility—led to concerns that the company’s credit ratings would be lowered, which would have caused its condition to deteriorate further. In response, the Federal Reserve Board and Treasury in November 2008 announced the restructuring of AIG’s debt. Under the restructured terms, Treasury purchased $40 billion in shares of AIG preferred stock (Series D), and the cash from the sale was used to pay down a portion of AIG’s outstanding balance from the Revolving Credit Facility. The limit on the facility also was reduced to $60 billion, and other changes were made to the terms of the facility. 
This restructuring was critical to helping AIG maintain its credit ratings. To provide further relief, FRBNY also announced in November 2008 the creation of two new facilities to address some of AIG’s more pressing liquidity issues. AIG’s securities lending program continued to be one of the greatest ongoing demands on its working capital, and FRBNY announced plans to create an RMBS facility—Maiden Lane II (ML II)—to purchase RMBS assets from AIG’s U.S. securities lending portfolio. The Federal Reserve Board authorized FRBNY to lend up to $22.5 billion to ML II; AIG also acquired a subordinated $1 billion interest in the facility, which would absorb the first $1 billion of any losses. In December 2008, FRBNY extended a $19.5 billion loan to ML II to fund its portion of the purchase price of the securities. The facility purchased $39.3 billion face value of the RMBS directly from AIG subsidiaries (domestic life insurance companies). As part of the ML II transaction, the $37.8 billion Securities Borrowing Facility established the previous October was repaid and terminated. As of August 17, 2011, ML II owed $7.3 billion in principal and interest to FRBNY. In addition, FRBNY announced plans to create a second facility—ML III—to purchase multisector CDOs on which AIGFP had written CDS contracts. This facility was aimed at facilitating the restructuring of AIG by addressing one of the greatest threats to AIG’s liquidity position. In connection with the purchase of the CDOs, AIG’s CDS counterparties agreed to terminate the CDS contracts, thereby eliminating the need for AIG to post additional collateral as the value of the CDOs fell. The Federal Reserve Board authorized FRBNY to lend up to $30 billion to ML III. In November and December 2008, FRBNY extended a $24.3 billion loan to ML III. AIG also paid $5 billion for an equity interest in ML III, which would absorb the first $5 billion of any losses.
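The tranche structure of the Maiden Lane facilities, an AIG subordinated interest that absorbs losses first with FRBNY’s senior loan exposed only beyond it, can be sketched as a simple loss waterfall. The loss amounts below are hypothetical; only the $5 billion ML III subordination level comes from the terms described above.

```python
# Simplified loss waterfall for the Maiden Lane facilities: AIG's
# subordinated interest absorbs losses first; only losses beyond it
# reach FRBNY's senior loan. Loss figures are hypothetical.
def waterfall_losses(total_loss, subordinated_interest):
    """Split a portfolio loss between the junior (AIG) and senior
    (FRBNY) positions, in $ billions."""
    junior_loss = min(total_loss, subordinated_interest)
    senior_loss = total_loss - junior_loss
    return junior_loss, senior_loss

# ML III terms: AIG's $5B equity interest absorbs the first $5B of losses.
assert waterfall_losses(3.0, 5.0) == (3.0, 0.0)  # loss within AIG's stake
assert waterfall_losses(8.0, 5.0) == (5.0, 3.0)  # excess reaches FRBNY
```

The subordination gave FRBNY a cushion: its loan could lose money only if portfolio losses exceeded AIG’s junior stake ($1 billion for ML II, $5 billion for ML III).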
As of August 17, 2011, ML III owed $11.2 billion in principal and interest to FRBNY. When the two AIG Maiden Lane facilities were created, FRBNY officials said that the FRBNY loans to ML II and ML III were both expected to be repaid with the proceeds from the interest and principal payments, or liquidation, of the assets in the facilities. The repayment is to occur through cash flows from the underlying securities as they are paid off. Accordingly, FRBNY did not set a date for selling the assets; rather, it has indicated that it is prepared to hold the assets to maturity if necessary. In March 2011, FRBNY announced it declined an AIG offer to purchase all ML II assets, and said that instead, it would sell the assets in segments over an unspecified period, as market conditions warrant, through a competitive sales process. In March 2009, the Federal Reserve Board and Treasury announced plans to further restructure AIG’s assistance. Among other items, debt owed by AIG on the Revolving Credit Facility would be reduced by up to about $26 billion in exchange for FRBNY’s receipt of preferred equity interests in two special purpose vehicles (SPV) created to hold the outstanding common stock of two AIG life insurance company subsidiaries—American Life Insurance Company (ALICO) and AIA Group Limited (AIA). Also in March 2009, the Federal Reserve Board and Treasury announced plans to assist AIG in the form of lending related to the company’s domestic life insurance operations. FRBNY was authorized to extend credit totaling up to approximately $8.5 billion to SPVs to be established by certain AIG domestic life insurance subsidiaries. As announced, the SPVs were to repay the loans from the net cash flows they were to receive from designated blocks of existing life insurance policies held by the insurance companies. The proceeds of the FRBNY loans were to pay down an equivalent amount of outstanding debt under the Revolving Credit Facility. 
However, in February 2010, AIG announced that it was no longer pursuing this life insurance securitization transaction with FRBNY. Treasury also has provided assistance to AIG. As noted, in November 2008, Treasury’s Office of Financial Stability announced plans under the Troubled Asset Relief Program (TARP) to purchase $40 billion in AIG preferred shares. AIG entered into an agreement with Treasury whereby Treasury agreed to purchase $40 billion of fixed-rate cumulative preferred stock of AIG (Series D) and received a warrant to purchase approximately 2 percent of the shares of AIG’s common stock. The proceeds of this sale were used to pay down AIG’s outstanding balance on the Revolving Credit Facility. In April 2009, AIG and Treasury entered into an agreement in which Treasury agreed to exchange its $40 billion of Series D cumulative preferred stock for $41.6 billion of Series E fixed-rate noncumulative preferred stock, allowing for a reduction in leverage and dividend requirements. The $1.6 billion difference between the initial aggregate liquidation preference of the Series E stock and the Series D stock represents a compounding of accumulated but unpaid dividends owed by AIG to Treasury on the Series D stock. Because the Series E preferred stock more closely resembles common stock, principally because its dividends were noncumulative, rating agencies viewed the stock more positively when rating AIG’s financial condition. Also in April 2009, Treasury made available a $29.835 billion equity capital facility to AIG whereby AIG issued to Treasury 300,000 shares of fixed-rate noncumulative perpetual preferred stock (Series F) and a warrant to purchase up to 3,000 shares of AIG common stock. The facility was intended to strengthen AIG’s capital levels and improve its leverage. 
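The $1.6 billion gap between the Series E and Series D liquidation preferences can be checked with back-of-the-envelope arithmetic. The dividend rate and accrual window below are assumptions for illustration; neither figure is stated in this report.

```python
# Rough consistency check of the $1.6B difference between the $41.6B
# Series E and $40B Series D liquidation preferences, described above
# as compounded unpaid Series D dividends.
# ASSUMPTIONS (not stated in the report): a 10% annual dividend rate
# and roughly 145 days of accrual between the November 2008 purchase
# and the April 2009 exchange.
principal = 40.0      # $ billions, Series D purchase
annual_rate = 0.10    # assumed Series D dividend rate
days_accrued = 145    # assumed accrual window

accrued = principal * annual_rate * days_accrued / 365
# accrued comes out near 1.6 ($ billions), consistent with $41.6B.
```

Under these assumptions the simple accrual lands close to the reported $1.6 billion; the exact figure would also reflect the compounding of the unpaid dividends.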
On January 14, 2011, with the closing of a recapitalization plan for AIG, the company repaid $47 billion to FRBNY, including the outstanding balance on the original $85 billion Revolving Credit Facility. With that, AIG no longer had any outstanding obligations to FRBNY. AIG’s Federal Securities Filings As a publicly traded company, AIG makes regular filings with SEC. In December 2008, AIG filed two Form 8-K statements related to ML III. These filings included ML III contract information and did not initially include a supporting record known as “Schedule A”—a listing of CDOs sold to ML III, including names of the counterparties, valuations, collateral posted, and other information. Questions arose about FRBNY’s role in AIG’s filings and the degree to which the Reserve Bank may have influenced the company’s filing decisions, as well as whether the company’s filings satisfactorily disclosed the nature of payments to the counterparties. AIG’s Crisis Came Amid Overall Market Turmoil AIG’s financial difficulties came as financial markets were experiencing turmoil. A sharp decline in the U.S. housing market that began in 2006 precipitated a decline in the price of mortgage-related assets—particularly mortgage assets based on subprime loans—in 2007. Some institutions found themselves so exposed that they were threatened with failure, and some failed because they were unable to raise capital or obtain liquidity as the value of their portfolios declined. Other institutions, ranging from government-sponsored enterprises such as Fannie Mae and Freddie Mac to large securities firms, were left holding “toxic” mortgages or mortgage-related assets that became increasingly difficult to value, were illiquid, and potentially had little worth. Moreover, investors not only stopped buying private-label securities backed by mortgages but also became reluctant to buy securities backed by other types of assets.
Because of uncertainty about the liquidity and solvency of financial entities, the prices banks charged each other for funds rose dramatically, and interbank lending conditions deteriorated sharply. The resulting liquidity and credit crunch made the financing on which businesses and individuals depend increasingly difficult to obtain. By late summer 2008, the effects of the financial crisis ranged from the continued failure of financial institutions to increased losses of individual savings and corporate investments to further tightening of credit that would exacerbate an emerging global economic slowdown. The Possibility of AIG’s Failure Drove Federal Reserve Aid after Private Financing Failed A year before the first federal assistance to AIG, warning signs of the company’s financial difficulties began to appear. Over the following months, the Federal Reserve System received information about AIG’s deteriorating condition from a variety of sources and contacts, and it stepped in to provide emergency assistance as possible bankruptcy became imminent in mid-September 2008. Attempts to secure private financing, which would have precluded or limited the need for government intervention, failed as the extent of AIG’s liquidity needs became clearer. Both the Federal Reserve System and AIG considered bankruptcy issues, with AIG deciding independently to accept federal assistance in lieu of bankruptcy. Because of urgency in financial markets by the time the Federal Reserve System intervened, officials said there was little opportunity to consider alternatives before extending the initial assistance in the form of the Revolving Credit Facility. When AIG’s financial troubles persisted after the Revolving Credit Facility was established, the company and the Federal Reserve System considered a range of options for further assistance. 
Throughout the course of AIG assistance, the company’s credit ratings were a critical consideration, according to Federal Reserve System officials, as downgrades would have triggered large new liquidity demands on the company and could have jeopardized government repayment. As a result, Federal Reserve System assistance reflected rating agency concerns, although both FRBNY and the rating agencies told us the rating agencies did not participate in the decision-making process. The Federal Reserve Monitored AIG’s Deteriorating Condition in 2008 and Took Action as Possible Bankruptcy Was Imminent The difficulties that culminated in AIG’s crisis in September 2008 began to draw financial regulators’ attention in 2007, when issues arose relating to the company’s securities lending program and the CDS business of its AIGFP subsidiary (see fig. 1). In December 2006, AIG’s lead state insurance regulator for the company’s domestic life insurers (“lead life insurance regulator”) began a routine examination of AIG in coordination with several other state regulators. During the examination, the state regulators identified issues related to the company’s securities lending program. Prior to mid-2007, state regulators had not identified losses in the securities lending program, and the lead life insurance regulator had reviewed the program without major concerns. As the examination continued into the fall of 2007, the program began to show losses resulting from declines in the value of its RMBS portfolio. The lead life insurance regulator told us the program had become riskier as a result of how AIG had invested cash collateral it received from its lending counterparties—in RMBS rather than in safer investments. The RMBS investments were declining in value and had become less liquid, AIG told us. Regulators recognized that left unaddressed, AIG’s practices in the securities lending program, including the losses they observed, could create liquidity risks for AIG. 
In particular, these declines could lead AIG’s securities lending counterparties to terminate their borrowing agreements, thereby requiring AIG to return the cash collateral the counterparties had posted, which AIG had invested in the RMBS. According to the lead life insurance regulator, about 20 percent of the funds AIG had collected as collateral remained in cash, indicating a potentially large liquidity shortfall if the counterparties terminated their transactions. The lead life insurance regulator also noted that AIG was disclosing relatively little information in its regulatory filings about the program and its losses, which were off-balance-sheet transactions. Another state insurance regulator told us that as part of its review, it noted that AIG life insurance companies engaging in securities lending were not correctly providing information in annual statements or taking an appropriate charge against capital for the securities lending activities. This regulator said it began discussions with the company about securities lending in 2006. AIG told us it was unaware of the regulator’s concerns. The lead life insurance regulator met with AIG management in October and November 2007 and presented the securities lending issues it had noted at a “supervisory college” meeting held by AIG’s then-consolidated regulator, OTS. The lead life insurance regulator told us it did not share with all participants that it had identified off-balance-sheet losses but that it privately advised OTS that it saw unrealized losses building in AIG’s securities lending portfolio, with the total reaching an estimated $1 billion by November 2007. It also told us this was the first time OTS learned about issues in the company’s securities lending program. At the time, OTS had concerns about a different matter at AIG.
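The liquidity risk the regulators identified, a mass termination forcing AIG to return cash collateral it had mostly reinvested in depreciated RMBS, can be sketched with a simple shortfall calculation. The dollar amounts and recovery rate below are hypothetical; only the roughly 20 percent cash share comes from the lead life insurance regulator’s account above.

```python
# Illustrative sketch of the securities lending termination risk: if
# counterparties demand their cash collateral back, AIG can return only
# the cash it still holds plus whatever it realizes liquidating the
# RMBS bought with the rest. Dollar figures are hypothetical.
def termination_shortfall(collateral_owed, cash_share, rmbs_recovery):
    """Cash gap, in $ billions, if all counterparties terminate.

    cash_share: fraction of collateral still held as cash (about 20
    percent, per the lead life insurance regulator).
    rmbs_recovery: cents on the dollar realized liquidating the RMBS.
    """
    cash_on_hand = collateral_owed * cash_share
    rmbs_proceeds = collateral_owed * (1 - cash_share) * rmbs_recovery
    return max(0.0, collateral_owed - cash_on_hand - rmbs_proceeds)

# Hypothetically, $60B owed with 20% in cash and RMBS liquidated at
# 85 cents on the dollar leaves a $7.2B gap.
gap = termination_shortfall(60.0, 0.20, 0.85)
```

The sketch shows why the regulators viewed terminations, rather than the unrealized losses themselves, as the trigger: the shortfall appears only when the collateral must actually be returned at depressed RMBS prices.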
According to OTS, in late 2007, it began to have concerns about AIGFP’s practices for valuing the CDOs on which the company wrote CDS protection, in particular whether the company’s valuations corresponded to market values. Part of the concern was that AIGFP’s CDS counterparties were seeking collateral from the company based on their own valuations. Thus, in general, there were difficulties in assessing the value of the CDOs behind the company’s CDS contracts. According to AIG’s lead life insurance regulator, OTS did not communicate its concerns about AIGFP to state insurance regulators at the supervisory college meeting in November 2007. As a result, the lead life insurance regulator told us it did not understand the extent of potential risks AIGFP posed to the AIG parent company that in turn could have created risks for the regulated insurance subsidiaries. AIG executives and advisors told us that the company made thorough disclosures about securities lending program issues, including losses and the manner in which collateral was being invested, by the third quarter of 2007. They said that state regulators did not identify issues of which the company was not aware and disclosing publicly. AIG notified regulators in early 2008 that the securities lending program had experienced significant losses as of December 2007, at which time the lead life insurance regulator told us it began efforts to coordinate regular communication among the states. Results of the examination of the securities lending program provided greater disclosure of information to regulators, such as credit ratings of underlying securities in the pool of securities in which AIG had invested its counterparties’ collateral. By February 2008, regular meetings were being held among AIG and state insurance regulators. 
As the monitoring continued into 2008, state insurance regulators held a number of in-person and phone meetings with AIG executives, as the company took steps to increase its liquidity and improve cash-flow management within the securities lending program. The lead life insurance regulator told us that prior to the stepped-up monitoring, the company’s limited disclosure about the program did not allow the regulators to understand the extent of the problem. Overall, the lead life insurance regulator said, the consensus among the state regulators was that securities lending issues, while of concern, did not present imminent danger as long as AIG’s counterparties did not terminate their lending transactions. Meanwhile, AIG management had already taken steps to bolster liquidity and cash flow management—beginning in August 2007, AIG told us—and the regulators hoped the company would recover investment losses as market conditions improved. Moreover, the lead life insurance regulator had a guarantee from the AIG parent company to cover up to $5 billion in losses stemming from the program. The lead life insurance regulator said this provided some comfort as a backstop, but it was not certain that the company had the money to fulfill that agreement. Our review indicated that neither OTS nor state insurance regulators communicated with the Federal Reserve System about AIG’s problems before the summer of 2008. FRBNY officials told us they monitored financial institutions not regulated by the Federal Reserve System, including AIG, based on publicly available information, as part of monitoring overall financial market stability. In particular, FRBNY e-mails from late 2007 and January 2008 indicated that staff were monitoring AIG’s exposure and potential losses related to the subprime mortgage market. For instance, market updates were circulated to FRBNY officials in October and November 2007 highlighting multibillion dollar write-downs in AIG’s subprime mortgage portfolio. 
Additionally, in January 2008, an FRBNY staffer sent the then-FRBNY President a market report from a private research firm that included analyses and estimates of AIG’s losses for its RMBS, CDO, and CDS activities. In February 2008, FRBNY staff wrote a memorandum on AIGFP’s CDS portfolio, which FRBNY officials said was prepared as part of FRBNY’s regular monitoring of market events. The report, circulated to some FRBNY staff, noted unrealized losses related to the CDS portfolio and AIG’s exposure to the subprime mortgage market. During the spring and summer of 2008, internal FRBNY e-mails show that FRBNY officials circulated information on a range of AIG issues, including reports about the company’s earnings losses, widening CDS spreads, potential credit rating downgrades, and worsening liquidity and capital positions. FRBNY officials told us that the level of monitoring and internal reporting conducted for AIG was consistent with that of other institutions not regulated directly by the Federal Reserve System. Under financial pressure, AIG raised $20 billion in new capital in May 2008 and also considered additional private financing options. AIG raised the capital through three sources: common stock, hybrid securities, and debt financing. The purpose, according to communication between FRBNY staff and the then-FRBNY President, was to address liquidity demands stemming from AIGFP’s requirements to post cash collateral to its CDS counterparties. In addition, FRBNY intended to have discussions with OTS to further understand the liquidity impact of AIGFP’s CDS portfolio. This meeting occurred 3 months later in August 2008. Also during the summer of 2008, AIG considered joining the Federal Home Loan Bank System (FHLB) via the company’s insurance subsidiaries. Such membership could have allowed AIG’s insurance operations to pledge some of their qualified assets against an extension of credit.
AIG executives told us the company discarded the idea after learning that funds its subsidiaries might have received would not have been accessible to the parent company. By July 2008, AIG’s then-chief executive officer had concerns that the company’s securities lending program could generate a liquidity crisis, according to interviews we conducted. He shared these concerns with AIG’s Board of Directors, telling them the only source from which the company could secure enough liquidity if such a crisis occurred was the government. He thought it was unlikely the company could approach the capital markets again after raising $20 billion only 2 months before. On July 29, the chief executive officer approached the then-FRBNY President seeking government assistance. During the meeting, the chief executive officer said he explained AIG’s liquidity situation and requested access to the Federal Reserve System discount window. According to the chief executive officer, the President did not think Federal Reserve System officials could or would do that because if the discount window was made available to AIG, it would likely precipitate the liquidity crisis the company wanted to avoid. The chief executive officer noted that the Federal Reserve System had allowed other nondepository institutions to borrow from the discount window after the failure of Bear Stearns Companies, Inc. (Bear Stearns), but said the argument failed to alter the FRBNY President’s position. In the weeks following this meeting, FRBNY officials and staff continued to gather information on AIG’s condition and liquidity issues and to circulate publicly available information. For instance, an e-mail sent in the first week of August 2008 to FRBNY officials highlighted the concerns of one rating agency about AIG’s deteriorating liquidity situation due to strains from its securities lending program and CDS portfolio. The message concluded that AIG needed to raise a large amount of additional capital. 
On August 11, 2008, FRBNY officials held their first meeting with OTS staff regarding AIG. According to a subsequent FRBNY e-mail, the meeting was an introductory discussion about AIG’s situation and other issues that could affect companies like AIG, such as problems facing monoline insurance companies. Topics discussed relating to AIG included the company’s raising of capital in May 2008, its liquidity and capital positions, liquidity management, rating agency concerns, and problems associated with AIGFP and the securities lending program. In addition, a report on August 14 from an FRBNY staff member who attended the meeting warned staff about AIG’s increasing capital and liquidity pressures, asset and liability mismatches, and the potential for credit rating downgrades, saying AIG needed to take action on these issues. FRBNY officials told us that previously, OTS staff had not communicated information about AIG that FRBNY staff would have flagged as issues to raise with FRBNY management. While FRBNY continued monitoring AIG’s situation into September 2008, FRBNY staff also raised concerns internally about the company’s ability to manage its liquidity problems. On August 18, 2008, FRBNY staff circulated a new research report on AIG by a large investment bank, which highlighted concern that AIG management may be unable to accurately assess its exposures or losses given the complexity of the company’s businesses. In its own memorandum on September 2, FRBNY noted that AIG’s liquidity position was precarious and that the company’s asset and liability management was inadequate given its substantial liquidity needs. Further, a memorandum circulated among FRBNY officials on September 14, which discussed possible lending to AIG, stated that one rating agency’s rationale for potentially downgrading the company stemmed from concerns about AIG’s risk management, not its capital situation. 
A private research report, also circulated that day, further detailed the view of the rating agency that even if AIG were to raise capital, it might not offset risk management concerns. FRBNY officials told us AIG had fragmented and decentralized liquidity management before the government intervention. Liquidity management became the responsibility of the AIG holding company in early 2008. As one official stated, AIG understood corporate-level liquidity needs but not the needs of subsidiaries, including AIGFP. Leading up to the weekend of September 13–14, 2008, AIG made renewed attempts to obtain discount window access while also initiating efforts to identify a private-sector solution. On September 9, AIG’s then-chief executive officer met again with the then-FRBNY President in another attempt to obtain relief, this time by means of becoming a primary dealer. According to the AIG chief executive, the President said he had not considered this option and would need to respond later. The chief executive told us that he did not receive a response and that he made another effort to contact the FRBNY President on September 11 but was unsuccessful. Meanwhile, AIG also made an inquiry about federal aid to Federal Reserve Board staff, according to a former member of the Federal Reserve Board. According to the former FRBNY President, at the time, a variety of firms, including AIG, were inquiring about discount window access, and he did not recall in his meetings with the AIG chief executive that the AIG chief executive conveyed any evidence or concern about an acute, impending liquidity crisis at the company. On Friday, September 12, 2008, AIG began assembling private equity investors, strategic buyers, and sovereign wealth funds to discuss funding and investment options.
Also, AIG’s then-chief executive officer said he spoke with the then-FRBNY President again about the company’s liquidity problems, saying that although the company was pursuing private financing, any solution would require assistance from the Federal Reserve System. Federal Reserve System officials and AIG executives held a meeting, during which the company provided details about its liquidity problems and actions it was considering to address them. According to the FRBNY President, September 12 was the first time the Federal Reserve System received nonpublic information regarding AIG, which indicated AIG was facing “potentially fatal” liquidity problems. One option discussed at that meeting was whether AIG could borrow from the discount window through its thrift subsidiary. FRBNY officials told us, however, that the thrift had only $2 billion in total assets and only millions of dollars in assets that could be used to collateralize a loan, which would have been small relative to AIG’s overall liquidity needs. According to an FRBNY summary of the meeting, AIG mentioned its plan to become a primary dealer over a 6- to 12-month period, but FRBNY officials determined this was not viable because its liquidity needs were immediate. On the morning of September 13, according to an internal communication, AIG executives asked Federal Reserve System officials about how to obtain an emergency loan under the authority provided in section 13(3) of the Federal Reserve Act. Officials responded that the company should not be optimistic about such assistance. Over the September 13–14, 2008, weekend, FRBNY officials conducted various analyses related to AIG, including an evaluation of the company’s systemic importance, before the Federal Reserve Board ultimately decided to authorize government assistance on September 16. We found at least one instance of quantitative analysis of the systemic risk AIG posed to the financial system.
In this analysis, historical equity returns of AIG were assessed, with a conclusion that the company was not systemically important. However, FRBNY officials told us that this analysis was conducted prior to the September 15 bankruptcy of Lehman and did not take into account market conditions that followed that event. Beyond this example, officials could not say whether any other quantitative analyses were conducted regarding systemic risk posed by AIG. Internal correspondence and documents indicate that officials’ assessment of AIG’s systemic risk relied primarily on qualitative factors. For instance, documents show that officials assessed the potential impact on subsidiaries of the AIG parent company filing for bankruptcy, the potential response of state insurance regulators in that situation, and differences between a failure of AIG and Lehman. Officials told us the Lehman bankruptcy was a key factor in how they assessed the systemic risk of an AIG failure, given what they believed would be the strain AIG’s bankruptcy would place on financial markets. Officials told us that had the Federal Reserve System prevented the failure of Lehman Brothers, they would have reassessed the potential systemic impact of an AIG bankruptcy. A former senior AIG executive expressed a similar idea to us, saying that had AIG’s crisis occurred before that of Lehman Brothers, the Federal Reserve System would not have provided any assistance to AIG, which would have led to its failure. On September 16, a day after Lehman filed for bankruptcy, an FRBNY official sent a memorandum to the then-FRBNY President and other officials assessing the expected systemic impacts of an AIG failure, including an analysis of the qualitative factors previously discussed. Officials decided that a disorderly failure of AIG posed systemic risk to the financial system, and on that basis, the Federal Reserve Board approved the $85 billion Revolving Credit Facility.
They said the only other viable outcome besides the assistance package would have been bankruptcy. Although the Federal Reserve System had various contacts and communications about AIG’s difficulties in the months preceding aid to the company, officials appear not to have acted sooner for various reasons. FRBNY’s then-President has said that because the Federal Reserve System was not AIG’s regulator, it could not have known the full depth of the company’s problems prior to AIG’s September 12 warning. In addition, FRBNY officials told us that from March to September 2008, following the collapse of Bear Stearns, they were intensively involved in monitoring the remaining four large investment banks (Merrill Lynch, Lehman, Goldman Sachs, and Morgan Stanley) not then supervised by the Federal Reserve System. They said the concern was the possibility of another collapse like that of Bear Stearns, and this unusual effort consumed a significant amount of management attention. As AIG’s Needs Became Clearer, Private Financing Failed, Prompting the Federal Reserve to Become More Involved Following AIG’s unsuccessful requests for discount window access, the company and the Federal Reserve System pursued what became a two-phase private-financing effort in advance of the ultimate government intervention. In the week beginning September 15, 2008, AIG faced pressing liquidity needs and expected to receive rating agency downgrades. The company anticipated this would result in $13 billion to $18 billion in new liquidity demands, primarily stemming from collateral postings on AIGFP CDS contracts. The ability to raise private financing was a key issue for AIG because private funding could have reduced or eliminated the company’s need for government assistance. Further, as discussed later, the inability to obtain private financing was a condition for Federal Reserve System emergency lending.
For the first phase of attempts to secure private financing, which AIG led, the company had developed a three-part plan that envisioned raising equity capital, making an asset swap among its insurance subsidiaries, and selling businesses. In the second phase of attempts to secure private financing, which began on September 15, 2008, FRBNY assembled a team of bankers from two large financial institutions to pursue a syndicated bank loan. For the first phase, AIG assembled private equity investors, strategic buyers, and sovereign wealth funds over the weekend of September 13–14. These parties considered scenarios ranging from equity investments in AIG life insurance subsidiaries to purchases of AIG assets. In all, we identified at least 14 entities as participating in the first phase (see table 1). This effort identified at least $30 billion in potential financing—well short of estimated needs that ran as high as $124 billion. Throughout the September 13–14, 2008, weekend, private equity firms and strategic buyers weighed investments in AIG’s life insurance subsidiaries, although they had concerns about the parent company’s solvency and liquidity needs. On September 12, AIG asked an investment bank advisor to assist in contacting potential investors and to provide financial information to these entities to assist in their assessments of whether and under what terms they could invest in AIG. Also on September 12, AIG engaged two investment banks and an advisor to research and identify options to raise $20 billion in private financing. According to the advisor, it was not certain at the time whether AIG was facing a problem of insolvency or liquidity. According to participants with whom we spoke, the process at AIG over the weekend consisted of a series of formal and informal meetings, during which they discussed potential investments and received briefings from AIG about its financial condition and estimates of its liquidity shortfall. 
Participants in the process told us there was uncertainty whether any private investment could satisfy AIG’s liquidity needs and what those specific needs were. One private equity firm told us that AIG did not provide an agenda for the weekend, and although it said the process became more organized on September 14, the firm did not receive data it ordinarily obtains when considering an investment. According to another private equity firm, AIG did not provide clear direction amid what the private equity firm described as a chaotic environment. This private equity firm added that some bankers expressed frustration that the process could have been less hurried had AIG started it earlier. As noted, one element of AIG’s three-part plan during the first phase contemplated raising equity capital from commercial sources. We identified two proposals the company received. First, on September 14, a private equity firm, a sovereign wealth fund, and an insurance company together made a $30 billion proposal to AIG. The offer included a private equity investment totaling $10 billion in exchange for a 52 percent stake in two life insurance subsidiaries. In addition, according to our review, the potential investors included four other elements in their plan.

1. The proposal would have created $20 billion in liquidity from an exchange of assets between AIG’s property/casualty and life insurance subsidiaries. This swap required approval of the New York State Insurance Department (NYSID).

2. The proposal relied on the Federal Reserve System granting AIG access to its discount window for a $20 billion line of credit, to be secured by bonds from the asset swap.

3. The proposal required that rating agencies commit to maintaining the company’s credit rating at AA-.

4. The proposal required replacement of AIG senior management, including the chief executive officer. 
A former senior AIG executive said AIG’s Board of Directors rejected the proposal because it was an inadequate bid with insufficient private equity contribution and many conditions. Another private equity firm told us that it also made an offer to AIG, proposing to buy an AIG insurance subsidiary at a discounted price of $20 billion. Like other firms participating in the first phase, the private equity firm determined that investing in one of AIG’s life insurance subsidiaries, rather than the parent company, posed less financial risk. AIG rejected the proposal, according to the private equity firm. Our review showed that other private equity firms present over the weekend considered investing in AIG, but no formal proposals resulted. For instance, one private equity firm contemplated a $10 billion investment in AIG life insurance subsidiaries in exchange for a 30 percent ownership interest, contingent upon additional financing from commercial banks or the Federal Reserve System. Another private equity firm said it considered an investment in AIG but was unable to make an offer given time pressure and its available investment capacity. The second part of AIG’s three-part plan during the first phase was an asset swap. In addition to being incorporated into one of the plans discussed earlier, the asset swap was also a standalone option. The company contemplated an exchange of assets between AIG property/casualty and life insurance subsidiaries to make available $20 billion in securities to pledge for cash, but this plan was contingent upon approval from NYSID. AIG executives told us they first contacted the then-Superintendent of NYSID late on September 12, 2008, in an effort to assess whether such a swap was feasible. According to our review, NYSID assisted AIG in developing the idea, although it never reached final approval. 
A condition for approval was that the swap would be part of a comprehensive solution that would include raising equity capital and selling assets—conditions that ultimately were not met. Additionally, state insurance regulators wanted to ensure that the property/casualty companies that would be involved in the plan would still have sufficient capital to protect policyholders after the asset swap occurred. According to a former senior AIG executive, the asset swap would have generated $20 billion in securities for AIG to use as security for borrowing, yielding the company $16 billion to $18 billion in cash proceeds. Toward that end, the company explored repurchase agreements, secured by assets from the swap, with two investment banks. One of the investment banks committed to $10 billion in such repurchase financing, and it noted that another investment bank was contemplating an additional $10 billion in repurchase financing. This second investment bank told us, however, that it considered providing the full $20 billion in repurchase financing to the company. According to executives of the bank, the deal never materialized because certain assets they thought AIG would post as collateral for the financing were unavailable. For the third part of its plan, AIG or its advisor contacted strategic buyers in an effort to generate cash from asset sales. On September 12, AIG offered to sell its property/casualty business for $25 billion to another insurance company. However, according to the potential buyer, the deal proved to be too expensive given time pressure. In another potential deal with the same company, AIG revived previous discussions regarding a guarantee of $5.5 billion of guaranteed investment contracts that AIGFP had written. The guarantee would have allowed AIG to avoid posting $5.5 billion in collateral in the event of a credit rating downgrade in exchange for a one-time fee. 
The fee contemplated was in the form of a transfer of life settlement policies from AIG to the insurance company. According to an executive of the insurance company, negotiations surrounding the fee continued until September 15, but the parties could not reach an agreement. An FRBNY e-mail also showed internal discussions about two other asset sales to other insurance companies—potential purchase of AIG’s Variable Annuity Life Insurance Company for $8 billion and potential purchase of another AIG subsidiary for $5 billion. In addition to these possible sales, an AIG advisor told us about a potential $20 billion deal with a sovereign wealth fund that was considering asset purchases. According to the advisor, the fund’s primary interest was in purchasing tangible assets, such as real estate. By late in the day on September 14, the first phase of efforts to identify private financing had failed, for reasons including financing terms, time constraints, and uncertain AIG liquidity needs, according to those involved. Two private equity firms indicated that a private solution was not possible without assistance from the Federal Reserve System to assure AIG’s solvency. Similarly, according to a former senior AIG executive, potential investors wanted assurances of solvency before making any investments, and the Federal Reserve System was the only entity in a position at the time to provide such assurances. AIG executives with whom we spoke acknowledged that any investments in the parent company would have been risky. In addition, two would-be investors told us that a weekend was too little time to construct a deal that would usually take at least 4 weeks. As these participants and AIG executives noted, there was not enough time or money to assist the company. Moreover, participants said the company lacked an understanding of its own liquidity needs, and there was insufficient data to support would-be investors’ decision making. 
As table 2 shows, AIG’s liquidity needs grew as analysis of the company’s financial situation progressed over the weekend. Over the weekend of September 13–14, 2008, as AIG attempted to secure private financing, the Federal Reserve System avoided actions that could have signaled to companies or other regulators that it would assist AIG. Officials received AIG requests for Federal Reserve System assistance on at least five occasions during approximately the week leading up to September 14. As noted, one of these instances occurred during a meeting between Federal Reserve System officials and AIG executives on the morning of September 13. A Federal Reserve System internal communication documenting the meeting shows that during a discussion about emergency lending under section 13(3) of the Federal Reserve Act, officials indicated to AIG that an emergency loan would send negative signals to the market. Officials told us that during the meeting, they discouraged AIG from relying on a section 13(3) loan. Meanwhile, an e-mail from an FRBNY official communicated to staff that they should avoid conveying to firms or other regulators that the Federal Reserve System was taking responsibility for AIG. Although Federal Reserve System officials were downplaying assistance to AIG, records we reviewed show they began considering the merits of lending to AIG as early as September 2 and continued to do so through the September 13–14 weekend. One communication we reviewed noted that allowing AIG to borrow through the Federal Reserve System’s Primary Dealer Credit Facility could support an orderly unwinding of the company’s positions but questioned whether such assistance was necessary for AIG’s survival. In addition, e-mails on September 13 show officials considering the operational aspects of lending to AIG through the Primary Dealer Credit Facility, including an evaluation of the collateral available for AIG to post against a loan. 
Reflecting other concerns, a September 14 communication discussed the merits and drawbacks of lending to AIG. The merits included the possibility that Federal Reserve System lending could prevent an AIG bankruptcy and the potential impacts on global markets that could follow. The drawbacks included that such a loan could diminish AIG’s incentives to pursue private financing to solve its problems. Similarly, some staff preliminarily discussed reasons why the Federal Reserve System should not lend to AIG. These staff were concerned that although there could be short-term benefits, such as helping to stabilize the financial system, the potential moral hazard costs would be too great, according to information we reviewed. Federal Reserve Board officials told us that, given insufficient information and the speed at which events unfolded, no written staff recommendation on whether to lend to AIG was ever finalized or circulated to the Federal Reserve Board. While Federal Reserve System officials considered implications of lending to AIG, they also analyzed the company’s financial condition, including its liquidity position and risk exposures. FRBNY officials told us that staff were instructed to “understand” the nature and size of AIG’s exposures. According to internal correspondence, officials established a team to develop a risk profile of the AIG parent company and its subsidiaries and to gather information, such as financial data. They also worked on a series of memorandums over the weekend highlighting issues at AIG. Much of the analysis focused on the exposures of AIGFP. In addition, records from the weekend show that officials evaluated AIG’s asset-backed securities and CDS portfolio, the company’s systemic importance, and bankruptcy-related issues. 
According to FRBNY officials, a team from FRBNY’s Bank Supervision Group looked at public information to assess AIG’s condition and, in particular, whether the company’s insurance subsidiaries were a source of financial strength for the company. Officials also met with AIG executives to discuss the company’s liquidity risks. The company provided information detailing the financial institutions with the largest exposures to the company, including credit, funding, derivatives, and CDS exposures. The Federal Reserve System also monitored AIG’s discussions with potential investors and NYSID on September 13–14. As noted, Federal Reserve System officials met with AIG executives on September 13. According to minutes from the meeting, although the company needed financing immediately, asset sales could require 6–12 months to complete. For that reason, as noted in the summary of the meeting, AIG expressed interest in Federal Reserve System lending facilities to support its liquidity needs as it sold assets. Federal Reserve System records also indicate uncertainty among officials about whether a private-sector solution would be forthcoming over the weekend. For example, on the morning of September 13, Federal Reserve Board and FRBNY officials discussed telling AIG that it could not rely on the Federal Reserve System for financing, so that the company would focus on its own actions to solve its problems. On the night of September 14, a Federal Reserve Board official described two private equity plans under consideration, both of which were conditioned on Federal Reserve System assistance. After AIG had rejected one plan, a question was raised about what would prompt AIG to consider restructuring or a strategic partnership. 
Further, an e-mail from September 14 shows the view of one official that AIG was unwilling to sell assets it thought would offer profit-making potential in the future, while at the same time attempting to use the situation to its advantage to convince the Federal Reserve System to offer discount window access. According to the official who wrote the e-mail, AIG was avoiding difficult but viable options to secure private financing. As part of its weekend monitoring of private-sector efforts, officials also had discussions with NYSID and AIG about the status of plans being considered. In addition, one FRBNY official told us of a meeting with a private equity firm over the weekend in order to assess whether its plans to finance AIG were genuine. Overall, FRBNY officials told us that they acted as observers to the events unfolding at AIG over September 13–14 and did not participate in any negotiations on private financing. Rather, they told us their primary focus was addressing the Lehman crisis occurring that same weekend. Officials had meetings throughout the weekend with senior executives of various financial institutions about the Lehman situation. During these meetings, the issue of AIG arose. FRBNY officials told us they received assurances from chief executive officers of three financial institutions present that they were working on AIG’s problems and would address the company’s liquidity needs. Although the Federal Reserve System’s own monitoring of the situation that weekend showed AIG was unable to arrange private financing, an FRBNY official told us there was no information calling into question the financial institutions’ assurances that they would handle the AIG situation. Rather, the Lehman bankruptcy on September 15 and its effect on financial markets eventually called the assurances into question, the official told us. A related issue arose regarding assurances and AIG’s regulators. 
FRBNY officials said in Congressional testimony that state insurance regulators and OTS had assured them over the September 13–14 weekend that a private-sector solution was available for AIG, and that officials had no basis to question those assurances. State insurance regulators, however, told us no such assurances were given. According to Federal Reserve System officials, they did not consult OTS about AIG’s condition, given the time pressure of events. Further, records we examined indicate that AIG and Federal Reserve System officials themselves communicated the difficulties the company encountered in attempting to obtain private financing over the weekend. Following the failure of the AIG-led weekend efforts, FRBNY began what became the second phase of the private-financing effort on Monday, September 15, 2008. This attempt moved away from equity investments or asset sales and instead focused on syndicating a loan. FRBNY records we reviewed show that some officials continued to believe on September 15 that AIG had options to solve its problems on its own. Nonetheless, FRBNY called together a number of parties and urged them to come up with a private loan solution. According to our review, participants in the meeting included AIG, Treasury, three investment banks, an AIG advisor, an FRBNY advisor, and NYSID. The then-FRBNY President initiated this effort late in the morning of September 15 and requested that the two investment banks identify a commercial bank loan solution for AIG. According to investment banks we interviewed, the FRBNY President did not specify any deadlines or provide special instructions to the financial institutions but asserted that government assistance was not an option. One of the investment banks told us that participants focused on four areas during the second phase—assessing liquidity needs, valuing assets, creating loan terms, and identifying potential lenders. 
Participants contemplated a $75 billion syndicated loan, consisting of $5 billion contributions from 15 financial institutions. According to FRBNY, the banks envisioned that AIG would need 6 months to sell assets and repay the loan. While the banks worked to create a loan package, FRBNY focused on assessing the exposures to AIG of regulated financial entities, nonbank institutions, and others. Late on September 15, according to our review, the participants reported to the then-FRBNY President about difficulties in securing a loan, to which the President responded with a request that they continue—but this time, also considering a potential government role. According to a former Treasury official, the then-FRBNY President said the Federal Reserve System would provide $40 billion in financing for AIG, but the participants would have to find the remainder. This was the first instance we identified in which officials indicated externally that they would consider government assistance. According to an investment bank, the participants then continued discussions. Nonetheless, the loan effort failed. By the night of September 15, officials concluded private firms could not find the resources to solve the problem, the former FRBNY President told us. The next day, the then-FRBNY President ended the second phase of attempts to find private financing for AIG. The former President told us he could not recall the first mention of government intervention, but that he believed the possibility of government assistance was discussed with the Federal Reserve Board and Treasury on the night of September 15. Participants and FRBNY officials provided varying explanations for why the second phase failed. According to one of the investment banks, AIG’s liquidity needs at the time exceeded the value of any security to back a loan. Therefore, the participants on September 15 did not attempt to line up syndication partners. 
In addition, one senior AIG executive expressed the view that the Federal Reserve System waited too long to understand and act on the company’s problems. FRBNY officials, however, cited a desire by the banks to protect their finances amid general market turmoil that was exacerbated by the Lehman bankruptcy. They added that private-sector collateral concerns notwithstanding, the collateral AIG used to back the $85 billion Revolving Credit Facility fully secured the Federal Reserve System to its satisfaction, a condition of section 13(3) emergency lending. On the morning of September 16, 2008, the then-Secretary of the Treasury, the Chairman of the Federal Reserve Board, and the then-FRBNY President held a conference call regarding AIG. According to an FRBNY official on the call, the three agreed that the Federal Reserve Board should approve lending to the company. The former FRBNY President told us nothing more could have been done to secure private financing, as the extent and severity of AIG’s liquidity needs, coupled with mounting panic in financial markets that was accelerated by the failure of Lehman, meant private firms had no capacity to satisfy AIG’s needs. Later that day, after the two failed efforts at private financing, the Federal Reserve Board authorized FRBNY to enter into the Revolving Credit Facility with AIG to avoid what officials judged to be unacceptable systemic consequences if AIG filed for bankruptcy.

The Federal Reserve Offered AIG Help in Avoiding Bankruptcy, and AIG Made the Final Decision to Accept Government Assistance

By September 12, 2008, as AIG headed into the weekend meetings aimed at identifying private financing, the company had also begun considering bankruptcy issues, as it faced possible failure during the week of September 15. According to a former senior AIG executive, around September 12, the company engaged legal counsel to begin preparations for a possible bankruptcy. 
As noted, AIG also gave a presentation to FRBNY officials on September 12, which included information about possible impacts of bankruptcy. After AIG’s presentation, FRBNY officials began their own assessment of the prospect and possible effects of AIG’s failure, focusing on the systemic consequences of bankruptcy and how the legal process of filing might unfold. On September 14, FRBNY held a discussion about AIG with risk managers of an investment bank as well as the Office of the Comptroller of the Currency. According to a meeting record, AIG would have been forced to file for bankruptcy on September 15, absent private financing to meet its liquidity demands. Officials’ concern about the systemic effect of an AIG bankruptcy included whether such a filing would have prompted state insurance commissioners to seize AIG insurance subsidiaries. According to FRBNY officials, regulatory seizures of AIG’s insurance subsidiaries following a bankruptcy filing would have complicated any efforts to rescue the company because AIG’s businesses were interconnected in areas such as operations and funding. Therefore, according to the officials, discrete seizures by individual state insurance regulators would have made bankruptcy unworkable. In addition, foreign authorities were becoming concerned, and bankruptcy could have resulted in insurance regulators worldwide seizing hundreds of AIG entities. According to the officials, they looked at the experience of previous insurance company failures, but none were comparable to AIG’s situation. According to our review, both AIG executives and a number of government officials expressed concerns about possible seizures of AIG assets shortly before the Federal Reserve Board authorized the Revolving Credit Facility. For example, at an AIG Board meeting on September 16, an AIG executive stated that NYSID would seize the company’s New York insurance units if AIG went into bankruptcy. 
A former senior AIG executive told us that on September 16, at least three state insurance regulators said they would seize AIG insurance subsidiaries in their states if the parent company filed for bankruptcy. In a number of records we examined, government officials also stressed the likelihood that insurance subsidiaries would be seized, particularly those experiencing financial difficulties. State insurance regulators were less certain of the likelihood of seizure, according to our review. A former state insurance official told us that he cautioned FRBNY officials that seizures were highly likely. AIG’s lead life insurance regulator told us it considered the possibility of intervention, but added that states generally have an incentive not to place insurance companies into receivership, as that has negative connotations that could diminish companies’ value. Several state insurance officials overseeing AIG’s property/casualty and life insurance businesses told us that bankruptcy of the AIG parent company would not have required them to act as long as the insurance subsidiaries were solvent, and they did not foresee insolvency. Two state insurance regulators also told us they did not communicate to the Federal Reserve System or AIG that they would intervene in the company’s subsidiaries. State insurance officials said that in the past, their approach has been to monitor the situation when a parent company filed for bankruptcy—for example, Conseco, Inc.—because statutory provisions protected insurance company assets. In offering to assist AIG, the Federal Reserve Board sought specifically to give the company the means to avoid a bankruptcy filing because of concerns about systemic risk, officials told us. Our review showed that beyond offering a way to avoid such a filing, the Federal Reserve Board had no direct role in the AIG board’s consideration of bankruptcy on September 16. On that day, an AIG board meeting had already been scheduled at 5 p.m. 
to discuss the possibility of bankruptcy, according to a former senior AIG executive. After the Federal Reserve Board offer earlier in the day, the meeting became a discussion about government assistance versus filing for bankruptcy, which the former executive described as the only available alternative. According to information we reviewed, the AIG board’s view was that the terms of the government’s offer were unacceptable, given a high interest rate and the large stake in the company—79.9 percent—the government would take at the expense of current shareholders. AIG executives telephoned FRBNY officials during the AIG board meeting in an effort to negotiate terms of the Revolving Credit Facility, but the FRBNY officials said the terms were nonnegotiable and that the company had no obligation to accept the offer. During the AIG board meeting, AIG’s advisors also discussed implications of a potential bankruptcy filing. This discussion included the value of potential future asset sales and the value of the company’s subsidiaries generally, as well as legal advice on what the company’s fiduciary duties were in any such event. As part of its consideration of bankruptcy issues, AIG’s board also contemplated debtor-in-possession financing from an investment bank. But AIG told us its financial adviser believed such financing would have been difficult in light of then-current market conditions, and a former senior AIG executive told us AIG would have required debtor-in-possession funding of unprecedented size at a time when markets were volatile. The AIG board decided that government assistance was the best option because that would best protect AIG’s value, according to records we reviewed. Additionally, a former senior AIG executive told us that AIG accepted the Federal Reserve System’s offer of assistance because of uncertainty about how bankruptcy proceedings would unfold. Ultimately, 10 of the 11 directors voted to accept the federal loan offer. 
AIG executives and advisors stressed to us that the only matter presented for consideration that day was whether to accept the Federal Reserve System’s loan offer. As part of that, however, directors considered issues and implications that might arise from a bankruptcy filing, they said. The executives said that at that point, the company was not prepared to file for bankruptcy if it did not accept the loan, and no bankruptcy petition had been prepared for filing or directed to be prepared. AIG executives told us that after accepting the Federal Reserve System loan, they did not consider bankruptcy issues again but rather focused on devising solutions to the company’s problems. FRBNY officials told us that as a practical matter, AIG’s acceptance of the Revolving Credit Facility had effectively precluded bankruptcy as an option, at least in the short term, because it would have immediately put the funds that FRBNY had loaned to AIG at risk. Nevertheless, FRBNY continued to examine bankruptcy as an alternative to additional government assistance over the next several months following the establishment of the Revolving Credit Facility, according to records we examined. For instance, in briefing slides circulated to FRBNY officials on October 7, one FRBNY staff member argued that bankruptcy was the least-cost resolution for AIG, even though the company continued to pose systemic risk. Also, Federal Reserve Board staff began gathering data on the systemic implications of an AIG bankruptcy and devising a contingency plan to protect the banking system. A bankruptcy advisor to FRBNY told us that officials continued to discuss bankruptcy in lieu of federal assistance throughout the fourth quarter of 2008 and into early 2009. 
Internal FRBNY briefing slides from February 2009 show consideration of the consequences and costs of bankruptcy versus further government assistance, including restructuring of the government’s TARP investment in AIG and additional capital commitments for AIG’s subsidiaries. The assessment concluded that bankruptcy costs would reflect loss of the government’s TARP investment in preferred stock, plus any additional losses from unpaid portions of the Revolving Credit Facility. It further noted that AIG would be more likely to repay the government if it received more assistance than if it filed for bankruptcy. Moreover, due to AIG’s interconnections with other financial institutions, bankruptcy had other potential costs to the government, such as the possibility that other institutions with exposure to AIG would need subsequent government support. There could also be a run on the life insurance industry, the assessment noted. The Federal Reserve Board also weighed effects of bankruptcy when considering additional government assistance, according to minutes of a Federal Reserve Board meeting on February 19, 2009. The minutes show that given the potential costs of bankruptcy to AIG’s insured parties, the governors generally agreed that stabilizing AIG with more government aid was the only option at that point, notwithstanding concerns over potentially increased taxpayer exposure. In addition to these concerns, FRBNY, its bankruptcy advisor, and state insurance regulators also cited other factors that complicated the viability of bankruptcy for either the AIG parent company or its subsidiaries. First, according to the advisor, AIG’s Delaware-based federal savings bank, as well as the company’s foreign and domestic insurance subsidiaries, could not file for bankruptcy protection because they were not eligible to be Chapter 11 debtors. 
State insurance regulators told us that if AIG failed, then the parent company, its AIGFP unit, and other entities would have filed for bankruptcy, but that state insurance laws prevented the parent company from accessing insurance subsidiary assets to satisfy claims of any entities other than policyholders. FRBNY’s advisor told us that the legal limitations on any partial bankruptcy were as important to assessing whether to provide assistance to AIG as the issues concerning the company’s close connections with other entities. Second, AIG’s parent company had guaranteed many liabilities of its subsidiaries. For example, AIGFP relied on the strength of the parent company’s finances and credit ratings. As a result, according to FRBNY’s bankruptcy advisor, a bankruptcy of either the parent or AIGFP would have constituted a default under AIGFP’s CDS contracts, potentially leading to termination of the contracts and additional demands for liquidity. As noted in a document circulated among FRBNY officials on October 7, 2008, a default on AIGFP’s CDS contracts could have involved a large number of the company’s counterparties. Moreover, according to an advisor, the CDS contracts were defined as agreements that would have been exempt from automatic stay under the U.S. bankruptcy code. As a result, AIGFP’s CDS counterparties could have terminated their contracts notwithstanding an AIG bankruptcy filing, obligating AIG to pay the counterparties early termination amounts on those transactions. FRBNY’s bankruptcy advisor told us that neither AIGFP nor the parent company, as guarantor of AIGFP’s obligations, would have had the funds to pay the cost of early terminations of all such positions in AIGFP’s derivatives portfolio, including CDS and other types of derivatives. As discussed earlier, FRBNY briefing slides indicated that AIG’s bankruptcy at the time would have resulted in $18–24 billion in funding needs. 
Also, because some of the company’s CDS counterparties were European banks, the potential economic loss from a default could have affected the global banking system. Another concern underlying officials’ bankruptcy considerations was whether refusing to provide additional support for AIG beyond the original aid would have hurt the government’s reputation or market confidence, according to records we reviewed. For instance, one memorandum notes that allowing AIG to fail after providing the Revolving Credit Facility would have caused loss of market confidence in government support, which could have had systemic consequences. FRBNY officials told us that a similar concern existed about preserving confidence in policymakers and that withdrawing from the Federal Reserve System’s strategy only weeks after the Revolving Credit Facility was extended would have been extraordinary. There were similar confidence issues with respect to AIG that contributed to decisions on assistance. An FRBNY advisor told us there were questions of whether AIG could survive a bankruptcy proceeding because the company had built its business model on long-term customer confidence. For example, the advisor noted that during the fall of 2008, customers were saying they would not renew their coverage without a solution in place to address AIG’s problems. Another advisor opined that if AIG filed for bankruptcy, officials could have avoided moral hazard and criticism over use of additional public funds. However, bankruptcy also could have led to further market deterioration at a time when there was already uncertainty about Lehman and other financial issues, the advisor said. FRBNY officials told us they continued to consider contingency plans for AIG, including the desirability of bankruptcy, until around August 2009, by which time new board members and a new chief executive had been named. 
According to officials, the contingency planning reflected overall concerns about financial market stability that persisted beyond the September 2008 weekend of the Lehman bankruptcy and AIG crisis. For example, officials told us that between September 16, 2008, and January 2009, insurance companies other than AIG lost approximately $1 trillion in market value, and many of them were on the verge of bankruptcy. By the end of 2009, however, the company's situation had improved to a point that bankruptcy ceased to be a focus in consideration of options, according to the officials.

Given the Crisis, There Was Little Time to Consider Alternatives for Initial Aid, but AIG and the Federal Reserve Considered a Range of Options for Later Assistance

FRBNY officials told us that overwhelming pressure to act quickly at the time the Revolving Credit Facility was established prevented them from thoroughly considering other options. They said this pressure was the result of three factors:

- They did not understand the size and nature of AIG's liquidity needs until AIG's presentation on September 12, 2008.
- AIG, as noted, faced a potential credit rating downgrade on September 15 or 16 that would have generated large demands for cash.
- The company was unable to roll over commercial paper at maturity, so large cash commitments would have been due on September 17.

Officials told us that given these constraints, there was no time to engage advisors and fully explore options. Still, records we examined show that some alternatives were considered. An FRBNY staff memorandum from September 13, 2008, cited two alternatives to the Revolving Credit Facility. One was to lend to AIG through an intermediary to which a Reserve Bank had the authority to lend, such as a commercial bank or primary dealer. Officials told us the problem with this idea was uncertainty whether an intermediary would execute any plan and under what terms.
The other idea was to provide financing to AIG from Treasury or NYSID. Officials told us, however, that at that time, Treasury had no authority to offer assistance and NYSID did not have the necessary funds. Before the Revolving Credit Facility, there was also discussion of potential financing through the FHLB system. FRBNY e-mails on September 15, 2008, show consideration of whether AIG could secure FHLB financing through its insurance subsidiaries, which, as noted earlier, AIG itself had contemplated over the summer of 2008. The e-mails note that AIG's federal savings bank was a member of the FHLB of Pittsburgh and indicate that the FHLB of Dallas was willing to lend to AIG against high-quality collateral. Nevertheless, FRBNY officials said the time constraints prevented meaningful exploration of solutions other than to either let AIG fail or to provide the emergency loan. In the week following establishment of the Revolving Credit Facility, officials began their own assessment of AIG's condition before considering options for additional assistance. Previously they had relied on information from AIG and those involved in private financing efforts. Records we reviewed show that on September 17, 2008, the day after AIG accepted the Revolving Credit Facility, FRBNY had a team at AIG to monitor collateral valuation practices, risk management, and exposures of various subsidiaries. According to FRBNY officials, there were two main objectives during that first week: (1) to forecast AIG's liquidity situation to better understand the company's needs moving forward and (2) to verify that the Revolving Credit Facility was secured and that AIG's draws against it did not exceed the value of posted collateral. FRBNY officials said that they wanted to develop their own views on these matters and engaged three advisors for assistance during that week. Following initial assessments, FRBNY and its advisors shifted attention to considering additional options for AIG.
According to FRBNY officials, they already had begun to think about other ways to provide aid they believed AIG would need while still in the process of drafting documents for the Revolving Credit Facility. For that reason, officials said, they drafted a credit agreement for the facility that would allow them to make changes in government support for AIG without the company’s consent. FRBNY officials said their general approach in considering options was to have AIG bear a cost for any benefit received, so that the company had a strong economic incentive to repay assistance. According to these officials, FRBNY had no interest in providing funds beyond the initial Revolving Credit Facility unless the clear purpose was to stabilize the company. Also, the officials said they did not want aid to create negative incentives in the company that could create reliance on government protection, and they were mindful of rating agencies’ concerns. Further, avoiding arrangements that created a continuing relationship with AIG was important. An FRBNY advisor told us that this approach also included trying to contain the problems at AIGFP. FRBNY officials told us that in general, the process for developing options, given the objectives cited previously, was to brainstorm ideas while taking note of applicable constraints or barriers. In the end, the available options narrowed to essentially the plans that were implemented. FRBNY officials said that in developing options, one element remained constant—the expectation that AIG’s source of repayment for its emergency lending would be through liquidation or sale of whole subsidiaries, rather than through company earnings. Officials did not consider company earnings alone to be sufficient in light of AIG’s needs to reduce its size and stabilize itself through recapitalization. 
Further, the officials told us that while the private-sector lending plan of September 15, 2008, contemplated liquidating the company in 6 months, they were doubtful that could be achieved. According to these officials, liquidation over a short period would have led to additional credit rating downgrades, furthering concerns about AIG's rating-sensitive business model. After the initial provision of aid, AIG's liquidity problems remained, and the original terms of the Revolving Credit Facility contributed to higher debt costs. Officials were concerned the company's credit ratings would be lowered, which would have caused its condition to deteriorate further. There were also continuing concerns about AIG's solvency. As discussed in October 2008, market doubts about solvency stemmed from concerns about liquidity, the company's exposure to RMBS and asset-backed securities (via its CDS transactions), and the impact of AIG's difficulties on the business prospects of its insurance subsidiaries. FRBNY officials noted that in addition to its own particular problems, AIG also was facing the same difficulties as other financial institutions at the time, such as the loss of access to the commercial paper market. In the weeks following the announcement of the Revolving Credit Facility, AIG's actual and projected draws on the facility grew steadily (see fig. 2). AIG used almost half the facility by September 25 and was projected to begin approaching the $85 billion limit by early October. Ultimately, AIG's actual use of the facility peaked at $72.3 billion on October 22, 2008. In response to AIG's continuing difficulties, FRBNY officials told us that they considered a range of options leading up to the November 2008 restructuring of government assistance. However, our review found that the first possibility for modifying assistance to AIG came from the private sector.
We found that on September 17, 2008, a consultant contacted the Chairman of the Federal Reserve Board and the then-FRBNY President to raise an idea, suggested by a client, to form an investor group that was willing to purchase about $40 billion of the $85 billion Revolving Credit Facility. The client said such a purchase would be advantageous to the Federal Reserve System because it would provide a positive signal to financial markets and could transfer some of the risk of the loan to the private parties, whose involvement would also demonstrate that the Revolving Credit Facility had commercial appeal. Federal Reserve Board officials told us that this idea, which came only days after the failure to obtain private financing for AIG, did not develop further. Earlier, as AIG's board contemplated government assistance on September 16, the former FRBNY President told the company he was willing to consider an offer for private parties to take over the credit facility. The President characterized the idea as a preliminary offer and told us he understood one feature was to make the investors' $40 billion investment senior to the government's interest. That would have significantly increased the risk of the FRBNY loan, making the Reserve Bank more vulnerable to a loss, the former President said. He said allowing FRBNY's interest to become subordinate to that of private investors would not have been in the best interest of taxpayers. During October 2008, the Federal Reserve System considered options that included what became ML II and ML III, as well as an accelerated asset sales process and government purchases of AIG's life insurance subsidiaries. As discussed earlier, officials expected that AIG would have to divest assets to generate cash to repay the government's loan. Toward that end, the Federal Reserve Board asked staff to encourage AIG to sell assets with greater urgency, according to information we reviewed from October 2008.
In addition, as FRBNY briefing slides from October 2, 2008, show, officials contemplated other options, including financial guarantees on the obligations of AIGFP and its CDS portfolio, increasing the $85 billion available under the Revolving Credit Facility, and becoming the counterparty to the company's securities lending portfolio (the latter of which was acted upon, with the Securities Borrowing Facility). FRBNY officials also considered a proposal to directly support AIG's insurance subsidiaries, to preserve their value, according to the October 2 slides. The presentation notes that these potential support actions would include "keepwell" agreements and excess-of-loss reinsurance agreements, which would ultimately terminate upon sale of the subsidiary. It further noted that this approach would have allowed officials to address credit rating concerns by severing the link between the ratings of AIG's parent and its subsidiaries. When considering options for AIG, FRBNY officials said they also took into account legal barriers, which eliminated some of the alternatives contemplated, such as guarantees, keepwell agreements, and ring-fencing of AIG's subsidiaries. Under section 13(3) of the Federal Reserve Act, a Reserve Bank's authority did not extend beyond making loans authorized by the Federal Reserve Board that were secured to the Reserve Bank's satisfaction. Moreover, officials told us they had no authority to issue a guarantee. In mid-October, Federal Reserve Board and FRBNY staff discussed options, such as a guarantee or keepwell agreement, with Federal Reserve Board staff opposed to these options. The staffs also discussed the possibility of Treasury providing such arrangements and whether these options were important in case of a credit rating downgrade.
The issues were whether the government could protect the value of the AIG insurance subsidiaries that collateralized the FRBNY credit facility and prevent the abrupt seizure of those companies by state insurance regulators. As for ring-fencing, officials told us it was not viable due to time constraints and the lack of a legal structure to facilitate it. An FRBNY advisor told us that Treasury may have been able to provide a guarantee to AIG but that the amount of any guarantee would have been subject to limitations. The advisor added that the guarantee also raised moral hazard issues. As Federal Reserve System officials continued to consider the best approach for AIG, other relief became available. In late October 2008, some AIG affiliates began to access the Federal Reserve System’s newly created Commercial Paper Funding Facility. The Emergency Economic Stabilization Act of 2008, enacted the same month, gave Treasury the authority to make equity investments, which it used to make its $40 billion investment in AIG in November 2008. Meanwhile, according to records and interviews with FRBNY officials, AIG proposed plans—including the provision of additional government funds to purchase CDOs that were the subject of the company’s CDS contracts and a repurchase facility with the government—in which AIG would purchase assets in a transaction similar to what ML III did. The officials told us that while they aimed to stem AIG’s liquidity drains, they also wanted to limit erosion of the company’s capital, and a repurchase facility would have jeopardized that objective. In addition, the repurchase facility would have placed FRBNY in a continuing relationship with AIG, which FRBNY officials told us was generally an unwanted outcome for any option. 
Ultimately, the assistance provided to AIG in the 2 months following the Revolving Credit Facility included the Securities Borrowing Facility, ML II and ML III, restructuring of the Revolving Credit Facility’s terms, the Commercial Paper Funding Facility, and assistance from Treasury under TARP. Before the March 2009 restructuring of government assistance, FRBNY and its advisors continued to consider more possibilities for assisting AIG, in particular, for helping it sell assets. According to one advisor, AIG faced a number of challenges in the months leading up to this second restructuring of government assistance. For example, AIG was expecting a loss for the fourth quarter of 2008 of $40 billion, which was $15 billion more than its loss in the previous quarter. (The actual loss AIG reported was $61.7 billion, which was reported at the time as being the largest quarterly loss in U.S. corporate history.) In addition, AIG’s asset-sale plan was under pressure from low bids, delays, and limited interest from buyers who lacked financing in a fragile credit market. As a result of these and other issues, FRBNY officials expected AIG to receive a credit rating downgrade. In response, both the company and FRBNY considered a number of new options. According to company records, AIG considered a package of options that included asset and funding guarantees, a debt exchange to reduce the Revolving Credit Facility, and recapture of fees the company paid on the Revolving Credit Facility worth $1.7 billion plus interest. Ideas of FRBNY or its advisors included additional TARP investments by Treasury, $5 billion in guaranteed financing for AIG’s International Lease Finance Corporation, and nationalization of the company. The latter, as noted in the records of an advisor, included provisions for winding down AIGFP, converting Treasury’s preferred stock investment under TARP into common stock, and providing government guarantees of all AIG obligations. 
FRBNY and its advisors continued to develop options after the restructuring on March 29, 2009, but that was the last time the Federal Reserve Board formally authorized assistance for AIG, as the company’s prospects began to stabilize. According to records we reviewed, these options included creation of a derivatives products company with a government backstop to engage in transactions with AIGFP’s derivative counterparties and separating AIGFP from the AIG parent company to mitigate risks the subsidiary posed. According to FRBNY officials, their general attitude toward AIG and consideration of options in the months following the Revolving Credit Facility was to listen and observe, trying to see how the firm was attempting to solve its problems. This approach sometimes meant they did not share information or plans with AIG—for example, when they were considering details for ML III or expected contingencies if the government decided not to provide additional support for the company. AIG executives described their relationship with FRBNY as collaborative and said that FRBNY officials did not deter the company from proposing solutions. They also noted there was frequent contact between the company and FRBNY. Overall, FRBNY officials told us that they led the development of options, while relying on three advisors for expertise in designing structures and analyzing scenarios. FRBNY engaged advisors primarily for evaluation of technical details, as staff did not have the expertise to conduct the depth of analysis and modeling required, for example, in creating ML II and ML III. FRBNY officials also told us they gave guidance to AIG while focusing on options that would stabilize the company and provide repayment of the government assistance—although those goals were not always aligned. In mid-October 2008, for instance, AIG approached officials about the company’s idea for the repurchase facility noted earlier. 
FRBNY officials said they told the company not to pursue that course but to continue attempts to negotiate terminations with its CDS counterparties. Officials said that they were in a good position to assess ideas AIG proposed because they had begun work related to ML II and ML III in the weeks after the establishment of the Revolving Credit Facility.

Credit Ratings Were a Key Consideration in AIG Assistance

Although the performance of credit rating agencies during the financial crisis has drawn criticism, Federal Reserve System officials said AIG's credit ratings were central to decisions about assistance because rating downgrades could have triggered billions of dollars in additional liquidity demands for the company. Downgrades could also have jeopardized AIG's asset sales plan and repayment of government aid, if a downgrade led to events that significantly reduced the value of AIG assets. As a result, FRBNY joined with AIG to address rating agency concerns throughout the course of government assistance to the company. Beginning in late 2007, AIG's exposure to the subprime mortgage market and its deteriorating derivatives portfolio raised concerns among rating agencies, rating agency executives told us. In February 2008, AIG announced a material weakness in the valuation of its CDS portfolio, leading Moody's Investors Service to lower its ratings outlook for AIG senior debt from stable to negative. In the same month, other rating agencies also placed AIG on negative outlook, suggesting the possibility of a future downgrade. As 2008 progressed, AIG executives met with rating agencies to discuss the company's situation. Following reviews of AIG's deteriorating condition and the announcement of losses for the first quarter of 2008, Moody's Investors Service, Standard & Poor's, Fitch Ratings, and A.M. Best Company all downgraded AIG's ratings in May 2008.
Over the summer of 2008, AIG communicated with rating agencies about its development of a strategic plan to address its problems. The company expected to announce the plan at the end of September, a former AIG executive told us. On August 6, AIG announced a second quarter loss of $5.36 billion. Rating agencies initially said they would hold off action until the company's chief executive officer presented the new strategic plan, the former executive told us. By late August, however, rating agencies had indicated to AIG that they would review the company and probably downgrade its rating, the executive said. This development, the former executive added, was ultimately responsible for the company's liquidity crisis in September 2008. In the weeks leading up to AIG's crisis weekend of September 13–14, rating agencies cited concerns about mounting problems in AIG's CDS portfolio and indicated they would lower AIG's credit ratings unless the company took actions to prevent the move. Other rating agency concerns included AIG's declining stock price, its liquidity position in general, and its risk management practices above and beyond capital needs. One rating agency said that during the second week of September, concerns about AIG's financial condition increased greatly over a short period of time. Immediately after the Federal Reserve Board authorized the Revolving Credit Facility, the potential for downgrades following the announcement of an expected quarterly loss effectively established a deadline for the Federal Reserve System as it worked to restructure its assistance to the company. FRBNY officials told us they timed restructuring plans to coincide with AIG's release of its third quarter results on November 10, 2008, because they expected that an announcement of a quarterly loss would result in a downgrade without a strategy to further stabilize the company.
By early October, Federal Reserve Board staff identified forestalling a ratings downgrade as the priority because a downgrade would hurt AIG subsidiaries’ business, among other problems. Although the Federal Reserve System’s Securities Borrowing Facility implemented earlier had helped to prevent downgrades, rating agencies wanted to see additional measures taken. FRBNY also considered asking rating agencies to take a “ratings holiday,” whereby the rating agencies would agree not to downgrade AIG. Information we reviewed further indicates that leading up to the announcement of restructuring of government assistance in November 2008, FRBNY and Federal Reserve Board officials were concerned about ratings and whether options they were considering would prevent a downgrade. October 26 briefing slides from an FRBNY advisor detailed various rating agency concerns, including ongoing liquidity and capital problems at AIGFP, the parent company’s debt levels following the Revolving Credit Facility, and risks associated with executing AIG’s asset sales plan. When the Federal Reserve Board considered authorization of the restructuring package, a key factor was rating agency concerns. Ratings implications continued to factor into officials’ decisions leading up to the second restructuring of government assistance in March 2009 but with a greater focus on AIG’s asset sale plans and the performance of its insurance subsidiaries. According to an FRBNY advisor, potential losses, combined with AIG’s deteriorating business performance, difficulties selling assets, and a volatile market environment, meant that a ratings downgrade was likely unless the government took additional steps to assist the company. FRBNY officials told us a main rating agency concern was whether AIG could successfully execute its restructuring plan over the multiyear period envisioned. 
Both rating agency executives and FRBNY officials told us they had no contact with one another concerning AIG before September 16. After the establishment of the Revolving Credit Facility, FRBNY officials began to develop a strategy for communicating with the rating agencies to address their concerns. They told us that they implemented this approach after the rating agencies contacted them in the week following September 16, 2008, seeking to understand what the government had planned. FRBNY officials also said there was a rating agency concern that the FRBNY loan was senior to AIG’s existing debt. As a result, according to the officials, it became clear early that the rating agencies would play a key role, because further downgrades would have a serious impact on AIG and cause further harm to financial markets. In response to rating agency issues, officials said they provided information about the Revolving Credit Facility in the 2 weeks following authorization of the lending, but not about AIG or potential future government plans. FRBNY engaged three advisors to develop its strategy for rating agency communications. As part of the effort, FRBNY officials began participating in discussions between AIG and the rating agencies about the implications of government assistance on AIG’s ratings. FRBNY officials told us they generally met with AIG and rating agencies together, but that officials had some independent discussions with the rating agencies, along with a Treasury official, to confirm details of federal plans to assist the company. These separate sessions were not, however, related to what AIG itself was doing or intended to do, the officials said. In general, interacting with rating agencies in this way was new for FRBNY officials, who told us they were concerned that talking to the rating agencies without AIG present could influence the ratings without allowing AIG to have any input. 
They also noted that the proper relationship was between the rating agencies and the company, as FRBNY was not managing AIG. FRBNY officials said that they viewed the rating agencies as a limiting factor in considering options but not necessarily a driving force, as restructuring efforts focused on stabilizing AIG and not necessarily on preventing a downgrade. AIG’s business partners, brokers, and bank distribution channels had concerns about the company’s ratings, because a specified credit rating can be required to transact business, the officials said. But FRBNY’s policy objective was to prevent a disorderly failure of AIG, and FRBNY officials said they did not believe that would have been possible if AIG was downgraded to the levels rating agencies were considering. The rating agencies, FRBNY officials said, were an indicator of how the market would view AIG upon implementation of various solutions. They added that the rating agencies wanted to hear solutions and that the government was flexible and committed to helping AIG but did not wish to participate in decision making. Several rating agencies told us they did not see their role in discussions with AIG executives and FRBNY officials as becoming involved in decision making or management of AIG. Instead, meetings with AIG were standard in nature, whereby the agencies would gather information, react to plans, or share perspectives on potential ratings implications of contemplated actions. Representatives from one rating agency described, for example, meetings at which AIG presented its plans and the agency commented about the potential implications on ratings in general without mentioning a specific rating that would result. Similarly, another agency told us that it would ask questions about options AIG presented but did not offer input or recommendations regarding individual plans. 
The agency added that legal barriers prevented it from suggesting how to structure transactions so that a company could improve its rating. FRBNY officials concurred with the rating agencies' description of their role. They said the agencies did not indicate what they considered acceptable or provide detailed feedback on government plans. To the contrary, FRBNY officials told us that they would have liked for the rating agencies to provide instructions on minimum actions needed to maintain AIG's ratings. But the agencies frequently pointed out that they did not want to be in the position of effectively running the company by passing judgment on various plans. FRBNY officials said that they generally understood the rating agencies' concerns, but did not make specific changes to the restructured Revolving Credit Facility, ML II, or ML III based on rating agency feedback.

FRBNY's Maiden Lane III Design Likely Required Greater Borrowing, and Accounts of Attempts to Gain Concessions From AIG Counterparties Are Inconsistent

After the first extension of federal assistance to AIG—the Revolving Credit Facility—ML III was a key part of the Federal Reserve System's continuing efforts to stabilize the company. We found that in designing ML III, FRBNY decided against plans that could have reduced the size of its lending or increased the loan's security, as it opted against seeking financial contributions from AIG's financial counterparties. We also found that the Federal Reserve Board approved ML III with an expectation that concessions would be negotiated with the counterparties, but that FRBNY made varying attempts to obtain these discounts, which could have been another way to provide greater loan security or to lower the size of the government's lending commitment.
FRBNY officials told us, however, that the design they pursued was the only option available given constraints at the time, and that insistence on discounts in the face of counterparty opposition would have put their stabilization efforts at serious risk. In creating ML III, FRBNY sought to treat the counterparties alike, with each of them receiving full value on their CDO holdings. However, because the circumstances of individual counterparties’ involvement with AIGFP varied, the counterparties’ perception of the value of ML III participation likely varied as well.

Need to Resolve Liquidity Issues Quickly Drove the Federal Reserve’s Decision Making on Maiden Lane III

The financial pressures on AIGFP arose primarily from collateral calls on approximately 140 CDS contracts on 112 mortgage-related, multisector CDOs with $71.5 billion in notional, or face, value for about 20 financial institution counterparties. To address AIGFP’s difficulties, FRBNY had three broad approaches it could take, according to the then-FRBNY President: (1) let AIG default on the CDS contracts that were causing its liquidity problems; (2) continue to lend to AIG, so it could meet its obligations under those CDS contracts; or (3) restructure the CDS contracts to stop the financial pressure. FRBNY chose the third approach, and officials said that in the subsequent design of a specific structure for ML III, time pressure was a key factor. Collateral figured prominently in ML III assistance. Shortly prior to ML III’s creation in November 2008, AIGFP had posted approximately $30.3 billion in collateral to its counterparties. AIG faced the prospect of being required to post still more collateral if there were further declines in the market value of the CDOs being covered, which could have created significant additional liquidity demands for the company. In addressing AIGFP’s liquidity risk from additional collateral calls, FRBNY contracted with financial advisors in September and October 2008.
These advisors, among other things, developed alternatives, forecasted scenarios of macroeconomic stress to be used in decision making, calculated the value of CDOs that would be included in ML III, and helped develop messages to describe ML III to AIG’s rating agencies. According to FRBNY and its advisors, the process of considering options was collaborative, with FRBNY providing guiding principles and direction and the advisors developing detailed designs. FRBNY’s goal was to have a structure in place before AIG’s quarterly earnings announcement on November 10, 2008, when AIG was expected to report a large loss that likely would have resulted in a credit rating agency downgrade, which in turn, would have caused additional CDS collateral calls for AIGFP. FRBNY and its advisors considered three alternatives designed to halt AIGFP’s liquidity drain, each of which contemplated differing funding contributions and payments to AIGFP’s CDS counterparties. As illustrated in figure 3, the alternatives were:

 the as-adopted ML III structure, in which FRBNY loaned and AIG contributed funds to the ML III vehicle;
 a “three-tiered” structure, in which AIG and FRBNY, plus AIGFP’s counterparties, would have contributed funds to the structure; and
 a “novation” structure, in which AIGFP’s CDS contracts would have been transferred to a new vehicle funded by FRBNY, AIG, and collateral previously posted to AIGFP’s counterparties.

The as-adopted structure. Under the as-adopted ML III structure, AIG’s counterparties received essentially par value—that is, the notional, or face, value—for their CDOs (or close to par value after certain expenses). They did so through a combination of receiving payments from ML III plus retaining collateral AIG had posted to them under the company’s CDS contracts. In return, the counterparties agreed to cancel their CDS contracts with AIG.
The as-adopted ML III structure was financed with a $24.3 billion FRBNY loan in the form of a senior note and a $5 billion AIG equity contribution, resulting in an 83/17 percent split in total funding, respectively. ML III used these funds to purchase the CDOs from AIG counterparties at what were determined to be then-fair market values. The AIG equity contribution was designated to absorb the first principal losses the ML III portfolio might incur. The three-tiered structure. Under the three-tiered alternative, the counterparties choosing to participate would have received less than par value for their CDOs. This would have been through a combination of retaining collateral AIG had posted and receiving payment from ML III for the sale of their CDOs, but also making funding contributions to ML III. In return, as in the as-adopted structure, the counterparties would have canceled their CDS contracts with AIG and transferred the CDOs to the structure. The three-tiered structure would have been financed with an FRBNY loan in the form of a senior note and an AIG equity contribution, as in the as-adopted structure, plus loans from AIGFP counterparties in the form of “mezzanine” notes. As under the as-adopted structure, the AIG equity contribution would have absorbed the first principal losses. In contrast to the chosen model, however, the counterparties’ mezzanine contribution would have covered losses exceeding the AIG equity amount. Thus, under the three-tiered option, FRBNY’s loan would have been more secure because it would have had both the AIG and the mezzanine contributions to absorb principal losses. The mezzanine contribution could have reduced the size of FRBNY’s loan to ML III. However, the potential size of FRBNY’s loan under this plan was not known, FRBNY officials told us. It would have depended on the size of the mezzanine contribution and hence the counterparties’ willingness to participate, they said. The novation structure. 
Under this structure, the counterparties choosing to participate would have kept their CDOs, rather than selling them to the ML III vehicle. The CDS protection on the CDOs would have remained, except that losses protected by the CDS contracts would be paid by the ML III vehicle and not AIG. Counterparties would have consented to AIGFP novating, or transferring, their CDS contracts to the vehicle. In return, the counterparties would have received par payment from ML III only if a CDO credit event occurred, such as bankruptcy or failure to pay. The counterparties would also have continued to pay CDS premiums, but to the vehicle rather than to AIGFP, which had initially sold them the protection. The novation structure would have been financed with an FRBNY guarantee; the collateral AIG had previously posted to the counterparties, which the counterparties would have remitted to the vehicle; and an AIG equity contribution. Overall, novation would have meant that the counterparties would not have initially received par value in return for canceling their CDS contracts. Instead, the CDS coverage would have continued. Even assuming that legal issues, discussed in the following section, could have been resolved, FRBNY would have needed to fully fund the vehicle, essentially lending an amount equal to the difference between par value and collateral already posted by AIG to the counterparties, FRBNY officials told us. FRBNY and its advisors identified a number of merits and drawbacks for each of the three ML III options. The as-adopted ML III structure had lower execution risk than the other structures, FRBNY officials told us, meaning there was lower risk that the vehicle would ultimately not be implemented after the parties agreed to terms. It was also the simplest structure. However, it could have required a greater FRBNY financial commitment, and after the AIG equity contribution, there were no other funds contributed to offset potential losses. 
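The funding arithmetic of the as-adopted structure can be sketched in a few lines. This is an illustrative check only: the dollar figures come from the report, but the variable and function names are mine, and the loss waterfall is a simplification of the structure described above (AIG's $5 billion equity absorbs first principal losses ahead of FRBNY's senior note).

```python
# Illustrative sketch of the as-adopted ML III capital structure.
# Figures are from the report; names and the waterfall are simplifications.

FRBNY_LOAN = 24.3   # $ billions, FRBNY senior note
AIG_EQUITY = 5.0    # $ billions, AIG first-loss equity contribution
TOTAL_FUNDING = FRBNY_LOAN + AIG_EQUITY  # $29.3 billion

# The report's 83/17 funding split follows directly from these amounts.
frbny_share = FRBNY_LOAN / TOTAL_FUNDING  # ~0.83
aig_share = AIG_EQUITY / TOTAL_FUNDING    # ~0.17

def allocate_loss(portfolio_loss):
    """Apply a principal loss to the waterfall: AIG's equity absorbs
    losses first; only losses beyond $5 billion reach FRBNY's loan."""
    aig_loss = min(portfolio_loss, AIG_EQUITY)
    frbny_loss = max(0.0, portfolio_loss - AIG_EQUITY)
    return aig_loss, frbny_loss
```

For example, `allocate_loss(2.6)` returns `(2.6, 0.0)`: a $2.6 billion shortfall is borne entirely by AIG's equity, leaving the FRBNY loan whole. Under the three-tiered alternative, a mezzanine layer funded by counterparties would have sat between these two tranches.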
The three-tiered structure, with its counterparty contributions, could have required a smaller FRBNY loan and provided FRBNY greater protection because the counterparty funding would have absorbed any principal losses that exceeded AIG’s equity contribution. This added protection would have been a major benefit in providing more security for the FRBNY loan, according to FRBNY officials, because at the time, financial markets were in turmoil and it was difficult to know when declines would end. However, according to FRBNY, the three-tiered structure would have required complex, lengthy negotiations with the counterparties, including pricing of individual securities in the portfolio. An FRBNY advisor told us those negotiations could have taken a year or longer. The structure also would have required discussion on how potential losses would be shared among the counterparties. Under this option, credit rating agencies might also have had to rate notes issued by ML III to the counterparties, which would have required time. Further, the structure would have created ongoing relationships between counterparties and FRBNY, which an advisor said created the potential for conflicts due to the Federal Reserve System’s supervisory relationships. In particular, FRBNY officials told us, the key feature of the three-tiered structure was that it would have forced the counterparties into a new position: being required to absorb losses on their own assets and perhaps those of other counterparties participating in the vehicle. It would have been a significant undertaking—lengthy negotiations with no assurance of success—to persuade the counterparties to take that risk, the officials said, although they did not have any such discussions with counterparties before rejecting this option. 
However, they told us that they were aware of difficulties in AIG’s efforts to negotiate with its counterparties during this time, and that these negotiations factored into their expectations about the three-tiered option. The novation option could also have reduced the amount of ML III payments made to the counterparties. However, according to FRBNY, the chief factor against novation was that officials did not think they had the legal authority to execute this kind of structure because it likely would not have met the Federal Reserve System’s requirement to lend against value. In addition, according to FRBNY and an advisor, any novation structure would have been complex; would have required counterparty consent, including agreement to give up the collateral if the structure was to be fully funded; could have caused concern among credit rating agencies; and would have required giving up the opportunity for potential future gains in CDO value because the vehicle would not have owned the CDO assets. An advisor also cited concern that a novation structure would drain liquidity from the financial system during a time of market weakness because the counterparties would give up collateral AIG had already provided to them to the new vehicle, where it would no longer be available to the counterparties for their own uses. In all, there would have been considerable execution risk while under great time pressure, FRBNY officials said. FRBNY and its advisors assessed the three structures against their goals of both meeting policy objectives and stabilizing AIG. Policy objectives included lending against assets of value; ensuring that FRBNY funding would be repaid, even in a stressed economic environment; speed of execution; and avoiding long-term relationships with counterparties. 
AIG stabilization objectives included eliminating AIGFP’s liquidity drain stemming from CDS collateral calls while limiting the burden on the company through the contribution AIG would make to ML III. Other stabilization objectives were avoiding accounting rules that would have required AIG to consolidate any ML III structure onto its own books and also enabling AIG to share in potential gains once the federal lending and the company’s equity position were repaid. FRBNY officials told us they ultimately chose the as-adopted ML III structure because it was the only one that worked, given the constraints at the time. According to FRBNY, time to execute was the most important objective, and compared to the other alternatives, the as-adopted ML III structure was simpler, could be executed more quickly, and had lower execution risk. As noted, the value the counterparties received under the as-adopted ML III structure came from two sources—retaining the collateral AIGFP had already posted to them, plus payments from ML III to purchase their CDOs. By the time of ML III in November 2008, much of the collateral the counterparties had received from AIG had been funded with proceeds from FRBNY’s Revolving Credit Facility. Accounting for use of these loan proceeds, of the $62.1 billion in value the counterparties received through the process of establishing the ML III vehicle, about 76 percent came from FRBNY, as shown in table 3. FRBNY officials designed the as-adopted ML III with a focus on three main features: (1) the debt and equity structure of the vehicle, (2) the different interest rates to be used to calculate payments to FRBNY and AIG on their respective contributions, and (3) a division of future earnings between FRBNY and AIG. 
The first key design feature involved establishing the debt and equity structure of the total funding provided to ML III so that the FRBNY loan would be repaid even under conditions of extreme economic stress and so that AIG’s equity contribution would be sufficient to protect the FRBNY loan. The Federal Reserve Board authorized FRBNY to extend a loan of up to $30 billion to ML III, secured with the CDOs that ML III would be purchasing. The actual amount of the loan was $24.3 billion, which, coupled with a $5 billion AIG equity contribution, provided total funding of $29.3 billion to ML III. The allocation between the FRBNY loan and the AIG equity contribution was a balance between providing safety for the loan and knowledge that FRBNY’s previously approved Revolving Credit Facility would fund the AIG contribution, FRBNY officials said. As part of its consideration, FRBNY took into account potentially extreme ML III portfolio losses. During this process, FRBNY directed an advisor to examine a larger AIG contribution than initially proposed, in the interest of providing stronger protection for its loan, and that examination produced the $5 billion figure eventually selected. In November 2008, using three economic stress scenarios, an FRBNY advisor estimated that CDO losses on a portfolio close to what became the ML III portfolio could be 32 percent, 46 percent, and 54 percent of notional, or face, value under a base case; a stress case; and an extreme stress case, respectively. In particular, based on expected losses during extreme stress, our analysis of FRBNY advisor information showed the ML III portfolio was expected to lose 57 percent of its notional value of $62.1 billion, leaving a value of about $27 billion. That amount, however, was still expected to be $2.7 billion greater than FRBNY’s $24.3 billion loan. 
Thus, the stress tests indicated that the CDO collateral held by ML III would be sufficient to protect the FRBNY loan under the extreme stress scenario indicated. Likewise, AIG’s equity contribution of $5 billion to ML III was designed to protect FRBNY’s loan during extreme economic stress. As noted, the equity position absorbs first principal losses in the ML III portfolio. Under the extreme stress case, ML III’s CDO recovery value would be $2.6 billion less than ML III’s total funding, according to our analysis. That is, after the projected loss of 57 percent, as noted previously, the assets would have a value of $26.7 billion. That would be less than the $29.3 billion in ML III funding provided by the combination of FRBNY’s $24.3 billion loan and AIG’s $5 billion in equity financing. However, if such a $2.6 billion shortfall occurred, the loss would be applied first against AIG’s $5 billion equity investment. Thus, the structure would allow AIG’s equity position to provide protection for FRBNY’s loan. Although AIG made an equity contribution to ML III, the company funded its investment using proceeds from the Revolving Credit Facility. FRBNY officials said they knew that AIG would need to borrow to fund its contribution, but they preferred that the company borrow from the Revolving Credit Facility as they did not want AIG to take on expensive debt to make its contribution. Nevertheless, this situation presented FRBNY with a trade-off when determining the size of AIG’s contribution to ML III. On one hand, a higher contribution would have provided more protection to FRBNY. On the other, a higher contribution would have required AIG to borrow more under the Revolving Credit Facility, and officials wanted to minimize use of that facility. FRBNY officials also said they did not want the size of AIG’s contribution to undermine the company if the contribution was entirely lost in a worst-case scenario. 
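The extreme stress arithmetic above can be verified directly. The figures come from the report; the linear calculation below is a rough check, not the advisor's actual stress model, and the variable names are mine.

```python
# Rough check of the extreme stress case: a ~57% loss on the $62.1 billion
# notional still leaves enough value to repay FRBNY's loan, and the
# remaining shortfall is smaller than AIG's first-loss equity.

NOTIONAL = 62.1    # $ billions, ML III portfolio notional value
FRBNY_LOAN = 24.3  # $ billions
AIG_EQUITY = 5.0   # $ billions
TOTAL_FUNDING = FRBNY_LOAN + AIG_EQUITY  # $29.3 billion

EXTREME_STRESS_LOSS = 0.57  # expected loss as a fraction of notional

recovery = NOTIONAL * (1 - EXTREME_STRESS_LOSS)  # about $26.7 billion

# The recovered value exceeds FRBNY's loan, so the loan is covered,
# while the ~$2.6 billion shortfall versus total funding falls entirely
# within AIG's $5 billion first-loss equity.
cushion_over_loan = recovery - FRBNY_LOAN
shortfall = TOTAL_FUNDING - recovery
```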
Our review showed that FRBNY also considered other methods for AIG to fund its contribution, such as a quarterly payments plan or financing the AIG equity contribution with a secured loan from ML III. The second key ML III design feature was the interest rate used to calculate payment on FRBNY’s loan and AIG’s equity contribution. The Federal Reserve Board approved an interest rate on FRBNY’s loan of 1- month London Interbank Offered Rate (LIBOR) plus 100 basis points, with the rate paid on AIG’s equity position set at 1-month LIBOR plus 300 basis points. Proceeds from the ML III CDO portfolio were to be applied first to FRBNY’s senior note until the loan was paid in full and then to AIG’s equity until it was also repaid in full. According to internal correspondence, FRBNY chose LIBOR as the base rate because LIBOR was also the base rate for a number of the assets in the ML III portfolio. As for the add-ons to the base rate, an FRBNY advisor judged the 100 and 300 basis point spreads to be normal market terms a year prior to the financial crisis. In addition, FRBNY officials told us that they wanted to leave open the option of selling the FRBNY loan in the future and thus wanted to include features that might be appealing to a potential future investor. The spread might be attractive to an investor as a form of profit-sharing. The final design feature addressed allocation of residual cash flow—that is, any income received by ML III from CDOs in its portfolio after repayment of the FRBNY loan and the AIG equity contribution. The as- adopted structure split residual cash flows between FRBNY and AIG on a 67 percent and 33 percent (67/33) basis, respectively. As of November 5, 2008, just before ML III was announced, residual cash flows to FRBNY and AIG were estimated to total $31.8 billion and $15.7 billion, respectively, under the base economic scenario. 
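The chosen 67/33 division is consistent with the base-case residual estimates quoted above, as a quick check shows. Dollar figures are from the report; variable names are mine.

```python
# Consistency check: the 67/33 residual cash flow split roughly matches
# the November 5, 2008 base-case estimates of $31.8 billion (FRBNY) and
# $15.7 billion (AIG).

frbny_residual = 31.8  # $ billions, estimated residual cash flow to FRBNY
aig_residual = 15.7    # $ billions, estimated residual cash flow to AIG

total_residual = frbny_residual + aig_residual  # $47.5 billion
frbny_pct = frbny_residual / total_residual     # ~0.67
aig_pct = aig_residual / total_residual         # ~0.33
```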
The division of residual cash flows was determined based on the proportion of funding contributed to ML III and what FRBNY officials deemed would be a fair return for its loan and AIG’s equity position. Table 4 shows the divisions of residual cash flows that FRBNY and its advisors considered based on variations in the size of AIG’s equity contribution, as of October 26, 2008. Under these alternatives, as AIG’s equity position increased, its residual cash flow allocation also increased, but at a disproportionately higher rate. Conversely, as FRBNY’s contribution decreased because AIG would be contributing more, FRBNY’s share of residual cash flow decreased at a higher rate. Another factor that influenced the choice of the residual split was the issue of consolidation of ML III onto AIG’s books. FRBNY requested that one of its advisors determine how much ML III could increase AIG’s allocation of residual cash flows before consolidation became an issue. FRBNY officials said they determined that FRBNY would need to take at least a 55 percent share of the residual cash flows to avoid AIG having to consolidate. That, however, would have provided a 45 percent share for AIG, which in turn would have produced an extraordinarily high rate of return on the company’s $5 billion contribution, FRBNY officials told us. As a result, FRBNY chose the 67/33 division, which also had the advantage of being a more conservative position for the FRBNY loan. Rating agency concerns also played a role in the allocation of the residual cash flows, according to FRBNY officials. The agencies told FRBNY that in assessing AIG for rating purposes, they would have concerns if there was no benefit for the company via the residual cash flow, because that could leave the company in a weaker position. FRBNY officials told us they viewed the rating agencies’ position as a constraint to be considered in their design, along with such factors as tax considerations and market perceptions. 
As a result, FRBNY included a residual share for AIG, although officials said that was not necessarily for the sake of the rating agencies alone. According to advisor estimates as of November 5, 2008, FRBNY could have expected to receive an additional $15.7 billion in residual cash flows had it decided not to provide AIG with a share. In general, according to FRBNY officials, they were not looking to earn large returns from the residual earnings. Instead, they said their primary interest was ensuring FRBNY would be repaid even in a highly stressed environment, while also seeking to stabilize AIG. The primary driver of repayment was the size of the AIG first-loss contribution. FRBNY wanted a bigger first-loss piece, to protect its loan, and in return, was willing to provide AIG with a bigger share of the residual earnings. Although the 67/33 split favored FRBNY, its focus was not on the residual earnings per se, officials told us. As part of the ML III process, ML III and AIGFP also executed another agreement, known as the Shortfall Agreement, under which ML III transferred about $2.5 billion to AIGFP. This amount was based on what FRBNY officials described as excess collateral that AIGFP had posted to the counterparties, based on fair market values determined for the CDOs in question. As described later, a portion of the Shortfall Agreement became an issue with AIG securities filings and disclosure of information about AIG counterparties participating in ML III.

While the Federal Reserve Expected Concessions Would Be Negotiated, Accounts of FRBNY’s Attempts to Obtain Them Are Inconsistent

The Federal Reserve Board authorized ML III with an expectation that concessions, or discounts, would be obtained on the par value of AIGFP counterparties’ CDOs. Our review found that FRBNY made varying attempts to obtain concessions and halted efforts before some of the counterparties responded to the Bank’s request for the discounts.
The counterparties opposed concessions, we found, and FRBNY officials told us that insistence on discounts in the face of that opposition would have put their stabilization efforts at serious risk. The business rationale for seeking concessions from AIG’s CDS counterparties was similar to the logic for the option—not adopted—of having counterparties contribute to the three-tiered ML III structure—namely, to provide an additional layer of loss protection for FRBNY’s ML III loan. Some Federal Reserve Board governors also raised concerns that the counterparties receiving par value on CDOs could appear too generous, noting that the counterparties would receive accounting benefits from the transaction and no longer be exposed to AIG credit risk. Concessions would be a way for the Federal Reserve System to recover some of the benefits the counterparties had obtained through its intervention in AIG. According to FRBNY officials, discounts were justified because the counterparties would benefit from participation in ML III, while at the same time, such concessions would better protect FRBNY’s risk in lending to the vehicle. Under ML III, the theory of concessions was that counterparties would be relieved of a risk early and be provided additional funding they would not otherwise get. Because the counterparties themselves were facing a risky partner in AIG, they should have been willing to accept concessions, officials told us. In particular, according to FRBNY and an advisor, ML III could have benefited AIG CDS counterparties in several ways:

 Liquidity benefits. The counterparties would receive ML III cash payments immediately for purchase of their CDOs.
 Financial statement benefits. Sale of CDOs would allow release of any valuation reserves previously booked in connection with the CDS transactions, which reflected potential exposure to AIG. Upon cancellation of AIG’s CDS contracts, the counterparties would no longer need to hold reserves against these exposures, and the reserves could be released into earnings.
 Capital benefits. The counterparties would receive a capital benefit by reducing risk-weighted assets on their balance sheets.
 Risk of future declines in value. By participating in ML III, counterparties would avoid the risk of exposure to AIG on potential future declines in the value of CDOs protected by the company’s CDS contracts.

In addition, we identified other potential benefits of counterparty participation in ML III. According to our review, before the government’s intervention, AIG and some of its CDS counterparties collectively had billions of dollars of collateral in dispute under the CDS contracts. Sale of the CDOs and termination of the CDS contracts would eliminate those disputes and their cost. Also, some counterparties had obtained hedge protection on their CDS contracts with AIG. Likewise, termination of the contracts would eliminate the costs of that protection. Prior to discussions with counterparties on concessions, FRBNY asked an advisor to estimate potential concession amounts. The advisor developed three scenarios, with total concessions ranging from $1.1 billion to $6.4 billion, representing 1.6 percent to 9.6 percent of CDO notional value. Individual counterparty discounts ranged from $0 to $2.1 billion (see table 5). The advisor also prepared an analysis of factors seen as affecting individual counterparties’ willingness to accept discounts. For instance, the analysis identified one counterparty as resistant to deep concessions because a significant portion of its portfolio was high quality with little expectation of losses. At the time the Federal Reserve Board authorized ML III, the understanding was that concessions would be negotiated with the counterparties. We found differing accounts of the request for, and consideration of, counterparty concessions.
FRBNY officials told us that they made a broader outreach effort to the counterparties, while counterparties described a more limited effort. FRBNY officials told us that in seeking concessions, they contacted 8 of the 16 counterparties, representing the greatest exposure for AIG, in discussions on November 5 and 6, 2008. According to FRBNY officials, their initial calls were typically made to the chief executive officers or other senior management of the counterparty institutions. In the initial calls, FRBNY officials explained the ML III structure generally, and the institutions identified the appropriate internal contacts for detailed discussions. FRBNY officials said that they conveyed a sense of urgency about working out pricing details and concessions. FRBNY officials said that counterparties’ initial reactions to these requests were negative, and that FRBNY officials asked the counterparties to reconsider. After the initial contacts, some counterparties called FRBNY to obtain more information on the transaction, but these conversations did not include concessions, according to the officials. FRBNY gave the counterparties until the close of business Friday, November 7, to make an offer. Only one of the eight counterparties indicated a willingness to consider concessions and provided a concession offer, FRBNY officials told us. This willingness was conditioned on all other counterparties agreeing to the same concession, the counterparty told us. Counterparties we spoke with provided a different account of FRBNY’s effort to obtain concessions. As a starting position, they generally said they opposed a request for concessions because their CDS contracts gave them the right to be paid out in full if CDOs defaulted. As a result, they said they had no business case to accept less than par. Counterparties also cited responsibilities to shareholders, saying that accepting a discount from par would run counter to these duties. 
According to our interviews with 14 of the 16 counterparties, FRBNY appears to have started the process of seeking discounts with attempts of varying degrees of assertiveness to obtain concessions from five counterparties. In particular, according to our interviews, FRBNY requested a discount from two counterparties, which said they needed to consult internally before replying. These two counterparties said that FRBNY implied they might not receive financial crisis assistance or discount window access in the future if they did not agree to a discount. FRBNY officials disputed these accounts. However, FRBNY made contact soon afterward seeking to execute an ML III agreement without a discount, and FRBNY officials did not provide any explanation for their change in position, according to the counterparties we interviewed. Our interviews also indicated that FRBNY requested a “best offer” of a discount from two other counterparties and briefly raised the prospect of a discount with another counterparty, before similarly withdrawing its requests with little or no explanation. Before that, one of the counterparties that had been asked to make an offer told us it was still considering a range of possible discounts. The other said that it told FRBNY it would accept a 2 percent concession, but at that point, FRBNY officials told the counterparty they had decided against concessions and that they would provide par value instead. The remaining counterparties we contacted indicated that FRBNY did not seek concessions from them. According to FRBNY officials, however, the same message had been delivered to each counterparty contacted. Similarly, the former FRBNY President said in congressional testimony that a majority of all 16 counterparties had rejected concessions. Following discussions with counterparties, the then-FRBNY President and Federal Reserve Board Vice Chairman, upon staff recommendation, decided to move ahead with ML III without concessions.
In making the recommendation on the evening of Friday, November 7, 2008, FRBNY officials described the challenges to obtaining concessions and their concerns about continued negotiations. FRBNY officials told us that taking additional time to press further for discounts could risk not reaching agreement on the ML III transaction by the target date of November 10, 2008. The cost of not being able to announce the transaction as planned, coupled with a resultant credit rating downgrade, would have been greater than the amount of any concessions achievable in the best case, they said. Although FRBNY did not continue to pursue concessions, officials told us that ML III was nevertheless designed to allow repayment of the FRBNY loan under extreme economic stress without them. Therefore, FRBNY officials told us they were comfortable moving ahead without concessions. The former FRBNY President said that officials could not risk lengthy negotiations in the face of a severe economic crisis, AIG’s rapidly deteriorating position, and the prospect of a credit rating downgrade. Counterparties approached for a concession told us that once FRBNY dropped the request for a discount, they agreed to par value, and the transactions moved forward as final details were resolved. Federal Reserve Board officials told us that although the expectation was that concessions would be obtained, securing such discounts was not a requirement at the time ML III was authorized. According to FRBNY officials and records we reviewed, there were a number of reasons FRBNY decided not to pursue concessions:  Participation in ML III was voluntary, and coercing concessions was inappropriate, given the Federal Reserve System’s role as regulatory supervisor over a number of the counterparties.  There was no coherent methodology to objectively evaluate appropriate discounts from par.  Getting all counterparties to agree to an identical concession would have been a difficult and time-consuming process. 
• Consistency was important, both to maximize participation and to make clear that FRBNY was treating the counterparties equally.

• Lengthy negotiations would have been a challenge for executing ML III over 4 days by the November 10 target.

• FRBNY had little or no bargaining power given the circumstances. The attempts at concessions took place less than 2 months after the Federal Reserve System had rescued AIG, and the counterparties expected that the government would not be willing to put the credit it had extended to the company in jeopardy.

FRBNY officials said in congressional testimony that the probability of the counterparties agreeing to concessions was modest. Even if they had agreed, FRBNY did not expect them to offer anything more than a small discount from par. FRBNY officials told us that, setting aside any attempts to coerce concessions, the economic basis for concessions was relatively modest because AIG had been providing the counterparties with collateral. Thus, any exposure of the counterparties upon an AIG default would have been low compared to the notional size of the CDS transactions. Because some of the counterparties were French institutions, French law also entered into concession considerations. FRBNY officials told us that FRBNY had contacted French regulators for assistance, but that the French regulators opposed concessions. Also at issue was whether French law permitted discounts. FRBNY officials said that the French regulator was forceful in saying concessions were not possible under French law, and the former FRBNY President has testified that the French regulator unequivocally told FRBNY officials that under French law, absent an AIG bankruptcy, the French institutions were prohibited from voluntarily agreeing to accept less than par value. FRBNY officials told us that they did not conduct any legal analysis of the French law question.
Nevertheless, whatever an analysis might have determined, if the French regulator was not willing to support its institutions accepting concessions, then concessions would not be possible, FRBNY officials told us. Given the desire for consistent treatment of the counterparties, the French opposition effectively prevented concessions, the officials said. However, in congressional testimony, the then-FRBNY President said legal issues faced by the French institutions were not the deciding factor. A French banking official offered a different view to us. The official declined to discuss conversations with Federal Reserve System officials, citing French secrecy law. In general, though, the official provided a more nuanced explanation of French law’s treatment of any concessions than that cited by the former FRBNY President. According to the French banking official, there could be legal liability if an institution accepted a discount, with liability depending on individual facts and circumstances and a key consideration being whether any discount involved all creditors. In addition, one French institution told us its research indicated French law would not have been a factor in concessions.

While FRBNY Sought to Treat Counterparties Alike, the Perceived Value of Maiden Lane III Participation Likely Varied Among Counterparties

In establishing ML III, FRBNY sought to broadly include the AIGFP counterparty CDOs from the portfolio that was creating liquidity risk for AIG, because the more that were included, the greater the liquidity relief for the company. For various reasons, however, not all such CDOs were acquired for inclusion in ML III. In acquiring CDOs for ML III, FRBNY focused on the counterparties receiving the same total value as a way to ensure equal treatment, without which officials said ML III would not have been successful.
Specifically, ML III paid counterparties an amount determined to be the fair market value of their CDOs, while the counterparties also retained collateral AIG had posted with them under terms of the CDS contracts being terminated. The sum of these two amounts was roughly equal to par value of the CDOs. Although FRBNY applied this equal treatment approach consistently, the perceived value of benefits derived from ML III participation likely varied because the circumstances of individual counterparties varied. FRBNY officials agreed there were differences among counterparty positions, but they said the most important consideration was the overall value provided and that taking account of individual circumstances would have been unfeasible and too time-consuming given the time pressure of addressing the financial crisis. To select the CDOs to be purchased for inclusion in ML III, FRBNY reviewed a list of CDOs protected by AIGFP CDS contracts. FRBNY’s focus was multisector CDOs because these securities were subject to collateral calls and were one of the main sources of AIG’s liquidity pressure. FRBNY officials told us their strategy was for ML III to acquire a large volume of CDOs from AIGFP’s largest counterparties so as to attract other counterparties to participate. In addition, the concern was that without the largest counterparties’ participation, ML III would not have been successful. FRBNY officials said, however, that no formal analysis was conducted to determine a specific CDO acquisition target amount that would produce ML III success. Ultimately, about 83 percent by notional value, or $62.1 billion of about $74.5 billion in CDOs, were sold to ML III, according to information from an FRBNY advisor. CDOs that ML III did not purchase were excluded due to decisions by both FRBNY and counterparties. FRBNY did not include “synthetic” CDOs due to questions of practicality and legal authority. 
It excluded synthetics because they might not have met the Federal Reserve System’s requirement to lend against assets of value, given that they were not backed by actual assets. According to an FRBNY advisor, excluded synthetics totaled about $9.7 billion in notional value. AIG counterparties decided to exclude certain CDO assets for financial and operational reasons. They elected to exclude euro-denominated trades with a total notional value of $1.9 billion after the trades were converted to dollars. For example, one counterparty told us that it elected not to participate with some of its holdings because movement in foreign exchange rates would have caused a loss, based on FRBNY’s structuring of the transaction. Additionally, another counterparty told us that $500 million in assets were not included because the counterparty did not have the underlying bonds and could not get them back for delivery to ML III. To obtain the agreement of AIG counterparties to participate in ML III, FRBNY sought to treat the counterparties consistently by providing each, through the ML III structure, with essentially par value on their CDO holdings, FRBNY officials told us. This value—for selling their CDOs and terminating their AIG CDS contracts—was based on the sum of two parts: (1) fair market value of the CDOs as determined shortly before ML III acquired them and (2) collateral that AIG had posted with the counterparties, which the counterparties retained. Under this structure, ML III itself did not pay par value for the CDOs it acquired. Rather, it paid fair market value, which at the time was below the initial, or notional, values of the CDOs. FRBNY officials told us that providing the counterparties with essentially par value based on these two components was important to achieving the objective of broad counterparty participation. 
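The two-part structure described above can be restated as a short calculation. This is an illustrative sketch, not FRBNY's own methodology, and the dollar figures below are hypothetical rather than drawn from any actual counterparty; it only shows how ML III's fair-market-value payment and the collateral a counterparty retained summed to roughly par.

```python
def counterparty_consideration(par_value, fair_market_value, collateral_retained):
    """Total consideration a counterparty received for selling its CDOs to
    ML III and terminating its AIG CDS contracts: ML III's cash payment at
    fair market value plus the AIG-posted collateral the counterparty kept.
    Returns the total and its shortfall from par (zero when made whole)."""
    total = fair_market_value + collateral_retained
    return total, par_value - total

# Hypothetical counterparty: $1.0 billion par CDO marked at $550 million
# fair market value, holding $450 million of AIG collateral.
total, shortfall = counterparty_consideration(
    par_value=1_000_000_000,
    fair_market_value=550_000_000,
    collateral_retained=450_000_000,
)
# The sum equals par, so the shortfall is zero.
```

Note that under this structure ML III's own outlay is only the fair-market-value component; the collateral component had already been posted by AIG before the transaction.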
They said that if counterparties had thought they were getting different arrangements, they would not have elected to participate in ML III, and FRBNY would not have achieved its goal of liquidity relief for AIG. The decision to provide the counterparties with essentially par value for selling their CDOs and terminating their CDS protection on them, rather than providing a lower level of compensation, was based on making them whole under terms of their CDS contracts, FRBNY officials told us. Because AIG had guaranteed the notional, or par, value in those CDS contracts, FRBNY officials said it was appropriate to provide essentially par value to the counterparties, which reflected the market value of the covered CDOs plus the value of AIG’s CDS protection on those securities. FRBNY officials explained that underlying their approach was the assumption that AIG would have been able to make good on its CDS obligations. For the counterparties, the risk of AIG failing to fulfill its CDS obligations had two elements: first, that AIG could not pay out on the contracts if CDOs protected by the company were unable to repay all principal and interest due at maturity, and second, that AIG could fail to make collateral postings as required under the CDS contracts. According to FRBNY officials, of the two, failing to post collateral was the more important risk because under the CDS contracts, AIG would not have been required to make payouts following default on any principal balances until the maturity of the CDOs, which could be years into the future. On the other hand, a failure by AIG to post collateral when required would have represented a more immediate dishonoring of its CDS contracts.
FRBNY officials told us that the assumption underlying their approach for providing par value—that AIG would make good on its CDS obligations— was appropriate because there was no realistic concern among the counterparties that AIG, with its recent government support, would fail to honor its CDS obligations. However, some counterparties we spoke with said that when ML III was created, they did have concerns that AIG would not be able to fulfill its CDS guarantees. For example, one counterparty told us that it believed there was still a risk of losses based on an AIG default because posting of collateral mitigated risk but did not eliminate it. Another counterparty said that providing par value was attractive because it provided an exit to a position it viewed as risky. In addition to concerns that counterparties had about AIG’s ability to honor its CDS contracts, market indicators at the time showed newly elevated concern about AIG’s health. This can be seen in the cost of obtaining CDS protection on AIG itself. On November 7, 2008, the last business day before the announcement of ML III and other assistance on November 10, 2008, premiums on CDS protection on AIG were near the level reached on September 16, 2008, when the company was on the verge of failure. Reflecting market perceptions of AIG’s financial health, the premium costs on November 7 were about 43 times higher than the cost at the start of the year. Although FRBNY used the same approach in acquiring CDOs from all the counterparties, the counterparties’ perception of the value of ML III participation likely varied, according to FRBNY officials and analysis that we conducted. FRBNY officials said that counterparties’ circumstances differed based on factors such as size of exposure to AIG, methods of managing risk, and views on the likelihood of continued government support for AIG. 
As a result, counterparties would have perceived different benefits and value from participating in ML III, FRBNY officials said. The ML III combination of the market value of the purchased CDOs and collateral retained had different value to different counterparties, which might have created different desires to participate, they said. In addition, there are other ways that counterparties might have been differently situated before agreeing to participate in ML III. In particular, we examined (1) the degree to which the counterparties had collected collateral under their CDS contracts following declines in the value of their CDO holdings and (2) the counterparties’ credit exposure to AIG based on the quality of the CDO securities they held. Differences in collateral collected under CDS contracts. FRBNY officials told us that the measure of a counterparty’s exposure to AIG was the amount of decline in CDO value that had not been offset by AIG’s posting of collateral under its CDS contracts. For example, if two counterparties each had $1 billion in CDOs and each group of CDOs had lost $400 million in value, each counterparty would expect AIG to post collateral to offset the loss in value. But if one counterparty had collected the entire $400 million while the other had collected only $200 million, the first counterparty would have fully collateralized its exposure, while the second counterparty would have had uncollateralized exposure to AIG. We found that prior to ML III, the counterparties had widely varying uncollateralized exposure to AIG. Figure 4 shows each counterparty’s uncollateralized exposure to AIG as of October 24, 2008, shortly before ML III was announced. For each counterparty, it shows the percentage of the loss in CDO value that had been covered by collateral collected from AIG. Collateral posted included payments that AIG had made to its counterparties using proceeds from the Revolving Credit Facility provided by FRBNY in September 2008. 
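The worked example above can be expressed as a simple calculation. This is an illustrative sketch using the figures from the text (a $400 million decline in CDO value, with $400 million of collateral collected by one counterparty and $200 million by the other), not the methodology FRBNY or our analysis actually used.

```python
def collateralization_pct(cdo_value_decline, collateral_collected):
    """Share of the decline in CDO value offset by collateral posted by AIG,
    as a percentage; values above 100 indicate more collateral collected
    than value lost (as some counterparties shown in figure 4 had)."""
    return 100.0 * collateral_collected / cdo_value_decline

def uncollateralized_exposure(cdo_value_decline, collateral_collected):
    """Remaining exposure to AIG after netting collateral, floored at zero."""
    return max(cdo_value_decline - collateral_collected, 0.0)

# Both hypothetical counterparties lost $400 million in CDO value.
fully_covered = collateralization_pct(400e6, 400e6)   # first counterparty
half_covered = collateralization_pct(400e6, 200e6)    # second counterparty
exposure = uncollateralized_exposure(400e6, 200e6)    # second counterparty's
                                                      # remaining AIG exposure
```

Applied to the actual range reported above (about 44 percent to about 197 percent collateralization), the same arithmetic shows why the counterparties' uncollateralized exposures to AIG varied so widely before ML III.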
For example, as shown in the figure, as of October 24, a number of counterparties were at or near full collateralization, as collateral posted was at or near 100 percent of the decline in CDO values. Some of the counterparties had actually collected more collateral than value lost. Others, however, had collected less than half the CDO value lost. In all, the amounts collected varied by more than a factor of four, ranging from a low of about 44 percent to a high of about 197 percent. We found the same pattern of differences among the counterparties when considering total collateral requested by each counterparty, not all of which AIG may have posted. FRBNY officials offered several caveats for our analysis but agreed with the basic methodology of comparing collateral posted to loss in CDO value. They said that overall, despite what collateral postings might have been at a particular point, the collateral posting process was working as intended, and amounts posted grew in advance of the announcement of ML III. An issue factoring into the collateral situation was disputes over the amount of collateral AIG should have posted with its counterparties. Collateral postings were based on declines in CDO values, and there were disagreements over what the proper valuations should be. To the extent that lower valuations (more CDO value lost) produced greater collateral postings, counterparties had an interest in seeking lower valuations. Similarly, to the extent that higher valuations (less CDO value lost) meant smaller collateral postings, AIG had an interest in seeking higher valuations. According to information we reviewed, on a CDO portfolio of $71 billion (a preliminary portfolio somewhat different from the final ML III portfolio), AIG and its counterparties had valuation differences totaling $4.3 billion. Among a group of 15 counterparties, 9 had valued their assets differently than AIG. 
FRBNY officials told us they viewed the amount of collateral in dispute as relatively minor, but counterparties told us they viewed disputed amounts as significant. Varying AIG exposure due to credit quality of underlying assets. Analysis conducted by an FRBNY advisor indicated that CDOs the counterparties sold to ML III were expected to incur widely varying losses in value during periods of economic stress. These differences arose from the varying quality of assets underlying the CDOs. FRBNY officials stressed to us that such differences in quality were reflected in the fair market value that ML III paid for the CDOs and that counterparties held collateral based on declines in CDO values. From the perspective of individual counterparties, these differences illustrate dissimilar circumstances among the counterparties in the time before ML III was established. Figure 5 shows, in descending order, that the amount of value expected to be lost in each counterparty’s CDO portfolio during extreme economic stress ranged from a high of 75 percent to a low of 1 percent. Eleven of the 16 counterparty CDO portfolios were expected to lose at least 50 percent of their value during such periods of extreme stress. FRBNY’s advisor estimated, for instance, that counterparty 1’s CDO holdings would lose 75 percent of their notional value during extreme stress. By contrast, counterparty 16’s CDO portfolio was projected to lose only 1 percent of its value. The advisor’s analysis also indicated a wide range of expected losses for the base and stress economic cases. For the base case, projected losses ranged from 0 percent to 52 percent of CDO portfolio value. For the stress case, expected losses ranged from 0 percent to 67 percent. Another indicator of differing asset quality can be seen in widely varying credit ratings among the CDOs that counterparties sold to ML III. An FRBNY advisor examined CDO credit ratings, grouping them into 11 categories. 
Figure 6 focuses on 3 of those 11 categories, showing the percentage of each counterparty’s holdings that fell into the highest-, middle-, and lowest-rated groupings. In general, the analysis shows a relatively level amount of assets in the middle-rated category, with variation in the highest- and lowest-rated categories. For example, counterparty 5 had about 40 percent of its holdings in the highest-rated category, with about as much in the lowest-rated group. But counterparty 15 had about twice as much in the highest category as in the lowest. One counterparty had 98 percent of its CDO portfolio in the top rating category, while another had none. Eleven counterparties’ CDO portfolios contained “nonrated” positions, which meant that the credit quality of those assets was unknown and their risk potentially higher. All else being equal, CDOs with lower credit ratings would be expected to produce higher losses compared to more highly rated positions. In addition, the FRBNY advisor noted differences among the counterparties’ situations shortly before ML III was announced. For example, according to records we reviewed, the advisor noted that in a nonstressed economic environment, one counterparty’s portfolio was of higher quality, and that the counterparty expected there would be recoveries in value of the assets. For another counterparty, the advisor noted that its portfolio, overweighted with subprime assets, was forecast to experience higher losses in all economic scenarios, and disproportionately worse performance under extreme stress. In another case, the advisor noted that based on the counterparty’s situation, it would likely have been satisfied with its position without ML III participation. Another difference among AIG counterparties’ positions prior to their participation in ML III was that some had obtained hedge protection on AIG generally or had obtained protection specifically on their AIG CDS positions.
Therefore, their overall risk posture was different from that of counterparties that had not obtained such hedge protection. FRBNY officials told us they agreed that the counterparties and their CDO holdings were not similarly situated. The officials said that the counterparties generally started out in similar positions, where each had CDS protection on the notional, or par, values of their CDO holdings. As the financial crisis intensified, the value of the CDOs declined, some more than others, and as a result, the counterparties’ relative positions diverged. The crisis was the differentiator, they said. As the value of the underlying assets changed, the value of AIG’s CDS protection became different, the officials said. Despite the counterparties’ dissimilar situations, FRBNY officials said the goal was to make sure the counterparties agreed to terminate their CDS contracts in order to stem liquidity pressure on AIG, and the approach they took, based on par value, was the best way to accomplish this given constraints at the time. They said that while some underlying CDOs may have been of differing quality, these CDOs also had the benefit of AIG’s CDS protection, which promised to protect their value. The counterparties’ differing situations and varying perceptions of the benefit of ML III participation might have offered an opportunity to lower the amount FRBNY lent to ML III if FRBNY had been able to negotiate individually with the counterparties based on their individual circumstances. However, FRBNY officials told us that trying to negotiate tailored agreements by counterparty would have been unworkable and too time-consuming given the pressure of the financial crisis. According to the officials, trying to determine the economic implications of each counterparty’s position would have been speculative, as different parties would have made different arguments about the costs or benefits of the ML III transaction based on their individual circumstances.
Further, they said that taking note of such positions would have led to different deals with different parties on the basis of how each had chosen to manage risk. While negotiations might have been possible, they would have been long and complicated, and there was no time for such talks. In reaching agreement with the AIG counterparties on ML III, FRBNY provided counterparties with varying opportunities to negotiate some terms. FRBNY officials said that after the first set of eight counterparties agreed to participate in ML III on the par value basis, FRBNY provided transaction documents to them and then negotiated some details with them. Over the course of the weekend preceding November 10, 2008, ahead of the release of AIG’s quarterly earnings report, FRBNY had separate conversations with the eight counterparties representing the most significant exposure for AIG. FRBNY officials told us that these counterparties had the opportunity to suggest amendments to contract language, and FRBNY incorporated some of their comments into the final contracts. According to FRBNY and counterparties we spoke with, the negotiated items generally involved clarifications and technical items, not material economic terms. While ML III was, in principle, an easy transaction to describe, there were important details to be worked out, involving such matters as timing and delivery of the CDOs at issue, FRBNY officials told us. After agreements were reached with the first group, FRBNY contacted the next group of counterparties, whose holdings FRBNY officials said were not significant compared to those of the first group. FRBNY officials told us that ML III needed to have the same contract with all the counterparties. According to our interviews, counterparties in the second group asked for changes, but FRBNY declined. For example, one counterparty told us it wanted to make procedural changes and clarify certain terms.
FRBNY would not do so, saying that other counterparties with larger exposures had already commented on the terms. FRBNY made clear it was up to the counterparty to decide whether it wanted to engage on the terms offered, executives of the counterparty told us. Our review also identified at least one instance where a counterparty in the first group of eight was allowed to amend contract language after signing ML III agreements. FRBNY characterized the changes as technical and clarifying.

The Federal Reserve’s Actions Were Generally Consistent With Existing Laws and Policies, but They Raised a Number of Questions

The actions of the Federal Reserve System in providing several rounds of assistance to AIG involved a range of laws, regulations, and procedures. First, we found that while the Federal Reserve Board exercised its broad emergency lending authority to aid AIG, it did not make explicit its interpretation of that authority and did not fully document how its actions derived from it. Second, after government intervention began, FRBNY played a role in the federal securities filings that AIG was required to make under SEC rules. We found that although FRBNY influenced AIG’s filings, it did not direct the company’s decisions about what information to file for public disclosure about key details of federal aid. Finally, in providing assistance to AIG, FRBNY implemented vendor conflict-of-interest procedures similar to those found in federal regulations, but granted a number of waivers to conflicts that arose. In addition, we identified a series of complex relationships involving FRBNY, its advisors, AIG counterparties, and service providers to CDOs in which ML III invested that grew out of the government’s intervention.
The Federal Reserve Exercised Its Broad Emergency Lending Authority to Aid AIG but Did Not Fully Document Its Decisions

When the Federal Reserve Board approved emergency assistance for AIG beginning in September 2008, it acted pursuant to its authority under section 13(3) of the Federal Reserve Act. At the time, section 13(3) authorized the Federal Reserve Board, in “unusual and exigent circumstances,” to authorize any Reserve Bank to extend credit to individuals, partnerships, or corporations when the credit is endorsed or otherwise secured to the satisfaction of the Reserve Bank, after the bank obtained evidence that the individual, partnership, or corporation was unable to secure adequate credit accommodations from other banking institutions. The Reserve Bank making the loan was to establish the interest rate in accordance with section 14(d) of the Federal Reserve Act, which deals with setting of the Federal Reserve discount rate. In authorizing assistance to AIG, the Federal Reserve Board interpreted its broad authority under section 13(3) as giving it significant discretion in satisfying these conditions. The statute does not define “unusual and exigent circumstances,” and, according to our review, the Federal Reserve Board believes it has substantial flexibility in assessing whether such circumstances exist. The statute also does not define an inability “to secure adequate credit accommodations from other banking institutions” or set forth any standards for Reserve Banks to use in making this determination. As a result, Federal Reserve Board staff have stated that the Federal Reserve Board would be accorded significant deference in defining this standard.
The Federal Reserve Board notes that its Regulation A—which governs extensions of credit by Reserve Banks, including emergency credit—does not require any specific type of evidence and bases the finding about credit availability on the “judgment of the Reserve Bank.” As noted, the statute authorizes Reserve Banks engaging in section 13(3) emergency lending to establish interest rates in accordance with section 14(d) of the Federal Reserve Act. Section 14(d), which authorizes Reserve Banks to establish rates for discount window lending, is implemented by Regulation A. Federal Reserve Board staff have stated that while Regulation A contains provisions relating to the rate for emergency credit from Reserve Banks, these provisions do not limit its power to authorize lending under section 13(3) in other circumstances and under other limitations and restrictions. The Federal Reserve Board’s rationale is that section 13(3) further allows it to authorize a Reserve Bank to extend credit to an individual, partnership, or corporation “during such periods as the said board may determine” and “subject to such limitations, restrictions, and regulations as the said board may prescribe.” As a result, the Federal Reserve Board has stated that it has complete statutory discretion to determine the timing and conditions of lending under section 13(3). Federal Reserve Board officials told us that the interest rate the Reserve Bank recommends to the Federal Reserve Board is based on the facts and circumstances of a particular instance of lending, and that the rate need not be the discount rate itself. Section 14(d) has never been viewed as linking the interest rate on section 13(3) lending to the then-prevailing discount rate, a Federal Reserve Board official told us. The Federal Reserve Board views the section 14(d) rate-establishing provision as procedural, an official told us, because the Reserve Bank extending the loan proposes the rate and the Federal Reserve Board must approve it.
The official said that more analysis on rates takes place at the Reserve Bank level than at the Federal Reserve Board. Factors taken into account when setting rates include risk and moral hazard. For example, one FRBNY official described the Revolving Credit Facility as being akin to debtor-in-possession financing—that is, it has a high interest rate, aggressive restrictions on AIG’s actions, a short term, and a substantial commitment fee. These features were consistent with section 13(3), the official said, because if a loan is risky, there must be sufficient protection for the Reserve Bank making it. Section 14(d) also directs that rates be set “with a view of accommodating commerce and business.” Federal Reserve Board officials told us their view is that if the section 13(3) requirements for such factors as unusual and exigent circumstances and inability to obtain adequate financing from other banking institutions are met, then the section 14(d) directive of “with a view of accommodating commerce and business” is automatically satisfied. Rates on the Federal Reserve Board’s section 13(3) lending to aid AIG have varied, as shown in examples in table 6. Internal correspondence we reviewed discussed an FRBNY rationale for setting interest rates, noting that different rates could be expected based on the approach officials were taking. Under this approach, FRBNY set rates for its lending to SPVs that provided assistance to AIG according to risk and matching of the interest rate to characteristics of assets that were related to a particular loan. For example, FRBNY loan facilities held securities with floating rates that paid interest monthly based on the 1-month LIBOR rate. Hence, officials concluded that using the 1-month LIBOR rate as a base for the interest rates associated with emergency loans to those facilities was appropriate. In other cases, considerations were different. 
For the restructuring of the Revolving Credit Facility, the rationale for reducing the interest rate included stabilizing AIG, boosting its future prospects, and satisfying credit rating agency concerns. For the final, unused emergency lending facility, which dealt with securitizing cash flows from certain insurance operations, the rationale advanced was AIG’s ability to pay. The statute also does not impose requirements on the amount or type of security obtained by a Reserve Bank for section 13(3) lending, other than requiring that the loan be secured “to the satisfaction” of the lending bank. The Federal Reserve Board has stated that the absence of objective criteria in the statute leaves the extent and value of the collateral within the discretion of the Reserve Bank making the loan. As one Federal Reserve Board official told us, the security accepted by the Reserve Bank could range from equity stock to anything with value. As with interest rates, the security on emergency lending associated with AIG assistance has varied. For example, the Revolving Credit Facility was secured with assets of AIG and of its primary nonregulated subsidiaries, and ML III used the CDOs purchased from AIG counterparties as security for the SPV. For the facility approved but not implemented, the security would have been cash flows from certain AIG life insurance subsidiaries. Although the statute has no documentation requirements, we requested documentation of the Federal Reserve Board’s interpretation of its section 13(3) authority generally, as well as for each of its five decisions to extend aid to AIG in particular. While the Federal Reserve Board provided some documentation, it did not have a comprehensive analysis of its legal authority generally under section 13(3), and it did not maintain comprehensive documentation of its decisions to act under that authority to assist AIG. 
In particular, we found the Federal Reserve Board’s interpretation of its emergency lending authority to be spread across various memorandums, with limited analysis and varying degrees of detail. For the specific decisions to assist AIG, the documentation provided some support underlying use of the section 13(3) authority, but such analysis was absent in some cases and incomplete in others. For example, for the Revolving Credit Facility, Federal Reserve Board minutes and other records we reviewed noted that the discussion of terms included collateralizing the loan with all the assets of AIG and of its primary nonregulated subsidiaries but did not include documentation of FRBNY’s determination that the loan was secured to its satisfaction. For ML II and ML III, there was no documentation of how the interest rates on the loans to each vehicle were established. For the proposed facility to securitize life insurance subsidiary cash flows, information we reviewed stated that it was well established that AIG was unable to secure adequate credit accommodations from other sources and that, with a projected fourth quarter 2008 loss exceeding $60 billion, it was unlikely to find adequate credit accommodations from any other lender. However, there was no documentation that AIG was, in fact, unable to secure adequate credit from other banking institutions. Federal Reserve Board officials underscored that section 13(3) loans by nature are done on a fast, emergency basis. They told us the Board does not assemble and maintain documentary support for its section 13(3) lending authorizations. According to the officials, such information, while not specifically identified, can generally be found among the overall records the agency keeps and could be produced if necessary, much as documents might be produced in response to a lawsuit. 
Further, the officials told us, any necessary evidence or supporting information was well understood by the Federal Reserve Board and FRBNY during the time-pressured atmosphere when section 13(3) assistance was approved for AIG, beginning in September 2008 and continuing into 2009. As a result, it was not necessary to compile a formal assembly of evidence, the officials told us. As noted previously, recent legislation has amended section 13(3) since the Federal Reserve Board approved emergency lending for AIG. In the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), Congress limits future use of section 13(3) lending to participants in programs or facilities with broad-based eligibility and restricts assistance to individual companies under specified circumstances. The act also mandates greater disclosure about section 13(3) lending, requiring the Federal Reserve Board to establish, by regulation and in consultation with Treasury, the policies and procedures governing such emergency lending. In addition, the establishment of emergency lending programs or facilities would require prior approval of the Secretary of the Treasury. The Federal Reserve Board is also required to report to Congress on any loan or financial assistance authorized under section 13(3), including the justification for the exercise of authority; the identity of the recipient; the date, amount, and form of the assistance; and the material terms of the assistance. As part of our recent review of the Federal Reserve System’s implementation of its emergency lending programs during the recent financial crisis, we identified instances where the Federal Reserve Board could better document certain decisions and processes.
As a result, we recommended that the Federal Reserve Board set forth its process for documenting its rationale for emergency authorizations and document its guidance to Reserve Banks on program decisions that require consultation with the Federal Reserve Board. These actions will help address the new reporting process required by the Dodd-Frank Act and better ensure an appropriate level of transparency and accountability for decisions to extend or restrict access to emergency assistance.

The Federal Reserve Influenced AIG’s Securities Filings About Federal Aid but Did Not Direct the Company on What Information to File

During the financial crisis, questions arose about FRBNY’s involvement in AIG’s exclusion of some ML III-related information from its federal securities filings—counterparty transaction details and the description of a key ML III design feature. In December 2008, after ML III was created, AIG filed two Form 8-K statements with SEC related to ML III, following consultations with FRBNY. The filings included the Shortfall Agreement but not the agreement’s Schedule A attachment, which contained ML III counterparty and CDO deal information. As noted earlier, under the Shortfall Agreement, ML III transferred about $2.5 billion to AIGFP for collateral adjustment purposes. This amount was based on what FRBNY officials described as excess collateral that AIGFP had posted to the counterparties, based on fair market values determined for the CDOs in the ML III portfolio. SEC noted the Schedule A omission and told AIG that under agency rules, it must include the schedule for public disclosure or request confidential treatment of the information in it. Subsequently, AIG filed a confidential treatment request (CTR) for the information.
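The collateral arithmetic behind the Shortfall Agreement described above can be sketched with hypothetical figures: ML III pays a CDO's fair value, the counterparty keeps collateral equal to the decline from par, and any collateral posted beyond that decline is "excess" returned to AIGFP. The numbers below are invented for illustration and are not actual transaction amounts.

```python
# Hypothetical sketch of the Shortfall Agreement mechanics: ML III pays a
# CDO's fair value, the counterparty retains collateral covering the
# decline from par, and over-posted collateral is returned to AIGFP.
# All figures are invented for illustration.

def settle(par, fair_value, collateral_posted):
    """Return (ML III payment, collateral kept, excess returned to AIGFP)."""
    shortfall = par - fair_value              # decline the collateral covered
    kept = min(collateral_posted, shortfall)  # counterparty retains this much
    excess = collateral_posted - kept         # over-posted amount goes back
    ml3_payment = fair_value                  # ML III buys the CDO at fair value
    # kept + ml3_payment == par when collateral fully covered the decline
    return ml3_payment, kept, excess

payment, kept, excess = settle(par=100.0, fair_value=45.0, collateral_posted=60.0)
print(payment + kept, excess)  # counterparty receives par in total; 5.0 returned
```

This is why, in aggregate, collateral retained plus ML III payments summed to roughly par value for the counterparties, while the over-collateralized portion flowed back to AIGFP as the approximately $2.5 billion adjustment.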
FRBNY became involved in AIG’s ML III filings after the company had failed to consult FRBNY or its advisors on an earlier company filing on the Revolving Credit Facility, which contained inaccurate information about details of the facility. FRBNY objected to the information, and AIG corrected its filing and agreed to consult on future filings in advance. The ML III agreement contained a confidentiality clause in which AIG generally agreed to keep confidential nonpublic information and to provide notice of any proposed disclosure. AIG executives told us that they expected FRBNY, given its role in assisting the company, to review securities filings and other information involving the Federal Reserve System. FRBNY officials told us they concurred that if counterparty information was to be released, it would be reasonable for FRBNY, as a co-venturer, to have the ability to express an opinion. We found that FRBNY, through its counsel, in November 2008 told AIG it did not believe the Shortfall Agreement needed to be filed at the time. When that effort was unsuccessful, and AIG moved to file the agreement nonetheless, FRBNY then urged that the Schedule A counterparty information be omitted from the company’s filings. FRBNY was also influential in shaping AIG’s arguments to SEC in support of the company’s request to keep the counterparty information confidential. In particular, FRBNY and its advisers made what they described as significant comments and edits to AIG filings regarding the information claimed as confidential, according to FRBNY officials and correspondence we reviewed. After AIG filed its CTR and SEC officials had reviewed and commented on it, FRBNY remained active in pursuing the CTR matter. Officials discussed making direct contact with SEC on the information they did not want the company to disclose. 
When SEC requested a telephone conference with the company to discuss the issues, FRBNY officials and its counsel began considering what information FRBNY should present to SEC, after first checking with AIG about the matter. FRBNY’s public arguments for confidentiality were twofold: that the counterparty information was commercially sensitive for the parties involved but did not provide material information to investors, and that disclosure could hurt the ability to sell ML III assets at the highest price, potentially to the detriment of taxpayers and AIG. In addition to these publicly stated reasons, FRBNY staff in internal correspondence also discussed other rationales for withholding Schedule A information. One was unspecified policy reasons, which officials later told us may have referred to the general practice of keeping the identities of discount window borrowers confidential. Another was that disclosure could attract litigation or Freedom of Information Act requests. A third such rationale was that seeking confidential treatment for all of Schedule A, and not just portions, could be a useful negotiating strategy because seeking protection for the entire document could make SEC more likely to grant such a request. FRBNY officials told us these other rationales were opinions voiced during internal discussions before FRBNY took a formal position. According to FRBNY officials, there was also concern that release of the information for ML III could lead to demands for release of similar information for other Federal Reserve System emergency lending facilities—ML II, which was created to deal with problems in AIG’s securities lending program, and Maiden Lane, a vehicle created in March 2008 to facilitate JPMorgan Chase & Co.’s merger with Bear Stearns. As part of its involvement, FRBNY participated in three teleconference calls with SEC officials about AIG’s CTR filing, according to SEC records and officials. 
On January 13, 2009, the day before AIG filed its request with SEC, FRBNY officials at their request spoke with SEC to explain the ML III transaction. Another call came on March 13, 2009, when representatives of AIG and FRBNY contacted SEC to say that AIG intended to file an amended CTR in response to SEC comments on the original request. The third call, on April 22, 2009, took place at SEC’s request to discuss AIG’s competitive harm arguments. SEC, AIG, and, at AIG’s request, FRBNY participated in that call. According to SEC, discussions with FRBNY were at the staff level. While SEC was reviewing AIG’s CTR, the company considered dropping its request, thus making all the contested information public. However, FRBNY officials convinced the company not to do so. By that point, FRBNY was willing to have some information released, such as counterparty names and amounts paid, but did not want to release other material, such as information related to individual securities, according to correspondence we reviewed. The specific concern was that release of security-specific information could allow market participants to identify ML III holdings. FRBNY officials told us they made their opinion known to AIG, and that such communication was appropriate given that FRBNY was a major creditor to ML III. AIG concurred with FRBNY’s concerns, according to an FRBNY communication. According to interviews and information we reviewed, underlying FRBNY’s desire that AIG not file sensitive ML III information with SEC was concern that such information could then be requested by Congress and ultimately be made public. This was because SEC rules require that applicants for CTRs consent to furnishing the information claimed as confidential to Congress, among others. SEC officials told us that although there are no records of Congress requesting such information, their best recollection is that Congress has never sought information filed in a CTR with the agency. 
SEC’s handling of AIG’s confidentiality request was routine, SEC officials told us, albeit under unusual circumstances. SEC officials told us they viewed FRBNY’s involvement with the agency as that of a counterparty to an agreement with a company required to make filings. In such a situation, it is not common for a counterparty to contact SEC, officials told us. In addition, FRBNY’s participation was more active than would be expected of a counterparty, they said. Officials said the agency processed AIG’s CTR using its normal CTR review process, and that SEC’s review of the request was prompt. But circumstances were unusual for several other reasons, SEC officials told us. First, the AIG filings had been targeted for heightened scrutiny as part of special review efforts arising out of the financial crisis and government aid to private companies. These efforts involved continuous review of selected companies’ filings. Second, FRBNY—which FRBNY officials characterized as a federal instrumentality—was an involved party. Third, in response to FRBNY concerns, SEC allowed a special drop-off procedure for the CTR aimed at protecting the information from disclosure. This action came after SEC had declined FRBNY requests for special ways to provide the information for SEC review, such as by SEC officials going to FRBNY offices to review relevant material or FRBNY officials showing SEC the information at SEC headquarters but outside the normal filing system. Finally, the case reached SEC’s associate director level and eventually the SEC Chairman. Officials told us that due to AIG’s high public profile, the Chairman was advised immediately before the CTR determination on the Schedule A information. SEC officials told us this was not typical. It is rare for SEC staff to brief the Chairman on a CTR determination, they said, but that was done in this case due to anticipated publicity for the matter. 
AIG’s original CTR sought confidential treatment for all of the Schedule A information. On May 22, 2009, SEC granted the company’s request, but only in part. SEC officials said AIG’s initial CTR was too broad, and the agency, through its review process, narrowed the scope of the request. As part of its review, SEC officials provided AIG with detailed comments and questions after reviewing its request and also monitored information that was already publicly available to determine if AIG’s CTR should be amended to reflect that availability. SEC officials said that notwithstanding FRBNY’s unusual involvement, they examined the case from the usual standpoint of investor protection, in which the key issue was harm to AIG. Any harm to the Federal Reserve System was not an SEC issue, officials told us. The agency determined that the following elements of the Schedule A information should not be treated confidentially, and thus should be disclosed:

- amount of cash collateral posted;
- CDO pricing information that reflected the securities’ loss in market value;
- complete Schedule A information for 10 CDOs, including CUSIP identifier, tranche name, and notional value, as related information had previously been made public;
- totals for notional value, collateral posted, and revised values based on market declines for all CDOs; and
- all Schedule A titles and headings.

Except for the 10 CDOs cited, SEC permitted confidential treatment of the following information for each of the other CDOs listed in Schedule A: CUSIP number, tranche name, and notional value. However, the SEC action eventually became moot, as on January 29, 2010, AIG amended its 8-K filings to fully disclose Schedule A. We also found that the desire to keep Schedule A-type information confidential was not a new position for AIG. Before ML III and any government assistance, AIG had sought protection for similar information on the basis that it was confidential business information.
Specifically, in response to an unrelated request for information from SEC, AIG in August 2008 requested that CDO-related information be kept confidential. In a separate episode, a draft of one of AIG’s ML III-related filings included the following statement: “As a result of this transaction, the AIGFP counterparties received 100 percent of the par value of the Multi-Sector CDOs sold and the related CDS have been terminated.” At the request of FRBNY’s outside counsel, AIG omitted this language from its filing, company executives told us. This omission led to criticism that FRBNY was seeking to conceal information about payments to AIG’s counterparties. AIG executives told us the company omitted this language because of concerns that it misrepresented the transaction, as ML III itself was not paying par value. Instead, as noted, ML III paid an amount that, when combined with collateral already posted by AIG to the counterparties, would equal par value (or near par value). In internal correspondence, FRBNY also said “par” was inaccurate, as counterparties paid financing charges and had to forgo some interest earnings. Thus, the amount received was less than par when all costs were considered; in some cases, the difference was in the tens of millions of dollars. We found that two units of SEC—the Division of Corporation Finance and the New York Regional Office—examined the deletion of the par value statement and concluded there was no basis for an enforcement action for inadequate disclosure. SEC staff considered whether AIG’s filing provided enough information for investors to see that the sum of the collateral counterparties kept and the payments from ML III amounted to 100 percent of value. SEC has not brought any enforcement action concerning this issue. In February 2010, FRBNY issued a memorandum formalizing its process for reviewing AIG’s securities filings. The memorandum emphasizes that AIG is solely responsible for the content of its filings, and that any FRBNY review is to promote accuracy or protect taxpayer interests.
It also specifies material to be subject to review. Ultimately, according to both AIG and FRBNY, the company retained responsibility for its own filings. Based on our review, we found that while FRBNY’s involvement was influential, it was not controlling. AIG did not comply with all FRBNY requests about information in its filings. Also, later in the process, after Schedule A information was released publicly, an AIG executive reported to an SEC official that FRBNY had told the company to make its own decision on whether to disclose full Schedule A information in filings with SEC. According to AIG executives, there was no occasion when AIG strongly disagreed with a course advocated by FRBNY but adopted FRBNY’s position nonetheless. SEC enforcement staff found that AIG exercised independent judgment. The staff examined correspondence related to AIG filings, and their review showed that although FRBNY had a viewpoint it was not reluctant to express, AIG nevertheless remained actively involved in the process and exercised its own independent judgment on what its filings should say. More broadly, although FRBNY was aware of criticism that ML III funds were provided to unnamed counterparties or foreign institutions, we found no evidence that FRBNY urged AIG to withhold information in order to conceal identities or nationalities of the counterparties. According to FRBNY officials, FRBNY’s involvement with AIG illustrated the dual role of a central bank as a public institution that sometimes must also carry out private transactions as a private market participant. In our review, we considered whether FRBNY’s involvement in AIG’s securities filings was consistent with what might be expected in the private sector under similar circumstances. We found that in broad terms, FRBNY’s activities appear to be consistent with actions of a significant business partner. The government assumed multiple roles in assisting AIG.
Through its arrangement for initial aid, a government vehicle became the company’s majority equity investor. Its emergency lending also made it a significant creditor to AIG. In addition, FRBNY was a joint venturer with AIG in ML III. In the private sector, any of these roles could provide a basis for involvement in a company’s affairs. Majority shareholders can have significant influence—for example, by naming the board, which exercises control over significant aspects of a company’s business. A company might consult with a majority owner on business decisions and might share draft securities filings. Creditor involvement in company affairs can be extensive, particularly in times of stress. Credit agreements can include detailed affirmative and negative covenants—requirements to take, or refrain from, certain actions—through which creditors can shape and constrain financing, management, and strategic decisions. Agreements often require corporations to provide extensive financial information to the creditor. In the case of a joint venture, the academic research we reviewed does not discuss the influence that private-sector counterparties may have over each other’s SEC filings. However, individuals with whom we spoke indicated that sharing draft filings in a merger and acquisition context is common. Parties to a joint venture may share draft filings as well. The circumstances of Federal Reserve System aid to AIG preclude a direct private-sector comparison for several reasons. Majority ownership of large public companies is unusual. The trust agreement for the government’s AIG holdings placed limitations on the trust’s role as shareholder. In addition to any assistance relationship with AIG, the government, via OTS, has also had a regulatory relationship with the company. The government also had goals in the AIG intervention beyond those of typical private-sector actors: attempting to stabilize financial markets and the broader economy.
Nevertheless, through its various actions, the government provided significant resources to AIG and took on significant risk in doing so. A private party in similar circumstances could be expected to become involved in company affairs.

While FRBNY Implemented Updated Vendor Conflict-of-Interest Procedures in Providing AIG Assistance, Aid Gave Rise to Complex Relationships that Posed Challenges

To provide emergency assistance that the Federal Reserve Board approved for AIG, FRBNY contracted for financial advisors to perform a range of activities for the Revolving Credit Facility and ML III. FRBNY retained its principal financial advisors for the Revolving Credit Facility and ML III in September and October 2008. According to FRBNY officials, they awarded contracts for at least two of the advisors without competitive bidding, due to exigent circumstances. They said there was insufficient time to bid the services competitively as advisors were needed to quickly begin setting up the program. For the Revolving Credit Facility, the principal financial advisors were Ernst & Young and Morgan Stanley, which were engaged for these main duties:

- structuring the loan documentation between FRBNY and AIG after the company accepted FRBNY’s initial loan terms on September 16, 2008;
- providing advisory services for AIG asset sales;
- performing valuation work on AIG securities posted as collateral to secure the Revolving Credit Facility;
- calculating AIG cash flow projections to monitor the company’s use of cash, plus actual and predicted draws on the Revolving Credit Facility;
- advising FRBNY on how to address rating agency and investor concerns; and
- monitoring Revolving Credit Facility requirements on information AIG must provide to FRBNY to identify any instances where AIG did not comply.
For ML III, FRBNY’s three primary financial advisors have been Morgan Stanley, Ernst & Young, and BlackRock, Inc., which were engaged for these main duties:

- developing alternate designs for ML III;
- identifying CDO assets for inclusion in ML III;
- valuing CDO securities under economic stress scenarios;
- advising FRBNY on how to structure the transaction to address rating agency and investor concerns; and
- managing the ML III portfolio for FRBNY.

FRBNY has also contracted for two other vendors to provide key services for ML III: Bank of New York Mellon performs accounting and administration for the ML III portfolio, and another vendor, Five Bridges Advisors, conducts valuation assessments. One of the factors FRBNY considered when selecting vendors was potential conflicts of interest. In general, potential and actual conflicts of interest can arise at either the personal or organizational levels. A personal conflict could arise, for example, through the activities of an individual employee, whereas an organizational conflict could arise through the activities of a company or unit of a firm. Our work focused on potential organizational conflicts of interest that involved the Revolving Credit Facility and ML III. When FRBNY engaged its Revolving Credit Facility and ML III advisors, FRBNY had its Operating Bulletin 10 as guidance, which applies to vendor selection but did not include provisions on vendor conflicts. By contrast, Treasury, which has also engaged a number of vendors in implementing TARP, in January 2009 issued new interim guidelines for its management of TARP vendor conflicts of interest. The Treasury regulations provide that a “retained entity”—generally, an individual or entity seeking or having a contract with Treasury—shall not permit an organizational conflict of interest unless the conflict has been disclosed to Treasury and mitigated under an approved plan, or unless Treasury has waived the conflict.
However, even though FRBNY guidance did not have provisions on vendor conflicts, FRBNY officials told us that they held internal discussions to identify potential advisor conflicts that could arise. FRBNY also identified some activities, such as providing advisory services, as presenting a greater risk of conflict than other activities, such as administrative services where there is no discretionary or advisory role. As a result, FRBNY subjected the advisors to greater conflict of interest scrutiny. Based on its internal discussions, FRBNY identified a number of potential conflicts, including two main types of conflicts for advisors other than its investment advisor:

- instances in which AIG or its subsidiaries seek entities serving as FRBNY advisors to assist them, for matters in the past, present, or future; and
- instances in which potential buyers of AIG assets seek entities serving as FRBNY advisors to assist them, for matters in the past, present, or future.

Without specific conflict policies for its advisors in its established guidance, FRBNY relied upon contract protections and what officials said was day-to-day vendor management to address certain conflict situations. For example, one advisor’s agreement with FRBNY provided that when a potential buyer of AIG assets, also known as a “buy-side” firm, sought transaction advisory services from the advisor, the advisor was to determine if it could perform all services for each party objectively and without compromising confidential information. Upon determining it could be objective, the advisor was to notify FRBNY and AIG of the names of each potential buyer and provide an opportunity for FRBNY and AIG to discuss the scope of services the advisor would provide to the would-be buyer. Another advisor’s agreement similarly provided for seeking FRBNY’s consent before entering into transactions that would create a conflict.
Contractual conflict mitigation procedures included separation of employees conducting work for FRBNY from those doing buy-side advisory work, as well as information barriers to prevent sharing of confidential information between FRBNY engagements and the advisor’s other work. One advisor’s engagement agreement also had a provision giving FRBNY the right to audit the advisor’s performance and determine whether it was in compliance with requirements. According to FRBNY, it performed conflict of interest reviews of four advisors providing AIG-related services. Similarly, the ML III investment management agreement of November 25, 2008, by and among FRBNY, ML III, and BlackRock, noted potential conflicts and provided mitigation procedures involving employee separation of duties and information barriers. Among other things, BlackRock employees engaged for ML III are not permitted to perform managerial or advisory services related to ML III assets for third parties or to provide valuation services for third parties for those assets without FRBNY’s consent. BlackRock is also barred from recommending or selecting itself as a replacement collateral manager for any ML III CDO. Further, it cannot knowingly purchase for ML III any asset from a portfolio for which it serves as an investment advisor or knowingly sell any ML III assets to portfolios for which it serves as an investment advisor. However, BlackRock may aggregate trading orders for ML III-related transactions with similar orders being made simultaneously for other accounts the advisor manages, if aggregating the orders would benefit FRBNY. In addition to the contract provisions, in December 2008 FRBNY asked its Revolving Credit Facility advisors to disclose potential and actual conflicts arising from their duties and to provide a comprehensive plan to mitigate such conflicts.
The mitigation plan was to include implementation steps, conflict issues that were reasonably foreseeable, and identification of how the advisor would notify FRBNY of conflicts identified in the course of their duties. FRBNY requested this information to assist it in developing an approach to managing conflicts related to AIG assistance and other Federal Reserve System emergency facilities created to address the financial crisis. In response, the advisors provided general information on their conflict-of-interest policies and procedures, according to FRBNY officials. Officials told us that FRBNY did not make the same request of one of its ML III advisors because FRBNY had been working with the advisor on a frequent basis for some time and the officials felt they understood the advisor’s conflict issues and policies. Over the course of FRBNY assistance to AIG, FRBNY’s advisors have disclosed a number of conflict situations, both when first engaged and subsequently while performing their duties. These have involved several kinds of conflicts, which FRBNY has waived or permitted to be mitigated. When signing their agreements with FRBNY, one advisor disclosed two buy-side advisory engagements that were underway. FRBNY permitted the arrangements, provided that employee separation and information barriers be created and that the advisor not provide FRBNY with advisory services related to certain potential AIG divestitures. However, FRBNY’s consent still allowed some potential sharing of information between separate employee teams at the advisor. Two advisors had teams providing advisory services to AIGFP when FRBNY engaged them. Both FRBNY and AIG agreed to waive the potential conflicts. One advisor was working on a broad range of advisory and tax services for AIG. Another was involved in analysis of certain AIG CDOs and the RMBS portfolio associated with AIG’s securities lending program. 
One ML III advisor reported that it was a collateral manager for certain CDOs in which ML III was an investor, and it was allowed to continue subject to conditions. FRBNY officials told us their general approach to conflict issues such as these was to rely on information barriers, which are intended to prevent sensitive information from being shared among people or teams, and to avoid having the same people work in potentially conflicting roles, such as both buy-side and sell-side engagements. We note, however, that such precautions involve a trade-off: all else equal, these measures may protect against conflicts, but they can also preclude application of skills or resources that would otherwise be available. FRBNY and its advisors also set up regular communications for addressing conflicts. For example, one advisor would provide FRBNY with a weekly list of projects requested by AIG subsidiaries or potential acquirers of AIG assets. After considering whether it could accept the project, the advisor would seek waivers from FRBNY when necessary. FRBNY officials told us that they discussed the projects, addressed concerns, and raised questions as needed. The advisor would also get approval of the project proposals from AIG. Another advisor likewise presented potential project requests to FRBNY as they arose. This process was written into a new engagement letter in November 2010. Conflict provisions for another advisor included advisor identification of conflict situations to FRBNY and use of appropriate trading limitations. As part of its conflict management process, FRBNY commissioned compliance reviews for several Revolving Credit Facility and ML III advisors in order to assess the advisors’ policies and identify potential conflicts. One review found several instances in which the advisors allowed employees to work on an engagement for an AIG subsidiary, but these situations were disclosed to FRBNY and the staff in question were reassigned. 
Another review that covered several Federal Reserve System lending facilities noted that the ML III investment management agreement did not require conflict policies and procedures tailored specifically for ML III. Due to the complexity of ML III assets and the presence of third parties that could influence the portfolio, the report said that FRBNY should consider requiring an advisor to revise its policies and procedures to address unique issues raised by ML III, including potential conflicts and mitigating controls. As discussed later, in May 2010, FRBNY implemented a new vendor management policy to serve as a framework to minimize reputational, operational, credit, and market risks associated with its use of vendors. Our review of advisor records showed that FRBNY’s Revolving Credit Facility advisors have requested at least 142 waivers for AIG-related projects and buy-side work. FRBNY has granted most of these waiver requests. According to FRBNY officials, overall figures on conflict waiver requests and outcomes are not available because FRBNY did not begin tracking the requests until about January 2010, about 16 months after government assistance began. According to the records, one advisor made at least 132 conflict waiver requests to FRBNY for the period of 2008 to 2011. The work requested covered an array of advisory projects involving AIG business units. The records did not indicate how many requests FRBNY granted consent for, but according to FRBNY officials, FRBNY granted a large majority of them on the condition of employee separation and information barriers. Another advisor initially made 10 conflict waiver requests but later dropped one. The remaining nine requests covered at least the 2009–2011 period, with five related to work requested by AIG and four related to work on behalf of potential buyers of AIG assets. The AIG projects were for such matters as assisting with asset sales and raising funds for subsidiaries. 
The buy-side projects related to acquisition and financing of AIG assets. FRBNY granted four waiver consents for AIG-related work and two for buy-side transactions. For example, according to FRBNY, it denied one of the conflict waivers in an instance where the advisor provided FRBNY with sell-side advice and deal structuring for a potential AIG asset sale. The advisor requested a waiver to participate in financing to assist the buy-side client to the transaction. According to FRBNY, officials decided that the advisor had been too involved in providing FRBNY with advice and thus turned down the waiver request. In another case, the advisor was in a situation where it had multiple roles involving an AIG subsidiary. One unit of the advisor recommended AIG sell the subsidiary, while another recommended AIG conduct an initial public offering of stock. The advisor was providing FRBNY with advice at the same time it was advising a potential purchaser. The conflict issue became moot when the sale idea was abandoned, but before that, FRBNY had decided not to allow a conflict waiver, officials told us. In cases such as these, FRBNY officials said they considered separation of duties to be a significant mitigating factor, because individuals with access to AIG-related information would not be staffed to other potentially conflicting engagements. FRBNY’s interests as an ML III creditor and its interest in the health of AIG have created competing interests because the interests of ML III and AIGFP overlap: (1) AIGFP owns tranches in the same CDOs in which ML III owns tranches and (2) AIGFP has been an interest rate swap counterparty to certain CDOs in which ML III is an investor. These interests have resulted in circumstances where ML III and AIGFP have either worked together or instead have had conflicts due to divergent interests. 
FRBNY has identified instances in which decisions reflecting these overlapping interests could have led to a total of as much as $727 million in losses or foregone gains for ML III. However, ML III gains could have come at the expense of AIGFP, the health of which is also of interest to the Federal Reserve System. In December 2009, FRBNY’s Investment Support Office documented 10 such instances in 2008 and 2009, including the following:

In three instances, the ML III portfolio lost a total of $72.5 million, with AIGFP gaining at least $59.3 million. For example, in one instance, ML III and AIGFP together held voting rights to control a CDO. A default occurred, and FRBNY’s ML III advisor sought AIGFP’s consent to “accelerate” the CDO, a process that would have directed cash flows to the benefit of both ML III and AIGFP. However, AIGFP declined to cooperate because it was in a dispute with the CDO manager on another transaction. FRBNY believed AIGFP did not want to antagonize the manager by voting to accelerate the CDO, which would have reduced the manager’s fee income.

In three instances, ML III saw total gains of $5.6 million. For example, in one instance, AIGFP agreed to vote with ML III to direct a CDO trustee to terminate a CDO manager and replace it with a new manager at reduced cost.

In two instances, FRBNY refrained from taking action, in its role as managing member of ML III, for the benefit of AIGFP. This resulted in foregone ML III gains of up to $660 million. At issue was potential termination of interest rate swap protection AIGFP provided on certain CDOs in which ML III was an investor. 
FRBNY’s Risk Advisory Committee considered the issue in February 2009, deciding that ML III should refrain from exploring termination of the interest rate swaps, because there was a potential loss at AIGFP that would not be offset by the gain to ML III, and because there was concern that terminating the swap protection could have encouraged other market participants to do the same, to AIGFP’s detriment, the committee indicated. The $660 million was a maximum potential gain for ML III, assuming the swap termination would have been successful, FRBNY officials told us, although they expected AIG to vigorously oppose any attempts to terminate. Overall, FRBNY officials told us that if different choices had been made in these instances, then AIGFP rather than ML III would have suffered losses, which would have had direct and indirect implications for the Federal Reserve System and the larger public interest. We also reviewed a number of other relationships that resulted from FRBNY’s assistance to AIG. These involve (1) the continuing involvement of AIG’s CDS counterparties with CDOs in which ML III is an investor; (2) other relationships among parties involved in AIG assistance, such as FRBNY vendors and advisors; (3) regulatory relationships; and (4) cross- ownership interests. Figure 7 depicts a number of these situations, which are discussed in further detail in the sections following. Links to AIG counterparties. Our review identified continuing indirect relationships between FRBNY and the AIG counterparties that sold CDOs to FRBNY’s ML III vehicle. The AIG counterparties have acted as trustees and collateral managers to CDOs in which ML III is an investor. They have also been interest rate swap counterparties to these ML III CDOs and had other continuing relationships. 
For example, our review of public data and information obtained from an FRBNY advisor showed that five AIG counterparties have provided either CDO trustee or collateral manager services to CDOs in which ML III is an investor. The AIG counterparties thus continued to have involvement with ML III and FRBNY via FRBNY’s management of the assets in ML III. For trustee services, our analysis identified four AIG counterparties that sold assets to ML III and were also trustees for CDOs in which ML III is an investor. These counterparties accounted for 66 percent of the trustees for CDOs in which ML III invests. For example, Bank of America, which sold CDOs with a notional value of $772 million to ML III, was trustee for 71, or 40 percent, of the CDOs in which ML III invests. Trustee duties involve interaction with the ML III investment manager, BlackRock; the ML III administrator, Bank of New York Mellon; and by extension, FRBNY. FRBNY officials said the fact that counterparties act as trustees shows that the trustee business is highly concentrated, meaning that such relationships are difficult to avoid. They also said they see little conflict because the job of a trustee is largely ministerial. Table 7 provides a breakdown of the trustees of CDOs in which ML III is an investor, showing AIG counterparties among them. Our analysis also identified an additional AIG CDS counterparty—Societe Generale, which sold CDOs with a notional value of $16.4 billion to ML III—as accounting for 31, or 17 percent, of all collateral managers in the ML III portfolio. As AIGFP’s previously described dispute with a collateral manager illustrates, issues can arise with collateral managers as well. FRBNY officials told us that collateral managers work for a CDO and its trustee, not the CDO investors, and that investors have no right to direct the collateral manager. 
However, some CDOs permit investors with sufficient voting rights to direct a trustee to replace a collateral manager if certain conditions have been met, officials said. Another area of continuing relations involves interest rate swap counterparties. As described earlier, interest rate swaps help manage interest rate risk. Through December 31, 2008, three AIG counterparties had a total of five swap arrangements with CDOs in which ML III was an investor. To the extent that these swap counterparties’ interests diverge from ML III’s interests, similar to the AIGFP swap case discussed previously, issues can arise. According to FRBNY and an advisor, AIG counterparties that sold CDOs to ML III have also been involved with AIG’s asset sales, the proceeds of which have paid down federal assistance, such as the Revolving Credit Facility. For example, one counterparty was involved in the divestiture of AIG’s ALICO, Star, and Edison life insurance subsidiaries. Additionally, four other counterparties provided advisory services to AIG, according to the advisor. FRBNY officials told us they did not view such assistance in asset sales as raising an issue. Another continuing relationship arose temporarily through placement of ML III cash in an investment account offered by an AIG counterparty. For example, according to a December 2008 advisor memorandum, in November and December 2008, ML III’s portfolio holdings generated cash flows of approximately $408 million, which were placed in the AIG counterparty’s investment fund. Later, according to FRBNY, it moved most cash into U.S. Treasury bills, using the counterparty’s fund as a short-term holding account. FRBNY officials said the relationship was not a concern, and that they chose the fund because it had flexibility for withdrawals and offered the best return. Advisor or vendor relationships. FRBNY advisors or vendors have also acted as service providers to CDOs in which ML III is an investor. 
For example, as noted previously, one ML III advisor reported to FRBNY that it was collateral manager for CDOs in which ML III was an investor. Specifically, the advisor managed other investor accounts that held 11 CDOs managed by other parties and in which ML III held a senior interest. According to FRBNY, the notional value of the assets was approximately $539 million. The advisor also managed one ML III CDO for which ML III held the super senior tranche, which had a notional value of about $800 million. The advisor sought a conflict waiver, and FRBNY consented, stipulating that the advisor would not make management decisions or take a position contrary to the interests of ML III and that the advisor would immediately seek to sell the CDO positions in question where permissible. Our review also identified instances in which this advisor has managed other ML III-related CDO assets that it said presented a potential for conflicts and where the advisor did not seek waivers from FRBNY. At the time ML III was established, the advisor was investment manager for clients owning approximately eight junior tranches in CDOs for which ML III held the senior tranches. FRBNY officials said that under the structure of the assets, neither the advisor nor ML III is able to influence the CDO holdings. We also found that the ML III administrator, Bank of New York Mellon, has been a trustee for individual CDOs in which ML III is an investor, meaning that the bank has had interests that could diverge. Bank of New York Mellon has been the trustee for 50 CDOs in which ML III is an investor, or 28 percent of all trustees, according to our analysis. As an individual CDO trustee, Bank of New York Mellon is involved in such tasks as performing compliance tests on the composition and quality of CDO assets; identifying CDO events of default; and liquidating CDOs upon events of default at the direction of CDO holders, subject to certain conditions. 
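The trustee counts reported in this section can be cross-checked against each other: Bank of America was trustee for 71 CDOs, or 40 percent of those in which ML III invests, while Bank of New York Mellon was trustee for 50, or 28 percent. Each count divided by its reported percentage should imply roughly the same portfolio-wide CDO count. A minimal sketch, using only figures stated above:

```python
# Cross-check of trustee figures cited in this section: each trustee's CDO
# count divided by its reported percentage should imply approximately the
# same total number of CDOs in which ML III is an investor.
implied_total_boa = 71 / 0.40    # Bank of America: trustee for 71 CDOs (40 percent)
implied_total_bnym = 50 / 0.28   # Bank of New York Mellon: 50 CDOs (28 percent)

# Both implied totals come out near 178 CDOs, so the figures are
# internally consistent (differences reflect rounding of the percentages).
print(f"{implied_total_boa:.1f}, {implied_total_bnym:.1f}")
```

The two estimates agree to within about one CDO, suggesting the reported counts and percentages were computed against the same portfolio total.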
Because Bank of New York Mellon is the administrator for the overall ML III portfolio, its income would depend on the CDO assets held in the ML III portfolio. But as noted previously, as trustee to individual CDOs, it could be called upon to determine if CDOs are in default, which can lead to liquidation if requisite conditions are met. Such liquidations could reduce overall portfolio assets, and hence, the administrator’s income. FRBNY officials said that this divergence of incentives is inherent in the CDO trustee business, and they emphasized that as the ML III administrator, Bank of New York Mellon had no authority to make ML III decisions. According to FRBNY, Bank of New York Mellon performed custodial and administrative services and had no discretion, and thus was considered to present a low conflict risk. In the case of an ML III advisor managing individual CDOs, or related assets, FRBNY officials told us they examined individual situations as necessary. Finally, our review also identified other relationships. For example, an ML III advisor has had service contracts with the AIG counterparties that sold CDOs to ML III. FRBNY officials told us they did not consider these relationships to be of concern because the advisor was not involved in direct negotiations with counterparties with respect to ML III purchases of CDOs. Also, FRBNY officials told us there are one or two instances in which Maiden Lane LLC—the vehicle for another Federal Reserve System emergency program—holds tranches of CDOs in which ML III is an investor. In some cases, the interests of Maiden Lane LLC and ML III could diverge, similar to the situations described earlier relating to AIGFP and ML III. According to FRBNY, it manages from the standpoint of its overall loans for assistance. FRBNY officials also said that it would be rare that a loss to Maiden Lane LLC would be greater than the gain to ML III. According to FRBNY officials, they would have avoided any involvement with the various parties if practicable. 
They said that the relationships we identified, the majority of which stemmed from arrangements that existed before ML III was established, reflected areas that did not raise concern. From FRBNY’s perspective, after AIG’s counterparties no longer owned the CDO positions sold to ML III, those counterparties had no ongoing interest in ML III’s structure or interactions with FRBNY related to ML III. FRBNY officials said that positions that ML III held in the CDOs came with the rights and obligations the CDO structure itself stipulated, as well as the trustees and collateral managers then involved—all of which predated FRBNY’s involvement. They acknowledged that FRBNY’s ML III investment manager presented the potential for conflict but said that adequate measures were taken to avoid actual conflicts. Regulatory relationships. The Federal Reserve System oversees two of FRBNY’s advisors, which means that while FRBNY has been receiving advice from the advisors, it also has been responsible for oversight of them. According to FRBNY, it has maintained its AIG monitoring team separately from staff who perform supervisory duties. The AIG monitoring staff has no contact with those involved in supervision, officials told us. In addition, officials told us that FRBNY policy requires bank supervisory information to be kept separate from other operations, including separate computer systems. As an example of attention to separation of supervisory duties, FRBNY officials cited the case of MetLife, which in 2010 acquired AIG’s ALICO unit. MetLife is a bank holding company regulated by the Federal Reserve System. At the time of the acquisition, there were inquiries from an FRBNY MetLife team to the AIG team. When that happened, officials said they immediately put in place an information barrier to make clear that supervisory decisions would not be affected by information the AIG team had. 
Officials saw the matter as a serious potential conflict because FRBNY had an interest in seeing the acquisition completed, as that would aid repayment of federal lending, while at the same time, it had a supervisory responsibility for MetLife. Cross-ownership. Cross-ownership occurs when parties have ownership interests in each other—for example, if a company owns stock in another firm and that firm owns stock in the first company. According to academic literature we reviewed, such reciprocal ownership can create mutual interests among the parties or interests that might not have been present absent the ownership, which can diminish independence between the parties. Our review found that a number of AIG CDS counterparties, FRBNY advisors, and service providers to CDOs in which ML III is an investor have held cross-ownership interests in each other, both at the time ML III was established and more recently. For example, we found that as of December 31, 2008—the end of the quarter during which ML III was planned and formed—FRBNY ML III advisor Morgan Stanley had stock holdings in nine AIG CDS counterparties totaling at least $1.4 billion. Among those nine counterparties, four have been service providers to CDOs in which ML III is an investor (such as trustees or collateral managers, as discussed previously). Morgan Stanley’s largest counterparty holding was Bank of America, valued at $925 million. At the same time Morgan Stanley held its equity ownership in these nine counterparties, the nine counterparties had equity ownership in Morgan Stanley valued at about $1.1 billion, our review found. The counterparties’ ownership ranged from a low of $6.7 million for counterparty HSBC to a high of $384.2 million for Goldman Sachs. Similarly, and more recently, we identified cross-ownership between AIG CDS counterparties and FRBNY ML III advisor BlackRock. 
In particular, we found that 12 counterparties owned BlackRock stock worth at least $998 million, based on information available as of April 2011—3.8 percent of BlackRock’s outstanding shares. Among these 12 firms, 5 have been service providers to CDOs in which ML III is an investor. The largest AIG CDS counterparty owner of BlackRock stock was Barclays, with holdings valued at $603 million. At the same time these 12 counterparties owned BlackRock stock, BlackRock had equity ownership interests in them worth $44.3 billion. BlackRock’s ownership ranged from a low of $248 million for Calyon (later renamed Credit Agricole) to a high of $8.4 billion for HSBC. BlackRock and Merrill Lynch, an AIG CDS counterparty, have had business interests in addition to investment interests. In September 2006, BlackRock merged with the investment management unit of Merrill Lynch. Later, Merrill Lynch became one of the largest recipients of ML III payments. At year-end 2008, Merrill Lynch owned about 44 percent of BlackRock’s common stock. In September 2008, Bank of America announced its acquisition of Merrill Lynch. Bank of America was also an AIG CDS counterparty that received payments from ML III. According to BlackRock’s 2008 year-end SEC filing, Merrill Lynch would vote its BlackRock shares according to the recommendation of BlackRock’s board of directors. Similarly, we found that cross-ownership extends to FRBNY advisors and service providers (that is, CDO trustees and collateral managers) for CDOs in which ML III is an investor. For example, we found that 15 of 52 CDO service providers owned BlackRock stock valued at $624 million, based on information available as of April 2011, with holdings equal to 2.4 percent of BlackRock’s outstanding shares. Among these providers, for example, was State Street Global Advisors, which had the largest BlackRock stake, worth $300 million, or 1.2 percent of BlackRock’s outstanding shares. 
At the same time, BlackRock held State Street stock worth $293 million, or 1.1 percent of shares outstanding. FRBNY officials told us that they had not considered the cross-ownership issue, either before or after executing ML III, but that by itself, it was not of concern. First, they distinguished BlackRock from other entities, saying BlackRock is an investment management company that owns securities on behalf of clients, which accounts for most of the holdings we identified. However, we note that BlackRock would still have an interest in the performance of client holdings from the standpoint of management fees and client satisfaction with investment performance. Second, the officials said that entities have subdivisions, such as affiliates or subsidiaries; therefore, relationships among parties are not necessarily as linked as they might appear. For example, they distinguished between BlackRock Solutions, the portion of BlackRock that has been FRBNY’s advisor, and other operations of BlackRock, Inc., the BlackRock corporate entity. However, according to BlackRock federal securities filings, BlackRock Solutions is not a distinct subsidiary of the parent, and instead operates as a “brand name” for certain services the company provides. While different units could nonetheless be affiliated within an overall corporate structure, the relevance or impact of any such affiliations is not clear, FRBNY officials said. Overall, FRBNY officials compared the cross-ownership issue to the former large investment banks, which could provide both advisory services and sales and trading functions. The officials noted that while there were considerable interconnections of interests, the point at which they become unacceptable is not clear. 
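As a rough consistency check on the BlackRock cross-ownership figures cited above, each reported dollar holding divided by its reported share of outstanding stock should imply approximately the same total market value for BlackRock. The sketch below uses only figures stated in this section; the implied market values are illustrative derivations, not figures from our review.

```python
# Consistency check on BlackRock cross-ownership figures cited above:
# (holding value) / (reported share of outstanding stock) should imply
# roughly the same total BlackRock market value in each case.
figures = [
    ("12 AIG CDS counterparties", 998e6, 0.038),
    ("15 CDO service providers", 624e6, 0.024),
    ("State Street Global Advisors", 300e6, 0.012),
]

implied_values = {}
for holder, value, share in figures:
    implied_values[holder] = value / share  # implied total market value
    print(f"{holder}: ~${implied_values[holder] / 1e9:.1f} billion implied")
```

All three data points imply a BlackRock market value of roughly $25 billion to $26 billion as of April 2011, so the reported holdings and percentages are mutually consistent.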
Overall, while our review indicated FRBNY devoted attention to conflict of interest matters involving assistance to AIG, FRBNY’s decision to rely on private firms for key assistance in designing and executing aid to the company introduced other challenges. For example, FRBNY established conflict of interest standards that permitted waivers, and it has granted a number of waiver requests. But because a system for tracking conflict waiver requests was not implemented until about 16 months after assistance began, FRBNY officials cannot provide a comprehensive account of such requests and their dispositions. Also, the relationships we identified among FRBNY, its advisors, and the AIG CDS counterparties raise questions in light of officials’ statements that one goal was to avoid continuing relationships with firms involved in AIG assistance. Given the time pressure of the financial crisis and FRBNY’s decision to rely upon private firms, FRBNY had to develop policies and procedures on an ad hoc basis. While FRBNY was attuned to conflict of interest issues, its procurement policy did not address vendor or other nonemployee conflicts of interest. As FRBNY officials told us, it is not necessarily clear at what point interrelations between parties become a matter for concern. In our recent report on the Federal Reserve System’s emergency lending programs, which included assistance to AIG, we found that the emergency programs brought FRBNY into new relationships with institutions that fell outside of its traditional lending activities, and that these changes created the possibility of conflicts of interest for vendors as well as for FRBNY employees. FRBNY used vendors on an unprecedented scale, both in the number of vendors and the types of services provided. 
FRBNY created a new vendor-management policy in May 2010, but we found that this policy is not sufficiently detailed or comprehensive in its guidance on steps FRBNY staff should take to help ensure vendor conflicts are mitigated. FRBNY staff have said that they plan to develop a documented policy that codifies practices FRBNY put in place during the crisis. The lack of a comprehensive policy for managing vendor conflicts, including relationships that cause competing interests, could expose FRBNY to greater risk that it would not fully identify and appropriately manage vendor conflicts of interest in the event of future crises. In that report, we recommended that FRBNY finalize this new policy to reduce the risks associated with vendor conflicts. FRBNY officials said they plan to document a more comprehensive policy for managing vendor conflict issues.

Initial Federal Reserve Lending Terms Were Designed to Be More Onerous than Private Sector Financing

FRBNY officials have said that when they provided the first assistance to AIG—the $85 billion Revolving Credit Facility—they adopted key terms of an unsuccessful private-sector lending package. Our review, however, found that the initial federal lending was considerably more onerous than the contemplated private deal. After accepting the terms of government lending—which included restrictions on some company activities—AIG reduced some investment activities but did not fail to meet any legal obligations, the company said.

The Revolving Credit Facility Was More Expensive than the Failed Private Loan Plan and Was Intended to Be Onerous

FRBNY officials told us that after an agreement could not be reached on private financing for AIG, they adopted key economic terms of the private-sector loan syndication plan for the Federal Reserve System’s initial assistance—the Revolving Credit Facility. 
Our review, however, showed that the terms of the FRBNY loan were more expensive in key respects and that the government intended them to be onerous. The initial cost of the Revolving Credit Facility created financial challenges for AIG and its ability to repay FRBNY. In response, the Federal Reserve System twice restructured its loan before the company fully repaid it in January 2011. According to both FRBNY officials and AIG executives, it was apparent at the time the Revolving Credit Facility was offered that restructuring would be necessary, although Federal Reserve Board officials told us that they believed the $85 billion credit facility had solved the company’s problems until economic conditions deteriorated further. FRBNY officials told us that some of the Revolving Credit Facility’s initial loan terms were different from those of the failed private-sector plan but that key economic terms, such as the interest rate and fees, were the same. FRBNY also stated publicly on its website that the interest rate was the same as the private-sector plan, and an FRBNY advisor also said that the credit facility’s terms were those that had been outlined in the private-sector plan. FRBNY officials told us that the Federal Reserve System used the private-sector terms because it did not have sufficient time to do otherwise prior to extending government aid, and that in the process, they took a signal from the private sector on what was appropriate in light of the risk. Given the situation, according to an FRBNY internal fact sheet, officials attempted to assess AIG’s situation and take into account the terms of the private-sector lending plan, before finalizing the FRBNY loan offer to the company. Our review, however, showed that key economic terms of the Revolving Credit Facility were more expensive than those of the private plan, until loan terms were subsequently modified. 
For example, as shown in table 8, the rate on drawn amounts was two percentage points higher, and the FRBNY loan included a fee on undrawn amounts, which the private-sector plan did not. Apart from the financial terms, the Revolving Credit Facility also provided a longer term than the private plan. In an e-mail sent to the then-FRBNY President about a month after the Revolving Credit Facility was authorized, an FRBNY official cited the interest rate as being high and expressed concern about the Federal Reserve Board imposing such a rate in approving the lending. In our review, FRBNY officials could explain only the increase in the base rate, from LIBOR plus 6.5 percentage points to LIBOR plus 8.5 percentage points. The officials said an advisor made that increase, on the theory that the loan had become more risky since the failed private-sector attempt. The rationale was that market turmoil had increased in the day before Federal Reserve Board approval of the loan, following the Lehman bankruptcy, and that it would be FRBNY alone, rather than a syndicate of lenders, that would extend the credit. Otherwise, the officials were unable to provide us with an explanation of how other original terms for the Revolving Credit Facility became more expensive, such as the undrawn amount fee. FRBNY officials also told us there were some reservations internally about the initial interest rate on the Revolving Credit Facility. As FRBNY officials described to us, the rate would be high whether AIG used the facility or not, reflecting the 8.5 percent rate on undrawn amounts. Despite internal concerns, there were no efforts to seek changes at the time the loan was approved, FRBNY officials said. 
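The gap between the two term sheets can be illustrated with a rough back-of-the-envelope calculation using figures reported in this section: the $85 billion commitment, the $62.5 billion AIG initially drew, a drawn rate of LIBOR plus 8.5 percentage points (versus LIBOR plus 6.5 under the private plan, with no undrawn fee), and the 8.5 percent rate on undrawn amounts. The LIBOR value below is a hypothetical placeholder, and the sketch ignores compounding, fees not described here, and changes in the drawn balance over time.

```python
# Illustrative annual carrying-cost comparison of the Revolving Credit
# Facility's original terms versus the failed private-sector plan.
COMMITMENT = 85e9   # total facility commitment
DRAWN = 62.5e9      # amount AIG drew within roughly 2 weeks
LIBOR = 0.03        # hypothetical 3 percent LIBOR, for illustration only

def annual_cost(drawn, commitment, spread, undrawn_fee):
    """Interest on the drawn balance plus a fee on the undrawn portion."""
    return drawn * (LIBOR + spread) + (commitment - drawn) * undrawn_fee

frbny_cost = annual_cost(DRAWN, COMMITMENT, 0.085, 0.085)   # FRBNY terms
private_cost = annual_cost(DRAWN, COMMITMENT, 0.065, 0.0)   # private plan

# Under these assumptions the FRBNY terms cost roughly $9.1 billion per
# year versus roughly $5.9 billion under the private plan.
print(f"FRBNY terms:   ${frbny_cost / 1e9:.2f} billion per year")
print(f"Private terms: ${private_cost / 1e9:.2f} billion per year")
```

Even on this simplified basis, the two-percentage-point spread difference plus the undrawn-amount fee add several billion dollars per year, consistent with the report's characterization of the terms as deliberately onerous.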
Although FRBNY officials could not fully explain the rate discrepancy we identified, they told us nonetheless that in general, they intended the original Revolving Credit Facility terms to be onerous, as a way to motivate AIG to quickly repay FRBNY and to give AIG an incentive to replace the government lending with private financing. The former FRBNY President, without reconciling the changed lending terms, told us that FRBNY provided for appropriately tough conditions on AIG. An FRBNY advisor also described the terms as onerous and said the market recognized them as such. Similarly, as noted, AIG initially objected to the terms, in particular, the interest rate and the 79.9 percent equity stake the company gave up. Many of the terms of the Revolving Credit Facility resembled those of bankruptcy financing, FRBNY officials said, and their objective was to devise terms that reflected the company’s condition, the nature of its business, and the large exposure the government faced. According to the officials, they had to balance AIG’s need to maintain its daily business operations against the exposure FRBNY faced with its loan and the contemplated source of repayment, namely asset sales. The officials said they also constructed the economic terms based on what private-sector lenders would have considered appropriate for the risk involved. An AIG advisor characterized the loan as aggressive and unprecedented, but said AIG was in a price-taking position, and that notwithstanding the high cost, the loan nevertheless allowed AIG to survive. In addition to the economic terms highlighted in table 8, the credit agreement for the Revolving Credit Facility also imposed a number of affirmative and negative covenants, or obligations. Under the terms of an accompanying security agreement, AIG granted a lien against a substantial portion of its assets, including its equity interests in its regulated U.S. and foreign subsidiaries. 
AIG’s insurance subsidiaries did not pledge any assets in support of the facility, as noted in a Federal Reserve System internal fact sheet, and the subsidiaries themselves did not act as guarantors of the loan. This arrangement was established because officials wanted to better ensure that AIG’s insurance subsidiaries would be well capitalized and solvent, according to the fact sheet. The agreements did not require AIG’s foreign subsidiaries to become guarantors, according to FRBNY. The credit agreement also stipulated repayment of FRBNY’s loan with proceeds from asset sales or the issuance of new debt or equity. In addition, officials told us there were other restrictions barring AIG from making large capital expenditures or providing seller financing on asset sales without FRBNY’s consent. Finally, the agreement also included a negative covenant that provided protection for the government on how AIG could use the government’s TARP equity investment. FRBNY officials told us the loan structure proved durable and achieved its purpose of providing AIG with needed liquidity while protecting FRBNY’s position as a creditor. In a secured lending facility such as the Revolving Credit Facility, it is not unusual to negotiate a range of restrictions to protect the lender, the officials said. Nonetheless, the structure created challenges for AIG shortly after its creation. Concerns remained, for example, about the level of AIG’s debt, the rate on the Revolving Credit Facility, and the company’s ability to sell off assets to repay the lending. FRBNY officials told us that the amount AIG initially withdrew from the Revolving Credit Facility ($62.5 billion) and how quickly it did so (slightly more than 2 weeks) demonstrated the depth of the company’s problems. Thus, rating agency concerns were not unexpected, although officials said they were surprised by how quickly those concerns arose. 
In addition, Federal Reserve Board staff comments cited an issue with the loan, namely, that it required AIG to use proceeds of the Revolving Credit Facility to meet preexisting liquidity needs and not for investment in assets that would generate returns. Thus, as officials told us, rather than repaying FRBNY from productive activities funded by the loan, AIG had to repay the Revolving Credit Facility by selling assets. This requirement ultimately proved difficult to fulfill given the challenges AIG faced in carrying out its asset-sales plan. FRBNY and AIG both told us they understood at the time the Revolving Credit Facility was established that it was only an interim solution and that additional assistance, or restructuring of the assistance, would be required. According to FRBNY officials, the Revolving Credit Facility was a necessary step to forestall AIG’s immediate problems, and the loan gave them time to consider more targeted solutions. FRBNY officials also highlighted the uncertainties that remained after the initial loan, including the condition of the broader economy, as well as the reactions of AIG’s counterparties to Federal Reserve System assistance. In particular, AIG’s securities lending counterparties were terminating their contracts, resulting in increased draws on the Revolving Credit Facility early on. According to AIG executives, while the Revolving Credit Facility addressed the company’s immediate liquidity problems, it also created an unsustainable situation, given the company’s high debt levels, downward pressure on credit ratings, and illiquid markets in which to sell assets. While FRBNY and AIG considered the need for additional government assistance immediately after the Revolving Credit Facility, Federal Reserve Board officials told us that a number of factors accounted for why the Federal Reserve Board determined restructuring became necessary only after economic conditions worsened following authorization of the initial lending. 
According to Federal Reserve Board officials, markets continued to deteriorate in October and November 2008, resulting in increased cash demands from AIG and heightened prospects for a downgrade. Market conditions worsened more than they expected, officials noted, making it necessary to revisit the terms of the Revolving Credit Facility. In particular, it was important at that point to make the interest rate less burdensome. As noted, the Federal Reserve System twice restructured the terms of the Revolving Credit Facility in order to, among other things, improve AIG’s capital structure and enhance the company’s ability to conduct its asset sales plan. As shown in table 8, the November 2008 restructuring included reductions in the interest rate and the undrawn amount fee, as well as an extension of the loan’s maturity. According to an FRBNY internal fact sheet from November, the lower interest rate and commitment fee on undrawn amounts reflected AIG’s stabilized condition and outlook following Treasury’s $40 billion TARP investment in preferred stock. In addition, according to the fact sheet, the Federal Reserve Board extended the loan’s maturity in order to provide AIG with additional time to sell assets and to repay FRBNY with the proceeds. The restructuring also reduced AIG’s degree of indebtedness and improved its ability to cover interest payments, the fact sheet said, which were key measures for the marketplace and rating agencies in assessing AIG’s future risk. FRBNY’s commitment to lend to AIG under the Revolving Credit Facility was reduced to $60 billion. The March 2009 restructuring included, as noted, a further reduction of the amount available under the Revolving Credit Facility. As part of this restructuring, FRBNY received preferred interests in two SPVs created to hold all of the outstanding common stock of two life insurance holding company subsidiaries of AIG. 
In addition, officials eliminated the LIBOR floor on the interest rate for the Revolving Credit Facility, potentially reducing the cost of the loan. Following these changes, the amount available to AIG under the Revolving Credit Facility was further reduced. On January 14, 2011, FRBNY announced full repayment of the Revolving Credit Facility and exchange of the 79.9 percent controlling equity interest in AIG for common stock. FRBNY officials told us repayment of the loan was, as expected, the product of AIG asset sales.

After Accepting the Federal Reserve’s Loan Terms, AIG Says It Restricted Some Investment Activities but Otherwise Stayed Current on Obligations

After the Federal Reserve Board approved assistance for AIG, questions arose about the company’s treatment of financial counterparties and its ability to meet its obligations. We examined this issue from the standpoint of whether, after receiving federal aid, AIG failed to perform on legally required obligations. FRBNY officials said that while they monitored company activities as part of oversight following the rescue, they did not direct AIG on how to treat its counterparties, and company executives told us they did not fail to honor existing obligations. However, AIG executives told us that the company did reduce its investments in certain projects. As noted previously, AIG’s loan agreements imposed a number of restrictions (negative covenants) on the company’s activities. For example, the credit agreement for the Revolving Credit Facility generally barred the company from creating or incurring new indebtedness. It also placed restrictions on payment of dividends and on capital expenditures greater than $10 million. In addition, FRBNY officials told us other restrictions arose from the credit agreement, as amended, although they were not explicitly contained in the agreement. 
For instance, the AIG parent company ordinarily could inject capital into subsidiaries that were not guarantors of FRBNY’s loan without FRBNY’s consent. However, FRBNY officials said they had concerns about funds going to AIGFP. Thus, according to the officials, in a separate letter agreement with the company, they required that any loan, advance, or capital contribution to AIGFP would require consent. Apart from the loan agreements and related items, the Federal Reserve System and Treasury did not place any additional limitations on AIG’s activities or its use of cash, such as the ability to make loan payments or to fulfill previously committed obligations, company executives told us. Similarly, short of actual restrictions, the Federal Reserve System and Treasury did not impose any limitations that caused AIG to forego activities it otherwise would have undertaken, the executives said. AIG executives also told us that AIG did not act, or fail to act, due to restrictions arising from federal aid. More specifically, the executives said AIG has not failed to perform any legally required obligations to parties such as creditors, joint venture partners, and other counterparties. In particular, AIG’s credit agreement with FRBNY stipulates that AIG is not to be in default of contractual obligations, the executives said. However, the AIG executives distinguished between the obligations described in the previous paragraphs and investment-based decisions not to make additional contributions of capital to certain projects, or to discontinue payments on certain projects and allow lenders to foreclose on them, so that the lenders took over the projects under terms of lending agreements. AIG has made such business decisions, involving a number of projects, when it judged them to be in the best interest of the company, its stakeholders, and FRBNY as AIG’s lender, the executives told us. 
They said that in such instances, AIG has not had any obligation to continue funding under any contract and had the ability to make payments if it chose to do so. Citing one real estate development project as an example, the executives characterized the situation as a bad real estate decision by the banks involved. FRBNY became involved in ongoing AIG business activities by attending meetings of steering committees AIG set up in certain business units, as one way to obtain information officials felt was necessary to inform judgments FRBNY needed to make under the credit agreements, FRBNY officials told us. For instance, FRBNY would ask for information to understand the company’s risk position or utilization of proceeds from government lending. However, FRBNY did not substitute its judgment for company executives’ judgment, officials told us, and did not direct AIG’s activities. Instead, FRBNY officials told us they focused on issues of interest as a creditor to the company and, as such, would probe company assumptions or analyses. Officials told us that although they did not exercise control, in some instances, AIG reconsidered ideas after discussions with FRBNY. FRBNY never indicated whether AIG should not pay a particular lender or counterparty, officials told us. Instead, FRBNY’s interest was broader and involved evaluating whether a proposed use of capital made sense from a broad context and in light of competing demands for capital, they said. FRBNY encouraged AIG to make decisions based on economics, which sometimes was at odds with narrower interests of managers in particular business units, FRBNY officials said. AIG executives characterized this FRBNY review of its corporate initiatives as constructive, typical of a creditor-borrower relationship, and said they could not recall an instance when AIG wanted to pursue a course that they believed made good business sense but FRBNY did not agree. 
The AIG Crisis Offers Lessons That Could Improve Ongoing Regulation and Responses to Future Crises

As with past crises, the Federal Reserve System’s experience with assisting AIG offers insights that could help guide future government action, should it be warranted, and improve ongoing oversight of systemically important financial institutions. Already, the Dodd-Frank Act seeks to broadly apply lessons learned from the financial crisis in a number of regulatory and oversight areas. For example, the act contains oversight provisions in the areas of financial stability, depository institutions, securities, brokers and dealers, and financial regulation. In addition, our review of Federal Reserve System assistance to AIG has identified other areas where lessons learned could be applied: identifying ways to ease time pressure in situations that require immediate response, analyzing collateral disputes to help identify firms that are coming under stress, and conducting scenario stress testing to anticipate different impacts on the financial system.

Actions Could Be Taken Earlier to Reduce Time Pressure

As discussed earlier, time pressure was an important factor in Federal Reserve System decision making about aid to AIG. For example, the Federal Reserve Board made its initial decision on the Revolving Credit Facility against the urgency of expected credit rating agency downgrades in mid-September 2008, which would have imposed significant new liquidity demands on the company. Similarly, FRBNY chose among ML III design alternatives based largely on what could be done quickly. Time pressure also played a key role in decisions about whether federal aid was appropriate. As noted, the Federal Reserve Board’s emergency lending authority under section 13(3) of the Federal Reserve Act was conditioned on the inability of borrowers to secure adequate credit from other banking institutions. 
In AIG’s case, the company and the Federal Reserve System sought to identify private financing over several days in September 2008 leading up to the first offer of government aid to the company. But entities contemplating providing financing to AIG said the process forced them to compress what ordinarily would be weeks’ worth of due diligence work into only days. As the scope of the financial crisis and AIG’s situation evolved, potentially large investments were being considered in an environment of uncertain risk. When FRBNY stepped in to try to arrange bank financing—at which point AIG’s identified financial need had grown substantially—there was even less time to act, and the Federal Reserve Board quickly moved to extend its offer of assistance. While unforeseeable events can occur in a crisis, easing time pressure could aid future government decision making and the process of seeking private financing. In AIG’s case, the Federal Reserve System could have eased time pressure two ways. First, it could have begun the process of seeking or facilitating private financing sooner than it did—the day before the Federal Reserve Board approved the Revolving Credit Facility—as warning signs became evident in the months before government intervention. Second, given the warning signs, it could have compiled information in advance to assist would-be investors or lenders. Potential private-sector financiers told us the process would have benefited from both more time and information. An example of the kind of information that would be useful in a crisis can be seen in recent rulemaking by the Federal Reserve Board and the Federal Deposit Insurance Corporation. As part of Dodd-Frank Act implementation, the two agencies proposed that large, systemically significant bank holding companies and nonbank financial companies submit annual resolution plans and quarterly credit exposure reports. 
A resolution plan would describe the company’s strategy for rapid and orderly resolution in bankruptcy during times of financial distress. A company would also be required to provide a detailed listing and description of all significant interconnections and interdependencies among major business lines and operations that, if disrupted, would materially affect the funding or workings of the company or its major operations. The credit exposure report would describe the nature and extent of the company’s credit exposure to other large financial companies, as well as the nature and extent of the credit risk posed to others by the company. Such information was of interest to those contemplating providing financing to AIG ahead of federal intervention, as well as to government officials themselves. This information could also benefit ongoing regulation of financial entities, whether by the Federal Reserve System or other financial regulators, but it could be of particular benefit to the Federal Reserve System, given its broad role in maintaining the stability of the financial system. Such efforts could also improve the quality of information that the Financial Stability Oversight Council is now charged with collecting from, among others, financial regulatory agencies, pursuant to the Dodd-Frank Act. Under terms of the legislation, the Federal Reserve Board Chairman is a member of the Financial Stability Oversight Council, whose purpose is to identify risks to financial stability that could arise from distress, failure, or ongoing activities of large, interconnected bank holding companies or nonbank financial companies; promote market discipline; and respond to emerging threats to the stability of the U.S. financial system. The law created an Office of Financial Research within Treasury to support the Council and its member agencies. 
Analyzing Collateral and Liquidity Issues Could Help Identify Warning Signs

Requirements to post collateral figured prominently in the difficulties in AIGFP’s CDS business that spurred the creation of ML III. Leading up to government intervention, AIG was in dispute with some of its counterparties on the amount of collateral the company was required to post with them under terms of AIG’s CDS contracts. A number of the counterparties told us that they were in disagreement with AIG over billions of dollars of collateral they claimed the company owed them. For example, one counterparty told us it had contentious discussions with AIG over collateral, and another said it made multiple unsuccessful demands for payment. Records we reviewed also indicated that market mechanisms for valuing assets had seized up, which AIG told us contributed to the disagreements over the amount of collateral to be posted. This experience suggests that identifying, monitoring, and analyzing collateral issues may offer opportunities for enhancing regulators’ market surveillance or developing warning signs that firms are coming under stress. A large AIG CDS counterparty told us that it was not clear that regulators appreciated the significance of collateral disputes involving the company. Collateral disputes can be a warning sign and usually involve valuation conflicts. While regulators generally are expected to look for such things as fraud and problems in economic modeling, whether they are attuned to looking closely at collateral disputes and the warnings they might yield is not clear, the counterparty said. In AIG’s case, the duration of the dispute and sharply differing views of values were unusual, the counterparty said. The idea of tracking collateral issues is gaining some attention among financial regulators. 
For example, the Financial Industry Regulatory Authority has recently issued guidance for broker-dealers that lists “notable increases in collateral disputes with counterparties” among factors that could be warning flags for funding and liquidity problems. More sophisticated monitoring of financial firms’ liquidity positions could likewise be valuable, a former Treasury official who was involved in AIG assistance told us. Proper assessment of liquidity requires not just knowing how much cash is available, the former official said, but also the amount of cash a firm would have available in the event that all parties with the potential to make calls on the firm were to do so. In AIG’s case, neither the company nor regulators understood the situation in this way, but this kind of assessment should be an essential part of future regulatory oversight, the former official said.

Scenario Stress-Testing Could Increase Analytical Insights

In general, risk analysis that involves thoughtful stress testing can allow for better-informed and more timely decision making. For example, in evaluating elements of federal assistance to AIG, FRBNY and an advisor analyzed expected performance and outcomes under varying conditions of economic stress. Similarly, we reported on the Supervisory Capital Assessment Program that was established through TARP, which assessed whether the 19 largest U.S. bank holding companies had enough capital to withstand a severe economic downturn. Led by the Federal Reserve Board, federal bank regulators conducted stress tests to determine if these banks needed to raise additional capital. These experiences underscore the value of stress testing generally, and the particular circumstances of AIG’s difficulties suggest an opportunity to expand and refine such testing in order to better anticipate stress in the financial system. 
In AIG’s case, FRBNY officials cited the company’s financial interconnections and the multifaceted nature of the financial crisis as contributing to the need for federal assistance. Similarly, the Federal Reserve Board Chairman has highlighted the risks presented by large, complex, and highly interconnected financial institutions. More sophisticated stress testing that incorporates comprehensive measures of financial interconnectedness and different crisis scenarios could offer the opportunity to study expected outcomes of financial duress, not only for a single institution but for a range of institutions as well. Such testing could allow regulators to better understand the potential systemic impacts of crises or actions, which, among other things, could help them in their new role to monitor systemic risk under the Dodd-Frank Act. The Dodd-Frank Act requires annual or semiannual stress testing by the Federal Reserve Board or financial companies themselves, according to type of institution and amount of assets. The AIG experience underscores the importance of interconnectedness in such analysis.

Agency and Third Party Comments and Our Evaluation

We provided a draft of this report to the Federal Reserve Board for its review and comment, and we received written comments that are reprinted in appendix II. In these comments, the Federal Reserve Board generally agreed with our approach and results in examining the Federal Reserve System’s involvement with AIG within the context of the overall financial crisis at the time, and it endorsed the lessons learned that we identified in our work. Regarding regulators taking earlier action to reduce time pressure during a crisis, the Federal Reserve Board stated that it has established a new division to focus on market pressures and developments that may create economic instability, and is otherwise working to identify threats to financial stability. 
Regarding the opportunity that collateral disputes may offer for enhancing regulators’ market surveillance or for developing warning signs that firms are coming under stress, the Federal Reserve Board stated that it is working with other financial regulators to implement changes in supervision and regulation of derivatives markets, including requirements governing collateral posting. Regarding the notion that risk analysis that involves thoughtful stress testing—especially focusing on interconnections among institutions—can allow for better-informed and more timely decision making, the Federal Reserve Board stated that it has begun development of an annual stress testing program for large financial firms within its supervisory purview. In response to our findings that Federal Reserve System assistance to AIG gave rise to overlapping interests and complex relationships among the various parties involved, the Federal Reserve Board said it is exploring opportunities to improve its approach to potential or actual conflicts of interest that can arise from such interests and relationships. The Federal Reserve Board and FRBNY also provided technical comments, which we have incorporated as appropriate. In addition, we provided a draft of this report to Treasury for review and comment, and we also provided relevant portions of the draft to AIG, SEC, and selected others for their review and comment. We have incorporated comments from these third parties as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the Chairman of the Federal Reserve Board, interested congressional committees, and others. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or williamso@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

To examine the sequence of events and key participants as critical decisions were made to provide federal assistance to American International Group, Inc. (AIG), we reviewed a wide range of AIG-related documents. We obtained these documents primarily from the Board of Governors of the Federal Reserve System (Federal Reserve Board) and the Federal Reserve Bank of New York (FRBNY), including records they have provided to Congress. These documents included e-mails, information relating to options and plans for aiding AIG, research, memorandums, financial statements, and other items. We also obtained information from congressional testimonies of the former FRBNY President and officials of the Federal Reserve Board, FRBNY, the former Secretary of the Department of the Treasury (Treasury), and former AIG executives. In addition, we reviewed Federal Reserve Board and FRBNY announcements, presentations, and background materials. We also reviewed our past work and the work of others who have examined the government’s response to the financial crisis, including the Congressional Oversight Panel, the Special Inspector General for the Troubled Asset Relief Program (SIGTARP), and the Financial Crisis Inquiry Commission. We conducted interviews with many of those involved in federal assistance to AIG, to obtain information on their participation in the events leading up to federal assistance for AIG, as well as their perspectives on the condition of AIG and the financial markets at the time. 
From the regulatory sector, we interviewed Federal Reserve Board and FRBNY officials, a former Federal Reserve Board Governor, a Reserve Bank President, current and former officials from state insurance regulatory agencies, SIGTARP staff, current and former Treasury officials, and an official of the Federal Home Loan Bank system. From the private sector, we interviewed current and former AIG executives, representatives from FRBNY advisors, an AIG advisor, AIG business counterparties, credit rating agencies, potential private-sector financiers, and academic and finance experts. In addition, we obtained written responses to questions from the former Office of Thrift Supervision, the former FRBNY President, and a former senior Treasury official. To examine decisions involving the selection and structure of the Maiden Lane III vehicle (ML III), we obtained and reviewed relevant documents from the Federal Reserve Board, FRBNY, and others, as noted earlier. In addition, we reviewed filings submitted by AIG to the Securities and Exchange Commission (SEC). We also conducted interviews with parties identified earlier. In addition, we obtained written responses to questions from the Autorite de Controle Prudentiel, a French banking regulator. We analyzed the information obtained from documents and interviews to identify the options for assistance considered by Federal Reserve System officials. We followed up with Federal Reserve System officials to understand their rationale for selecting the as-adopted ML III vehicle. To determine the extent to which FRBNY pursued concessions from the counterparties, we interviewed Federal Reserve Board and FRBNY officials and 14 of the 16 counterparties that participated in ML III. Bank of America and Merrill Lynch were unable to provide information on the concession issue. 
To examine the extent to which key actions taken were consistent with relevant law or policy, we reviewed AIG-related documents indicated earlier to identify key actions taken. More specifically, to understand the Federal Reserve Board’s authority to provide emergency assistance to nondepository institutions and related documentation issues, we reviewed legislation including the Federal Reserve Act of 1913, as amended, and the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. We interviewed Federal Reserve Board officials to obtain their interpretation of the Federal Reserve Board’s authority. Further, to determine FRBNY’s involvement in AIG’s securities disclosures on the federal assistance, we reviewed relevant SEC records and interviewed SEC officials. Relevant documents we reviewed included e-mails, memorandums, disclosure filings, regulations and procedures, and material connected with AIG’s request for confidential treatment of ML III-related information. Finally, to evaluate the effectiveness of FRBNY policies and practices for managing conflicts of interest involving the firms that provided services to FRBNY, we reviewed FRBNY vendor agreements and FRBNY’s Operating Bulletin 10, which address procurement issues, as well as FRBNY’s employee Code of Conduct. We also reviewed documentation of on-site reviews of advisor and vendor firms and obtained documentation related to waivers granted to the firms. To determine relations among companies involved with ML III, we obtained and analyzed equity stock holdings data for the firms. We conducted interviews with a number of the parties indicated earlier—in particular, with Federal Reserve Board officials, FRBNY officials and advisors, SEC officials, a representative of the SEC Inspector General’s office, AIG executives, AIG counterparties, and academic experts. 
To examine criteria used to determine the terms for key assistance provided to AIG, we reviewed AIG-related documents indicated earlier, to understand the nature of the assistance and the terms. We compared the terms of a contemplated private-sector loan syndication deal with the original terms for FRBNY’s Revolving Credit Facility, and we also discussed differences between the two sets of terms with FRBNY officials. To review AIG’s treatment of various creditors and other significant parties after receiving federal assistance, we reviewed the FRBNY credit agreement, as amended, to understand the restrictions that were applied to AIG. To obtain information on FRBNY’s involvement in AIG’s decisions on meeting obligations and making investments, we conducted interviews with FRBNY officials, AIG executives, and those involved in AIG-supported real estate development projects. To identify lessons learned from AIG assistance, we relied generally on our analysis of information obtained from all the sources cited earlier and comments obtained from a number of interview subjects. We inquired generally about what the process of providing assistance to AIG might suggest for any future government interventions, as well as specifically about such matters as reducing time pressure in critical decision making and improving analytical insights into conditions at individual financial institutions and in financial markets at large. We conducted this performance audit from March 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Comments from the Board of Governors of the Federal Reserve System

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Karen Tremba, Assistant Director; Tania Calhoun; Daniel Kaneshiro; Marc Molino; Brian Phillips; Christopher H. Schmitt; Jennifer Schwartz; Wade Strickland; and Gavin Ugale made major contributions to this report.
In September 2008, the Board of Governors of the Federal Reserve System (Federal Reserve Board) approved emergency lending to American International Group, Inc. (AIG)--the first in a series of actions that, together with the Department of the Treasury, authorized $182.3 billion in federal aid to assist the company. Federal Reserve System officials said that their goal was to avert a disorderly failure of AIG, which they believed would have posed systemic risk to the financial system. But these actions were controversial, raising questions about government intervention in the private marketplace. This report discusses (1) key decisions to provide aid to AIG; (2) decisions involving the Maiden Lane III (ML III) special purpose vehicle (SPV), which was a central part of providing assistance to the company; (3) the extent to which actions were consistent with relevant law or policy; and (4) lessons learned from the AIG assistance. To address these issues, GAO focused on the initial assistance to AIG and subsequent creation of ML III. GAO examined a large volume of AIG-related documents, primarily from the Federal Reserve System--the Federal Reserve Board and the Federal Reserve Bank of New York (FRBNY)--and conducted a wide range of interviews, including with Federal Reserve System staff, FRBNY advisors, former and current AIG executives, AIG business counterparties, credit rating agencies, potential private financiers, academics, finance experts, state insurance officials, and Securities and Exchange Commission (SEC) officials. Although GAO makes no new recommendations in this report, it reiterates previous recommendations aimed at improving the Federal Reserve System's documentation standards and conflict-of-interest policies. 
While warning signs of the company's difficulties had begun to appear a year before the Federal Reserve System provided assistance, Federal Reserve System officials said they became acutely aware of AIG's deteriorating condition in September 2008. The Federal Reserve System received information through its financial markets monitoring and ultimately intervened as the possibility of bankruptcy became imminent. Efforts by AIG and the Federal Reserve System to secure private financing failed after the extent of AIG's liquidity needs became clearer. Both the Federal Reserve System and AIG considered bankruptcy issues, although no bankruptcy filing was made. Due to AIG's deteriorating condition in September 2008, the Federal Reserve System said it had little opportunity to consider alternatives before its initial assistance. As AIG's troubles persisted, the company and the Federal Reserve System considered a range of options, including guarantees, accelerated asset sales, and nationalization. According to Federal Reserve System officials, AIG's credit ratings were a critical consideration in the assistance, as downgrades would have further strained AIG's liquidity position. After the initial federal assistance, ML III became a key part of the Federal Reserve System's continuing efforts to stabilize AIG. With ML III, FRBNY loaned funds to an SPV established to buy collateralized debt obligations (CDO) from AIG counterparties that had purchased credit default swaps from AIG to protect the value of those assets. In exchange, the counterparties agreed to terminate the credit default swaps, which were a significant source of AIG's liquidity problems. As the value of the CDO assets, or the condition of AIG itself, declined, AIG was required to provide additional collateral to its counterparties. 
In designing ML III, FRBNY said that it chose the only option available given constraints at the time, deciding against plans that could have reduced the size of its lending or increased the loan's security. Although the Federal Reserve Board approved ML III with an expectation that concessions would be negotiated with AIG's counterparties, FRBNY made varying attempts to obtain these discounts. FRBNY officials said that they had little bargaining power in seeking concessions and would have faced difficulty in getting all counterparties to agree to a discount. While FRBNY took actions to treat the counterparties alike, the perceived value of ML III participation likely varied by the size of a counterparty's exposure to AIG or its method of managing risk. While the Federal Reserve Board exercised broad emergency lending authority to assist AIG, it was not required to, nor did it, fully document its interpretation of its authority or the basis of its decisions. For federal securities filings that AIG was required to make, FRBNY influenced the company's disclosures about federal aid but did not direct AIG on what information to disclose. In providing aid to AIG, FRBNY implemented conflict-of-interest procedures and granted a number of waivers, many of which were conditioned on the separation of employees and information. A series of complex relationships grew out of the government's intervention, involving FRBNY advisors, AIG counterparties, and others, which could expose FRBNY to greater risk that it would not fully identify and appropriately manage conflict issues and relationships.
Background DOD is currently implementing several major force structure and basing initiatives that are expected to result in a large number of personnel movements and changes in the size and shape of its domestic installation infrastructure. First, under the 2005 BRAC round, DOD is implementing 182 recommendations, as set forth by the Base Closure and Realignment Commission, which must be completed by the statutory deadline of September 15, 2011. Through the BRAC process, DOD intends to transform its departmentwide installation infrastructure and, as such, the recommendations have an unusually large number of realignment actions that are expected to result in significant personnel movements across DOD’s installations. Second, under the Global Defense Posture Realignment, DOD is realigning its overseas basing structure to more effectively support current allies and strategies in addition to addressing emerging threats. Included in this rebasing effort is the expected return of about 70,000 military and civilian personnel to the United States by 2011. Third, the Army is also undergoing major force restructuring in implementing its force modularity effort, which has been referred to as the largest Army reorganization in 50 years. The foundation for the modular force is the creation of brigade combat teams that are expected to be more agile and deployable to better meet combatant commander requirements. Finally, DOD has recently initiated a Grow the Force initiative intended to permanently increase the end strength of the Army and Marine Corps by 74,000 soldiers and 27,000 marines, respectively, to enhance overall U.S. forces, reduce stress on deployable personnel, and provide necessary forces for success in the Global War on Terrorism. 
The simultaneous implementation of these initiatives is generating large personnel increases at many military installations within the United States, which, in turn, is affecting the communities in close proximity to those installations. As of January 2008, OEA was assisting 20 communities surrounding growth installations, based on direct DOD impacts in light of community-specific needs and resources. Figure 1 shows those impacted locations. As indicated in figure 1, most of the growth locations are attributable to the Army, which is affected more than any other military service by force structure and basing initiatives. As shown in table 1, available DOD data indicate that these 20 installations are expecting a combined net growth of over 173,000 military and civilian personnel over fiscal years 2006-2012, not counting family members and nonmission-related contractors who are also expected to relocate to the surrounding communities and generate additional community infrastructure needs. It should be noted that these estimates are based on planned personnel movement actions as of March 2008 and are subject to change over time, as a number of factors, such as revisions in operational plans associated with the Global War on Terrorism, may give cause for estimate revisions. As table 1 shows, the vast majority of the community locations predicted to be most affected by DOD growth surround Army installations, with Fort Bliss, Fort Belvoir, Fort Riley, and Fort Lee expected to experience personnel growth rates of more than 50 percent over fiscal years 2006-2012. Moreover, while Fort Knox, Kentucky, and Cannon Air Force Base, New Mexico, are actually expected to incur overall losses in personnel between fiscal years 2006 and 2012, OEA has identified growth challenges for their surrounding communities and therefore treats them as growth locations.
For example, the Fort Knox population is changing from mostly military students living on base to a civilian population living off base, creating new growth demands on the surrounding community’s infrastructure and services. Moreover, because the growth estimates displayed in table 1 exclude dependents associated with military and civilian personnel movements as well as support contractors who may elect to relocate to these growth locations, these estimates do not represent total growth at these locations. As shown in table 2, available military projections for increases in the number of dependents at these locations over fiscal years 2006-2012 currently exceed 168,000. The Army has reported significant dependent growth for the communities surrounding Fort Bliss, Fort Belvoir, Fort Riley, Fort Knox, Fort Lee, and Fort Carson, each of which is expected to experience a greater than 50 percent increase in the number of military dependents. It should be noted that the Army dependent numbers are currently being reviewed by some communities and the Department of Education, as described later in this report. It should also be noted that even with the best estimates, the number of dependents who will actually relocate, and when, is not certain due to a number of factors, such as the timing and duration of military personnel’s next overseas deployments. In addition to the growth estimates depicted in tables 1 and 2, the communities surrounding growth installations can expect additional personnel growth from indirect economic development, such as employment opportunities created by defense support contractors. Based on a series of presidential executive orders dating back to 1978, most recently amended in May 2005, it has been long-standing policy that DOD take the leadership role within the federal government in helping communities respond to the effects of defense-related activities.
The current version of the executive order, which is included in appendix III, states that the Secretary of Defense, through the EAC, shall, among other things, establish a Defense Economic Adjustment Program to assist communities substantially and seriously affected by major defense closures and realignments. The order identifies the 22 federal agency members of the EAC and names the Secretary of Defense or the Secretary’s designee as the Chair of the committee, with the Secretaries of Labor and Commerce as co-vice chairs. The order states that the EAC shall advise, assist, and support the program and develop procedures for ensuring that state and local officials are notified of available federal economic adjustment programs. The order further states that the program shall, among other things, identify problems of states and communities that result from defense-related activities and that require federal assistance; assure timely consultation and cooperation with federal, state, and community officials concerning DOD-related impacts; assure coordinated interagency and intergovernmental adjustment assistance; prepare, facilitate, and implement cost-effective strategies and action plans to coordinate interagency and intergovernmental economic adjustment efforts; and serve as a clearinghouse for exchanging information among federal, state, and community officials involved in the resolution of community economic adjustment problems, including sources of public and private financing. The order also states that all federal executive agencies shall afford priority consideration to requests from defense-affected communities for federal assistance that are part of a comprehensive plan used by the committee. OEA, located in Arlington, Virginia, is a DOD field activity that reports to the Deputy Under Secretary of Defense for Installations and Environment, under the Under Secretary of Defense for Acquisition, Technology, and Logistics.
OEA is responsible for facilitating the use of DOD resources in support of local programs and for providing direct planning and financial assistance to communities and states seeking help in addressing the impacts of DOD’s actions. The office has a fiscal year 2008 budget exceeding $57 million, $45 million of which funds its core programs—which include assistance to closing and growing locations—and a staff of 35 civilians and 3 military liaisons. Currently, OEA is managing about 240 community projects involving closing, downsizing, and growing bases. OEA assistance to growth communities is primarily focused on helping local communities organize and plan for population growth due to DOD activities. Growth Communities Have Begun to Identify Infrastructure Needs, but Planning Has Been Hampered by a Lack of Consistent and Detailed Information about DOD Personnel Movements Communities surrounding DOD growth installations have begun to identify infrastructure needs in general terms, but planning efforts have been hampered by a lack of consistent and detailed information about anticipated DOD personnel movements. Due to the complexity of DOD’s current growth activities, coupled with ongoing operations in Iraq and Afghanistan, precise data about the magnitude and makeup of personnel movements continue to evolve. However, until the military departments begin to disseminate consistent and more detailed information about the defense personnel moves they know about, it will be difficult for community, state, and federal officials to plan for and provide necessary infrastructure and quality-of-life support to members of the armed services, their families, and other community residents.
Communities Have Begun to Plan for Expected Growth Many of the 20 communities that OEA has determined will be substantially and seriously affected by DOD growth have begun planning and taking action on projects and programs that will help them accommodate the expected influx of military and civilian personnel, military families, and contractors over the next several years. DOD’s Base Redevelopment and Realignment Manual states that mission and personnel increases at military installations can place direct and significant demands on surrounding community infrastructure and services. It further notes that large, rapid influxes of personnel and changes in missions create the need for an immediate partnership between community leaders and installation leaders to manage the changes. Coordinated management of change provides an opportunity to minimize the negative effects on the community while enhancing the long-term quality of life for defense personnel and community residents. Among other things, communities must prepare roads, schools, and other infrastructure to accommodate the expected growth, which can require significant lead time to plan, budget for, finance, and construct. According to our survey of 20 growth communities, 18 have established planning processes to engage local stakeholders to consider potential community impacts, determine priorities, and ultimately develop an action plan. Although all communities are different and are in various planning stages, most of these growth communities have begun developing growth management plans, which are used to identify specific infrastructure improvements, such as roads, schools, and housing, that may be required to support the expected growth. Of the 20 communities, 3 completed growth management plans by the end of 2007 and 13 had started plans—the majority of which are scheduled to be completed by the end of 2008.
Two of the remaining 4 communities have opted not to develop a growth management plan and instead are proceeding with studies targeted toward issues that are already apparent. For example, the community surrounding Fort Belvoir, where traffic congestion has been identified as an issue, will be using its OEA planning grant to develop transportation models. At the time of our review, the communities surrounding Marine Corps Base Camp Lejeune and Marine Corps air stations New River and Cherry Point in eastern North Carolina were in the early stages of establishing a community planning organization and were expected to apply for OEA planning assistance soon. Based on our survey, coupled with our analysis of community profiles prepared by the growth communities for OEA’s December 2007 Growth Summit, we found that transportation, schools, and housing were identified by the communities as their top growth-management issues. When asked to report their top infrastructure challenges, 16 of the 20 communities cited transportation, principally roads. Insufficient school capacity was named by 11 communities. Six communities said affordable housing was a major challenge. Other issues that were identified by at least 1 growth community included water and sewerage, health services, workforce development, child care, spousal employment, law enforcement, and emergency services. Figure 2 illustrates our analysis of the top issues identified by 2 or more of the 20 growth communities. In the summary profiles prepared for the OEA Growth Summit, the communities described some of the impacts these issues would have if they were not addressed prior to the arrival of the new personnel. The impacts ranged from increased usage of and congestion on local roads to concerns about the adequacy of schools and the quality of health care facilities, which are likely to be stretched to accommodate the expected increase in demand.
Communities also expressed concerns about obtaining funding to implement the plans that call for new infrastructure to be built in order to accommodate expected growth. Funding issues are discussed later in this report. Precise Planning Efforts Have Been Hampered by a Lack of Consistent and Complete Information about Military Growth Although communities have made progress in planning for growth in general terms, community planners told us that they need more detailed information regarding the numbers and demographics of expected DOD population growth in order to prepare more refined implementation plans and secure required financing. DOD Directive 5410.12 requires the services to provide maximum advance information and support to state and local governments to allow planning for necessary adjustments in local facilities and public services, workforce training programs, and local economic development activities. Further, the directive requires each of the military services to develop implementing guidance for providing planning information to installations, communities, and OEA. However, our review found that none of the services have developed implementing guidance as required by the directive, and senior officials from each of the services acknowledged that this guidance has not been prepared. Senior military officials we interviewed either did not know about the directive or did not see it as a priority for implementation. As a result, information that has been provided to communities regarding planned DOD personnel movements has been inconsistent and lacks important demographic details. The Army has established its centralized Army Stationing and Installation Plan database as the official source of Army personnel numbers. However, we recently reported that these numbers were often inconsistent with personnel information received from installation officials—the primary source of personnel data used by community planners. 
To the Army’s credit, most of the installation-level officials we spoke with said that the consistency of the data being provided to communities is improving. Nevertheless, in our survey and during follow-up discussions with the 20 communities, more than half expressed concerns about the consistency and completeness of the personnel information they were provided. For example, one community representative from the Fort Belvoir, Virginia, area indicated that the planning numbers being discussed at the installation level differed from those being discussed at the headquarters level by nearly 5,000 personnel due to the omission of mission-related contractors. According to this official, the Army was notified of the omission but had not included the contractors in subsequent briefings. Another community representative, from the Fort Bragg, North Carolina, community, told us that the planning numbers the community used during a public meeting were disputed by a senior military installation official. According to this official, the difference was so great (nearly 1,500 military personnel, due to the omission of another military service using the base) that the community had to go back and revise its plans, duplicating an already complicated effort and wasting valuable time and money in the process. This situation could have been avoided if the installation had prepared and disseminated complete information to the community in a more timely manner. Other communities also expressed concerns regarding the timeliness of the data. For example, a community leader responsible for leading community development efforts near Fort Knox, Kentucky, indicated that his organization did not have timely access to the detailed population information needed to plan effectively. He noted that understanding the size and the timing of the population movements was essential to his planning efforts and for ensuring that the state budget was sufficient to address the expected growth needs.
He indicated that growth information was obtained through multiple sources, including the installation, discussions with Pentagon officials, and proactive monitoring of Pentagon growth announcements. Without timely access to information, he noted, it was difficult to know if his organization was making the best decisions about the development of supporting infrastructure. He indicated that when changes happen, the Army shares little information, which places considerable stress on the community, because it must then work from rumors and wait until the Army arrives at a final decision before any official information is released. A community leader from the Fort Bliss, Texas, area expressed similar concerns regarding the timeliness of information and suggested that receiving the planning information on a regular (quarterly) schedule would help reassure the community that it has the best and most up-to-date information so that planning efforts remain realistic. While he complimented the Army’s transition office for providing quick updates and information on projected increases, he also noted that he did not have much confidence in the civilian personnel numbers the Army has provided because they do not match the ratio of civilian to military personnel seen across the Army for similar capabilities, and he remained unclear as to why Fort Bliss civilian personnel numbers appear to be understated. Most of the community representatives we interviewed were quick to point out how helpful local installations have been to their planning efforts and acknowledged that military actions continue to change and complete personnel predictions are uncertain.
Nevertheless, several communities expressed concerns about the lack of information regarding dependents, particularly the number of school-aged children expected to accompany arriving military personnel. According to community planners, detailed demographic data, such as the number and ages of dependent children expected to accompany incoming service members, are particularly important when planning to meet future demands for education and housing. For instance, a community official from the Fort Riley, Kansas, area indicated that Fort Riley is receiving a greater number of younger, single soldiers than originally expected, resulting in fewer school-aged children and higher demand for rental housing than the community initially anticipated. Community officials from the Fort Benning, Georgia, area have had long-standing disagreements with Army officials regarding the number of school-aged children expected to arrive. Although the Army and local officials have recently reached an agreement regarding the projected number of children the Fort Benning community should use for planning purposes, this example raises questions about the reliability of dependent data being provided to other communities. The Air Force and the Navy do not centralize their personnel movement data, have thus far not attempted to calculate the number of school-aged children who will accompany their relocating service members, and could not provide detailed information regarding dependents. OEA, as part of its duties under Executive Order 12788, is to serve as a clearinghouse of DOD planning information to the public, but without consistent data and timely updates from all military services, it cannot effectively perform this function. As a result, communities—as well as state and federal agencies—have been left to their own devices to obtain needed information.
Several community officials told us that they have resorted to gathering their own demographic data in order to obtain the detailed dependent information required for their planning. For instance, community officials from San Antonio, Texas, have visited the units that are expected to relocate to Fort Sam Houston and have interviewed personnel within these units to determine key demographic information that might aid their community planning efforts. While these methods allow communities to obtain some of the detailed planning information they require, the communities must often divert resources from planning and implementation to developing information that the services should have already provided them. Information on school-aged children is also important to the Department of Education, which uses this information in providing assistance to federally impacted school districts. During our review, Department of Education officials expressed frustration with the Army’s inconsistent and incomplete information in this area. According to OEA officials, the Army, the Department of Education, and OEA had begun negotiating a memorandum of understanding to establish a framework for addressing, among other things, issues involved in reporting actual or projected numbers of school-aged dependents. The memorandum would require the Army to develop, monitor, and share projections of dependent student data associated with military, civilian, and mission-support contractors and to establish a system for sharing historical and actual military dependent student data by installation. In commenting on a draft of this report, the office of the Under Secretary of Defense (Military Community and Family Policy) noted that this effort had been expanded beyond the Army to encompass all of DOD. At the time of our review, the memorandum had not been finalized.
Without high-level DOD direction requiring the military services to establish and implement guidance, in accordance with the DOD directive, on how and when information related to DOD personnel movements will be distributed to affected communities and what types of data will be included, the information that the services provide to installations, communities, and other federal agencies will likely continue to be inconsistent and incomplete. Furthermore, OEA’s efforts to establish a centralized clearinghouse for this information, which could greatly improve the consistency and availability of personnel planning data, will continue to be hampered. The complexity of DOD’s current growth activities, coupled with ongoing operations in Iraq and Afghanistan, creates a situation in which precise data about the magnitude and makeup of personnel movements are continuing to evolve. Nevertheless, until the military departments begin to disseminate consistent and detailed information about defense personnel moves, including a description of what is included in the data and any uncertainties such as the timing of personnel movements, it will be difficult for community, state, and federal officials to plan for and provide the necessary infrastructure to support members of the armed services, their families, and current residents of surrounding communities. OEA and Other Agencies Are Providing Some Assistance to Communities, but the Office of the Secretary of Defense Has Not Provided the High-Level Leadership Necessary to Help Ensure Interagency and Intergovernmental Coordination While OEA, other DOD agencies, and some state, local, and federal government agencies have provided some assistance to DOD growth communities, the Office of the Secretary of Defense has not provided, through the EAC, the high-level leadership necessary to help ensure interagency and intergovernmental coordination at levels that can make policy and budgetary decisions to better leverage resources.
The EAC was established over 30 years ago for the purpose of sharing information and coordinating assistance to communities adversely affected by DOD activities—including growth, closures, and other actions. Although the Secretary of Defense, or his designee, is directed by presidential executive order to chair the EAC and lead efforts to share information within the federal government and among state and local agencies, OSD has not provided the leadership necessary to make this happen effectively. However, in the absence of a fully functioning EAC at the executive level, OEA has been proactive in working with communities it believes will be substantially and seriously affected by DOD growth activities and in reaching out to other federal agencies at the working level. In addition, other DOD agencies, non-DOD federal agencies, and state and local agencies have also provided various kinds of assistance to growth communities. OEA Has Provided Planning and Technical Assistance to Affected Communities DOD’s efforts to assist communities affected by base closures, realignments, or expansions are consolidated in OEA, which has been proactive in working with communities it believes will be substantially and seriously affected by DOD activities. To assist growth communities, OEA has identified those communities that are expected to be affected by DOD growth activities and that have expressed a need for planning assistance. This planning assistance has helped many of those communities hire planners or consultants to undertake studies identifying gaps in their existing local infrastructure that must be filled in order to accommodate the expected population growth. In our survey of the 20 growth-impacted communities, we found that the representatives were complimentary of OEA’s role in supporting their planning processes through grants and technical support. Many communities referred to OEA as their only source of federal assistance.
As table 3 shows, OEA provided grants to 18 of the 20 communities and to three states—Virginia, Kansas, and Maryland. Both Virginia and Maryland are using their grants for transportation planning, and Maryland is also using its grant to plan for environmental impacts to the Chesapeake Bay. Kansas used its OEA grant to hire a state coordinator to help communicate DOD-related community impacts to state policymakers. Other DOD Agencies Have Provided Some Assistance to Affected Communities In addition to OEA, other DOD agencies have provided some assistance to growth communities. For example, the Defense Access Road program, administered by the Military Surface Deployment and Distribution Command, provides a method for DOD to pay for public highway infrastructure improvements required as a result of sudden or unusual defense-generated traffic impacts if certain criteria are met. When the commander of an installation determines that improvements to a public road are needed, it is the commander’s responsibility to bring the deficiencies to the attention of the appropriate state or local transportation authority. In cases where the owning transportation authority cannot or will not correct the deficiency, the installation commander can request the improvements under the Defense Access Road program. We recently reported that in March 2008, DOD had requested $36.2 million for a new access road in the Fort Belvoir, Virginia, area. If the funds are appropriated by Congress, this project is expected to be completed by the end of fiscal year 2010. Another DOD agency that has provided assistance to some growth communities is the DOD Education Activity. This activity, located within the Office of the Under Secretary of Defense for Military Community and Family Policy, operates over 200 schools worldwide, 57 of which are located in the continental United States.
This activity recently published an update to a report on assistance to local educational agencies for defense dependents education. The report was required by the John Warner National Defense Authorization Act for Fiscal Year 2007, which directed the Secretary of Defense to update the DOD plan to provide assistance to local educational agencies that experience growth or decline in the enrollment of military students as a result of force structure changes, the relocation of military units, or the closure or realignment of military installations. The DOD Education Activity also established a directorate in October 2007 to help provide quality education opportunities for military children and to assist military-connected school systems. This assistance is geared toward issues unique to military children, such as helping them keep up with changing curriculum requirements as they move from base to base. Although some off-base schools that may receive assistance are among those experiencing DOD growth, the program does not specifically focus on growth communities. DOD, through its supplement to the Department of Education’s Impact Aid Program, provides financial assistance to local educational agencies that are impacted by the presence of military or DOD civilian dependent students and DOD children with severe disabilities. In fiscal year 2007, the total appropriation for the DOD supplement to the Department of Education’s Impact Aid Program was $43 million. Some Assistance Has Been Provided to Communities by State, Local, and Federal Agencies Other federal agencies as well as the state and local agencies of jurisdiction have provided some assistance to growth communities. Since there is currently no centralized mechanism for collecting information on all of the types of assistance provided to DOD communities, the information we collected should not be viewed as complete.
Furthermore, although our survey of the 20 growth communities completed in April 2008 did not necessarily identify all of the funding that has been provided to these communities and we did not validate the responses, it did reveal the magnitude and variety of resources that may be available to them. For example, 11 communities reported receiving a total of $131.7 million in state-sponsored funding to support a range of initiatives including building roads, conducting needs assessments, developing business plans, and acquiring easements in support of the installations’ missions. Five communities indicated that they have received a total of $167.2 million in local funding. The majority of local funding came from communities near Fort Carson, Colorado, and Fort Riley, Kansas. Communities near Fort Carson instituted a special purpose tax through a rural transportation authority, which raised $78.8 million in local funding to improve roads. Communities outside of Fort Riley raised $87.3 million through local bonds for the construction of two schools and the expansion of a community hospital. Three communities received a total of $212,500 from private funding sources. For example, the community surrounding Fort Benning, Georgia, received $160,000 in 2003 from the Fort Benning Futures Partnership, a community action group, to study the impact of BRAC. In an attempt to identify some of the federal assistance that may have been provided or that may be available to growth communities, we obtained information from structured questions administered to seven federal agencies and from information provided by DOD. Although we did not find any federal programs in these agencies specifically designed to assist communities impacted by DOD-related growth, officials from those agencies we contacted told us that there are numerous programs for which growth communities can apply and be considered if they meet specific eligibility requirements. 
For example, the Department of Labor reported that it had provided more than $65 million in Workforce Innovation in Regional Development grants to expand employment and advancement opportunities for workers, and it has given almost $30 million in National Emergency Grants to communities affected by BRAC, including growth communities. In addition, our analysis shows that for fiscal year 2008, the Department of Education estimates that over $428 million in Federal Impact Aid grants will be provided for the operational support of local schools based on the number of federally connected children who are in attendance in specific local school districts in states with growth installations. This assistance is not provided to DOD growth communities only, but to any community where federally connected children are attending school. Appendix II provides a list of the assistance programs identified by the eight federal agencies we contacted (including DOD), for which DOD growth communities may be eligible. In April 2006, OEA, in its capacity to provide administrative support to the EAC, published a compendium of federal assistance programs for communities, businesses, and workers affected by BRAC closures or realignments and other DOD actions. The compendium—which provided federal points of contact, internet addresses, and telephone numbers—was a helpful first step. However, the compendium did not provide important details on available assistance programs, such as eligibility requirements, application procedures, and deadlines—information that could have been easily gathered through a fully functioning EAC. 
The EAC Is Intended to Assist Communities Adversely Affected by DOD Actions, but the Office of the Secretary of Defense Has Not Provided the High-Level Leadership Necessary to Ensure Interagency and Intergovernmental Coordination The EAC was established over 30 years ago for the purpose of sharing information and coordinating assistance to communities adversely affected by DOD activities—including growth, closures, and other actions. Although the Secretary of Defense, as chair of the EAC, is directed by executive order to provide a forum for sharing information within the federal government and among state and local agencies, the Office of the Secretary of Defense has not provided the high-level leadership necessary to make this happen effectively. To ensure that communities substantially and seriously affected by DOD actions receive assistance, the 22-agency EAC was created by presidential executive order. Executive order 12788 designated the Secretary of Defense, or his designee, to chair the committee and designated the Secretaries of Labor and Commerce, or their designees, to serve as committee co-vice-chairs. The order also directs the EAC to identify problems of states and communities that result from defense-related activities and that require federal assistance. The order directs all executive agencies to afford priority consideration to requests from defense-affected communities for federal technical, financial, or other assistance that are part of a comprehensive plan used by the EAC. In addition, the committee was tasked with making communities that are substantially and seriously affected by DOD actions—including both closings and growth activities—aware of available federal economic adjustment programs. The executive order further requires the EAC to serve as a clearinghouse to exchange information among its member agencies for the benefit of all communities affected by DOD activities. 
Such interagency and intergovernmental coordination is important to more effectively leverage resources, and our prior work has concluded that successful collaboration requires commitment by senior officials in respective federal agencies to articulate their agreements in a formal document such as a memorandum of understanding, interagency guidance, or interagency planning documents. Although staff-level working group meetings have been held, the executive-level committee has not met since November 2006 and committee leadership currently has no plans to convene periodic meetings. Furthermore, the EAC has not developed a plan to ensure information sharing and other forms of cooperation among its member agencies for the benefit of all communities affected by DOD activities. While the Secretary of Defense is required to lead interagency and intergovernmental efforts to assist communities most affected by its activities, OSD delegated this function to the Deputy Under Secretary (Installations and Environment), who has not held regular meetings of the executive-level EAC. Representatives of key EAC federal agencies with whom we spoke said that they have not been fully engaged in the committee process and that DOD has not kept them entirely informed of department activities that might better help them provide assistance to affected DOD communities. Furthermore, one executive-level EAC representative we spoke with was unaware that the executive order requires her agency to afford priority consideration to requests from defense-affected communities for federal assistance as part of a comprehensive plan used by the EAC. In the absence of a fully functioning EAC, OEA has proactively organized ad hoc outreach visits with senior federal officials for education issues. 
Officials representing the Department of Education, the Army, the Office of the Deputy Under Secretary of Defense for Military Community and Family Policy, and OEA met with leaders representing states, installations, communities, and local education activities at Forts Drum, Riley, Bliss, and Benning between September 2007 and January 2008. The purpose of these visits was to provide stakeholders with information on student population growth issues, improve communication among all partners, identify gaps or lags in capacities, and more extensively document specific requests for federal action to assist communities and states responding to student growth. In addition, OEA has sponsored conferences attended by state, local, and federal agencies and affected community representatives, providing an opportunity for communities to discuss issues with officials from OEA and participating federal entities that are members of the EAC. The most recent conference, a 3-day Growth Summit, was held in December 2007. During our conversations with representatives of the 20 growth communities, several volunteered that the summit was helpful because they could exchange lessons learned with other communities facing similar challenges. At the summit, OEA announced plans to work with communities to prepare a list of projects that could not be undertaken to address DOD-related growth activities due to a lack of funding. Once these projects are identified and validated by OEA project managers, OEA plans to present this information to the Office of Management and Budget and cognizant federal agencies sometime during the summer of 2008 for possible budget consideration. However, OEA cannot guide interagency operations at a high enough level to promote effective interagency cooperation. 
Only high-level leadership from the Secretary of Defense can marshal the resources of the executive federal agency EAC members and only these high-level federal officials can affect possible policy and budget decisions that may be required to better assist the communities. Without high-level DOD leadership, the EAC will continue to function at the working group level and communities affected by all types of DOD actions (growth and closure) will lack an important source of information and support. Conversely, a functional EAC could better leverage resources by providing a conduit through which member agencies could share any ongoing and planned efforts that could assist DOD-affected communities, better match available resources to community needs, identify and avoid redundancies and serve as a clearinghouse for providing comprehensive, targeted, and timely information about funding programs to all DOD-affected communities. Conclusions Although the long-term outlook for communities surrounding growing DOD facilities is generally encouraging, the very real challenges many communities face to accommodate the expected influx of personnel will require carefully targeted investments and judicious use of local, state, and federal resources. Communities that are unable to provide needed infrastructure improvements by the time DOD executes its planned personnel movements could face overcrowded schools, clogged roadways, and overburdened public services. Conversely, some communities could make substantial investments or incur large debts only to find that new residents will be longer in coming or fewer in number than expected. Hence, accurate, detailed, and timely planning information is vital to both maximize the efficient use of resources and to ensure the highest quality of life possible for relocating DOD personnel and their families. 
Unless DOD shares its best available information regarding personnel movements— including demographics as well as information on the limitations of the data and when to expect updates—in the timeliest practical manner, some communities surrounding growing installations may bear unnecessary burdens as they strive to accommodate growth that they have little or no ability to control. Furthermore, without a centralized and user-friendly source for obtaining such information, many communities, especially small towns and rural areas that lack the experience or planning personnel to effectively research and compete for grant opportunities, may be disadvantaged. By executive branch policy, federal agencies have a shared responsibility with local and state governments in growth areas for providing affected communities with assistance, but have done so in a generally uncoordinated fashion. In addition, as the instigating force behind the growth initiatives—the 2005 BRAC, overseas rebasing, force modularity, and Grow the Force—and the body accountable for implementing BRAC recommendations, DOD is charged by presidential executive order and DOD directive to lead federal efforts to alleviate the impact of its actions. Without providing the leadership necessary to fully implement the presidential executive order to provide consistent and complete information and be fully engaged in the high-level cooperation of other federal agencies, DOD risks allowing the needs of affected communities to go unfulfilled in an inefficient, hit-or-miss search for assistance. Until DOD begins to fully leverage the interagency resources of the EAC and achieve unity of effort aimed at maximizing assistance to affected communities, state and local governments may not be able to provide expanded infrastructure and services for DOD personnel while maintaining existing amenities. As a result, quality of life for both military and civilian residents, along with military readiness, could be degraded. 
Recommendations for Executive Action In order to assist communities in planning to provide the infrastructure necessary to support defense-related growth and to ensure quality of life for members of the armed forces, their families, and other members of surrounding communities, we recommend that the Secretary of Defense direct the Secretaries of the military services and the Commandant of the Marine Corps to develop and implement guidance, no later than the end of fiscal year 2008, that is consistent with DOD Directive 5410.12 for the timely, complete, and consistent dissemination of DOD planning information such as estimated timelines and numbers of personnel relocating, as well as demographic data such as numbers of school-aged children, and to update this information quarterly. In order to better coordinate and leverage federal resources to assist communities affected by DOD activities, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to implement Executive Order 12788 by holding regular meetings of the full executive-level EAC and by serving as a clearinghouse of information for identifying expected community impacts and problems as well as identifying existing resources for providing economic assistance to communities affected by DOD activities. This clearinghouse would provide a centralized source for information from all military services regarding personnel planning information, as well as information regarding any resources available at the federal, state, local, and private-sector levels that can help address potential infrastructure gaps at the affected communities. In addition, this information should be updated at least quarterly and made easily available to all interested stakeholders at the local, state, and federal levels. Agency Comments and Our Evaluation In written comments on a draft of this report, DOD concurred with our recommendations. 
However, while DOD indicated concurrence, it is unclear from its comments and stated actions what steps, if any, DOD plans to take to meet the intent of our recommendations. DOD’s comments are reprinted in their entirety in appendix IV. DOD, as well as several other federal agencies cited in this report, also provided technical comments on a draft of this report, which we incorporated as appropriate. DOD concurred with our recommendation to direct the military services to develop and implement guidance that is consistent with DOD Directive 5410.12, which provides overall policy for minimizing economic impacts on communities resulting from defense activities. Although DOD indicated it would continue to work with the cognizant DOD components to ensure compliance with the directive, actions taken to date have not resulted in the military services’ development and implementation of guidance, which we believe is necessary for providing more complete and consistent personnel relocation planning data for impacted communities. Moreover, DOD was not explicit in its comments as to what steps it intends to take to ensure that the military services have implemented such guidance by the end of fiscal year 2008. With respect to our recommended action to provide information updates on a quarterly basis, DOD indicated that not all situations are conducive to quarterly updates. The primary basis for recommending quarterly updates was that the Army, which has the majority of growth activities affecting local communities, updates its centralized personnel movement database on a quarterly basis and could therefore provide quarterly updates. The other services do not have centralized databases and currently provide the information on an as-needed basis. 
While we agree that some flexibility in the update process may be warranted so as to not create burdensome situations, we continue to believe that it is critical that updated data important for community planning be disseminated to community entities on a regular basis in a manner that is timely, complete, and consistent, to provide assurance to the communities that they have the best and most accurate DOD information possible for planning purposes. DOD also concurred with our recommendation directing the Under Secretary of Defense for Acquisition, Technology, and Logistics to implement Executive Order 12788 to better coordinate and leverage federal resources by holding regular meetings and by developing a centralized clearinghouse of information to provide, among other things, a centralized source for personnel relocation data and available resources to address potential community infrastructure gaps. As noted in its comments, DOD stated that it will develop an information clearinghouse which will identify federal programs and resources to affected communities, present successful state and local responses, and provide EAC members with a basis to resource their assistance programs. Although we believe this to be a step in the right direction, we continue to believe that the EAC, as the senior-level federal committee established by presidential executive order to assist interagency and intergovernmental coordination in support of defense-impacted communities, needs to meet on a regular basis to exercise its responsibilities and assure the successful implementation of Executive Order 12788. However, based on DOD’s comments, it is unclear whether DOD, as chair of the EAC, intends to call and periodically hold meetings of the full executive-level committee to provide the high-level federal leadership that we believe is necessary to more effectively coordinate federal agency assistance to impacted communities. 
As our review has shown, the full committee has not met since November 2006. While DOD has left the workings of the EAC to the Office of Economic Adjustment, we do not believe that this office can effectively guide interagency operations at a high enough level to promote interagency cooperation and provide priority considerations to defense- affected communities and therefore we reiterate our recommendation to hold regular meetings of the executive-level EAC. We are sending copies of this report to other interested congressional committees; the Secretaries of Defense, Army, Air Force, and Navy and the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff has any questions concerning this report, please contact me at (202) 512-4523 or at leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology To examine the extent to which communities affected by defense actions arising from the implementation of the base realignment and closure (BRAC) 2005 round recommendations, the Global Defense Posture Realignment, Army force modularity, and Grow the Force initiatives have identified necessary infrastructure requirements to meet anticipated growth projections, we collected and analyzed available Department of Defense (DOD) data regarding the expected personnel growth at selected communities within the United States. We selected all 20 communities that DOD’s Office of Economic Adjustment (OEA) had determined to be growth locations expected to be substantially and seriously impacted based on OEA criteria as of January 2008. (See table 1 for a full listing of these locations.) 
We interviewed OEA project managers designated to work with each of these communities to obtain background and insight into the challenges these communities were facing and their progress in identifying needed infrastructure within their communities as a result of the military growth. In order to present information regarding expected growth at each military installation, we analyzed Army and Air Force headquarters-level data, and Navy and Marine Corps installation-level population data. We obtained and analyzed the estimated installation population between fiscal years 2006 and 2012 for military, civilian, and mission contractor personnel as well as their families for the 20 growth communities that OEA identified to be substantially and seriously impacted. Installation and dependent population data for the Army were obtained from the centralized Army Stationing and Installation Plan database. To obtain consistent data from the Navy, Marine Corps, and the Air Force—none of which maintain a centralized database for this information—we developed and administered a data collection instrument using the Army database categories. The Navy and Marine Corps provided data directly from the installation level, while the Air Force provided data through its headquarters Office of Manpower and Personnel. We made numerous contacts with cognizant Army, Navy, Marine Corps, and Air Force officials both at the headquarters and installation level in order to gather and explain these data. We conducted a survey with OEA’s designated point of contact at each of the 20 communities and periodically followed up to ascertain, among other things, their progress in identifying growth issues and the status of plans to identify needed support infrastructure. We received completed questionnaires from all 20 locations and conducted follow-up interviews with all 20 to ensure that our information was current. 
We further interviewed senior officials from each of the military services regarding their practices in providing installation growth projections to growth-impacted communities and OEA in accordance with DOD policy. We also visited 1 location representing each of the top three growth challenges as determined by our survey. These locations and their corresponding growth challenges were Eglin Air Force Base, Florida (transportation); Fort Benning, Georgia (schools); and Fort Sill, Oklahoma (housing). At each location we interviewed cognizant installation and local community officials regarding the communities’ planning issues and analyzed impact and planning data. In addition, we used information collected from site visits during our 2007 review of Army growth installations for a total of 10 location visits. We also attended numerous workshops involving military growth communities—an Association of Defense Communities Conference in August 2007 in Miami, Florida; a December 2007 OEA-sponsored growth summit in St. Louis, Missouri; a Fort Belvoir town hall meeting in Fairfax County, Virginia, in April 2007; a meeting of the Committee for a Sustainable Emerald Coast in Fort Walton Beach, Florida, in August 2007; and the second annual meeting of the Fort Bragg and Pope Air Force Base BRAC Regional Task Force in Fayetteville, North Carolina, in October 2007. Attending these meetings provided us with more detailed perspectives on community issues and the efforts of selected federal agencies to provide needed assistance. The OEA-sponsored growth summit was particularly helpful in that all 20 communities attended and presented information briefs on their top issues, which we gathered and summarized for this report. We also interviewed officials from the National Governors Association and the Association of Defense Communities who were familiar with infrastructure and financing issues facing military growth communities. 
To assess DOD’s efforts and the efforts of other government agencies to provide resources and other assistance to affected communities, we reviewed applicable DOD directives and executive orders to determine what role DOD and other agencies have in this process. To ascertain the extent to which communities were receiving state and local funds, we asked the communities to estimate the amount received as part of our survey of the 20 communities. To determine the extent and type of federal assistance being provided, we first conducted interviews with senior OEA officials because OEA serves as a key DOD activity in assisting communities in addressing growth challenges. To determine the extent of non-DOD federal assistance which might be available to growth-impacted communities, we administered a structured data collection instrument (structured questions which we e-mailed) to seven federal agencies—the Department of Transportation, the Department of Education, the Department of Labor, the Department of Commerce, the Small Business Administration, the Department of Agriculture, and the Department of Housing and Urban Development—identified by OEA as key federal agencies that, based on the community issues, may be the most helpful. We asked questions regarding what assistance they had provided the DOD- impacted communities and what programs they could suggest that might provide assistance to these communities. The results of these interviews were summarized and included in the report. We conducted follow-up interviews with senior officials at the Department of Transportation Federal Highway Program and Federal Transit Administration; the Department of Education Elementary and Secondary Education and Impact Aid Program; and the Department of Labor Employment and Training Administration to better understand their knowledge about DOD activities and what plans they had, if any, to assist the impacted communities. 
We further interviewed senior DOD officials responsible for military community and family, military housing, education, and transportation policies and practices to determine the types and extent of assistance that DOD was providing to impacted communities in those specific areas of interest. During the course of our review, we contacted the following offices with responsibility for planning, managing, studying, or overseeing growth at defense-impacted communities:

Office of the Secretary of Defense, Deputy Under Secretary of Defense for Installations & Environment, Office of Economic Adjustment, Arlington, Virginia
Deputy Under Secretary of Defense for Military Community and Family Policy, Arlington, Virginia
Department of Defense Education Activity, Arlington, Virginia
Military Surface Deployment and Distribution Command, Defense Access Road Program, Newport News, Virginia
Army Office of the Assistant Secretary for Installations & Environment
Army Office of the Deputy Assistant Secretary of the Army for Installation Management, Arlington, Virginia
Army Office of the Deputy Assistant Secretary for Installations & Environment, Housing Division, Arlington, Virginia
Army Installation Management Command, Arlington, Virginia
Navy Office of the Deputy Assistant Secretary for Installations & Facilities, Arlington, Virginia
Navy Base Realignment and Closure Program Management Office
Air Force Deputy Assistant Secretary for Installations, Arlington, Virginia
Air Force Office of Manpower and Personnel, Arlington, Virginia
Headquarters, U.S. Marine Corps, Arlington, Virginia
Department of Transportation, Federal Highway Program and Federal Transit Administration, Washington, D.C.
Department of Education, Office of the Assistant Secretary for Elementary and Secondary Education and the Office of the Impact Aid Program, Washington, D.C.
Department of Labor, Employment and Training Administration, Washington, D.C.
Department of Agriculture, Office of Rural Development, Washington, D.C. 
Department of Commerce, Economic Development Administration, Washington, D.C.
Small Business Administration, Office of Financial Assistance and Office of Business Development, Washington, D.C.
U.S. Department of Housing and Urban Development, Office of Community Planning and Development, Washington, D.C.
Association of Defense Communities, Washington, D.C.
National Governors Association, Washington, D.C.
Georgia Military Affairs Coordinating Committee, Atlanta, Georgia
North Carolina Eastern Region, Kinston, North Carolina

Conferences, town hall meetings, and workshops attended:

Association of Defense Communities 2007 summer conference, Miami, Florida
Fort Belvoir town hall meeting, Mount Vernon, Virginia
Fort Bragg BRAC Regional Task Force annual meeting, Fayetteville, North Carolina
Committee for a Sustainable 2030 Emerald Coast, Fort Walton Beach, Florida
DOD Office of Economic Adjustment 2007 Growth Summit, St. Louis, Missouri

We conducted our work from February 2007 through May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. Obtaining installation and family population data from the Navy, Marine Corps, and Air Force required numerous follow-ups by telephone and e-mail, and even then the data were not complete for our needs. Unlike the Army, these military services do not have a centralized database for this information and had to draw from various databases and from the installations themselves in order to fulfill our request. For its part, the Army maintains a centralized database which is updated on a quarterly basis. However, these data have their own shortcomings, as described in this report. 
We found that these estimates are by nature imprecise and rounded them to the nearest hundred to provide a sense of the growth in personnel and families that communities have to use for planning purposes. Overall, we believe that the evidence obtained for this report provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Types of Federal Assistance Available to All Domestic Communities, Including DOD-Affected Growth Communities, as Identified by 8 of the 22 EAC-Member Agencies

DOD Supplemental Impact Aid provides financial assistance to Local Educational Agencies (LEAs) that are heavily impacted by the presence of military or DOD civilian dependent students. Eligible LEAs must have at least 20 percent military or civilian dependent students in average daily attendance in their schools, as counted on their Federal Impact Aid application for the preceding year. Impact Aid for Children with Severe Disabilities is available to any LEA that has at least two military dependent children with severe disabilities that meet certain special education cost criteria. DOD works with LEAs and the Department of Education to clarify or resolve any funding or disbursement eligibility issues.

The Impact Aid Program provides technical assistance and disburses payments to local educational agencies that are financially burdened by federal activities, based on a statutory formula using students reported annually in section 8003 applications to the Department.

Economic Adjustment Assistance Program is the primary vehicle for BRAC-related assistance to communities. The program provides technical, planning, and infrastructure assistance.

Public Works and Economic Development Program is available to communities impacted by "sudden and severe" changes in economic conditions. This program provides for construction and rehabilitation of essential public infrastructure facilities.

Community Development Block Grant Program provides annual grants on a formula basis to entitled communities to carry out a wide range of community development activities directed towards neighborhood revitalization, economic development, and improved community facilities and services.

The HOME Program provides grants to states and local governments to implement local housing strategies designed to increase home ownership and affordable housing for low- and very low-income Americans.

National Emergency Grant Program grants are discretionary awards that temporarily expand service capacity at the state and local levels through time-limited funding assistance in response to significant dislocation events.

High Growth Job Training identifies industries in need of talent development, connects businesses to the workforce system, and creates programs designed to meet their specific workforce needs.

Community-Based Job Training Grants address the need for a partnership between the workforce system and the vocational education system and increase the capacity of community colleges to meet employer demands by providing grants to colleges.

Workforce Innovation in Regional Economic Development (WIRED) Initiative stresses the critical role talent development plays in creating effective regional economic development strategies. The initiative goes beyond traditional strategies for worker preparation by bringing together state, local, and federal entities; academic institutions (including K-12, community colleges, and universities); investment groups; foundations; and business and industry to address the challenges associated with building a globally competitive and prepared workforce.

Patriot Express Program provides lending partners with a government-guaranteed loan tailored to active duty and reserve personnel and their immediate family members.

Business Development Program, section 8(a), is a program designed by Congress to provide socially and economically disadvantaged businesses with the requisite management and technical assistance to enhance their ability to compete in the American marketplace. The program utilizes set-aside and limited competition federal contracts, assistance through SBA's Mentor-Protégé Program, and management and technical assistance through 7(j) designated providers to provide business development assistance to 8(a) firms.

Management and Technical Assistance Program, section 7(j), is one of the forms of business development assistance provided to more than 8,800 firms that participate in the 8(a) Business Development Program, as well as other 7(j) eligible concerns. SBA has been able to combine the assistance provided through the 7(j) program with other forms of management and technical assistance. Additional agency-sponsored workshops, seminars, and conferences have supplemented the 7(j) assistance. The training is conducted nationwide and focuses on marketing, doing business with the federal government, how to write winning proposals, crafting the cost proposal, maximizing cash flow management, and cost and pricing training.

The Highway Trust Fund, Title 23, U.S.C., authorizes funding of broad categories of transportation from the Highway Trust Fund, which is the main source of federal transportation funding to the states. Priorities are set at the state/local level.

Cooperative Extension, through Land Grant Universities, provides resource descriptions to communities and annually seeks input on needed services.

Community Facilities Direct Loans and Grants Program provides guaranteed loans to develop essential community facilities in rural areas and towns of up to 20,000 in population.

Single-Family Housing Program guarantees housing loans to help low- and moderate-income individuals or households purchase homes in rural areas.

Multi-Family Housing Program provides loans to develop and/or rehabilitate rural rental housing under two direct loan programs, one for farm labor tenancy and one loan-guarantee program.

Rural Rental Assistance Program provides support for very-low and low-income households to assist in paying rent in Rural Development-financed properties.

Rural Development Electric Program provides direct loans and loan guarantees to help finance the construction of electric distribution, transmission, and generation facilities.

Rural Development Telecommunications Loan Program offers loans for infrastructure improvement and expansion.

Rural Business Enterprise Grant Program provides grants for rural projects that finance and facilitate development of small and emerging rural businesses, help distance learning networks, and help fund employment-related adult education programs.

Rural Business Opportunity Grant Program promotes sustainable economic development in rural communities with exceptional needs.

Intermediary Relending Program helps alleviate poverty and increase economic activity and employment in rural communities.

Rural Economic Development Loan and Grant Program provides funding to rural projects through local utility organizations.

Section 9006 Guaranteed Loan Program encourages commercial financing of renewable energy and energy efficiency projects. Section 9006 Grant Program provides grants for agricultural producers and rural small businesses to purchase renewable energy systems.

Rural Development Water and Wastewater Program provides direct loans, grants, and loan guarantees to help finance the construction of drinking water, sanitary sewer, solid waste, and storm drainage facilities in rural areas and cities and towns of 10,000 or less.

Business & Industry Guaranteed Loan Program provides financial backing for rural businesses. Commercial loan guarantees are available up to 80 percent of the loan amount.
Appendix III: Executive Order 12788, as Amended through May 2005

Appendix IV: Comments from the Department of Defense

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the individual named above, Jim Reifsnyder, Assistant Director; Karen Kemper, Analyst-in-Charge; Bob Poetta; Kurt Burgeson; Susan Ditto; Ron La Due Lake; Julia Matta; Anna Russell; David Adams; and Nancy Lively made key contributions to this review.

Related GAO Products

Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.
State and Local Governments: Growing Fiscal Challenges Will Emerge during the Next 10 Years. GAO-08-317. Washington, D.C.: January 22, 2008.
Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.
Force Structure: Need for Greater Transparency for the Army's Grow the Force Initiative Funding Plan. GAO-08-354R. Washington, D.C.: January 18, 2008.
Force Structure: Better Management Controls Are Needed to Oversee the Army's Modular Force and Expansion Initiatives and Improve Accountability for Results. GAO-08-145. Washington, D.C.: December 14, 2007.
Defense Infrastructure: Overseas Master Plans Are Improving, but DOD Needs to Provide Congress Additional Information about the Military Buildup on Guam. GAO-07-1015. Washington, D.C.: September 12, 2007.
Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.
Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.
Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.
Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.
Force Structure: Army Needs to Provide DOD and Congress More Visibility Regarding Modular Force Capabilities and Implementation Plans. GAO-06-745. Washington, D.C.: September 6, 2006.
Defense Infrastructure: DOD's Overseas Infrastructure Master Plans Continue to Evolve. GAO-06-913R. Washington, D.C.: August 22, 2006.
Force Structure: Capabilities and Cost of Army Modular Force Remain Uncertain. GAO-06-548T. Washington, D.C.: April 4, 2006.
Due to several simultaneous Department of Defense (DOD) force structure and basing initiatives, 20 installations are expecting a combined net growth of over 173,000 military and civilian personnel, not including family members and all contractors, over fiscal years 2006-2012. Although communities surrounding these installations can expect to realize economic benefits in the long term, DOD has identified these 20 installations as substantially and seriously impacted in terms of the surrounding communities' ability to provide infrastructure to accommodate the growth. In response to the House report to the fiscal year 2007 defense appropriations bill, GAO (1) examined the extent to which communities affected by DOD's actions have identified their infrastructure needs, and (2) assessed DOD's efforts and those of other agencies to assist affected communities. GAO reviewed applicable directives and executive orders, surveyed the 20 growth communities, and met with community and agency officials to discuss growth issues. Communities surrounding DOD growth installations have begun to identify, in general terms, the infrastructure needs to help support expected personnel growth, but planning efforts have been hampered by a lack of consistent and detailed information about anticipated DOD personnel movements. When asked to identify their top infrastructure challenges, 16 of the 20 communities identified by DOD as substantially and seriously impacted cited transportation, 11 named school capacity, and 6 said affordable housing. However, communities lack the detailed planning information, such as growth population demographics, necessary to effectively plan and obtain financing for infrastructure projects. A DOD directive requires the military services to develop guidance for providing planning information to installations, communities, and DOD's Office of Economic Adjustment (OEA), but GAO found that none had done so.
While the consistency of the personnel relocation data DOD provides has improved, over half of the communities we surveyed expressed concerns about the completeness of the personnel data they receive and the lack of detailed demographic data, such as the number and ages of dependent children expected to accompany incoming service members and attend school. Until the military departments begin to disseminate consistent and more detailed information about planned defense personnel moves, it will be difficult for community, state, and federal officials to effectively plan for and provide necessary infrastructure to accommodate DOD personnel and their families relocating to growth-impacted communities. OEA, other DOD agencies, and some state, local, and federal agencies have provided grants and technical assistance to DOD growth communities, but the Office of the Secretary of Defense has not provided the high-level leadership critical to achieving effective interagency and intergovernmental coordination. To ensure that DOD-impacted communities receive assistance, the 22-agency Economic Adjustment Committee (EAC) was created by executive order over 30 years ago and amended as recently as 2005. The Secretary of Defense, or his designee, chairs the committee that is required to lead efforts to assist communities most affected by its activities and serve as a clearinghouse for sharing information about expected DOD impacts on the communities surrounding military growth installations, as well as information regarding possible government resources that could mitigate some of those impacts. As chair of the EAC, DOD could regularly convene full committee meetings and exercise the high-level leadership needed to help ensure that federal agencies are affording certain priority considerations to defense-affected communities. However, the full committee has not met since November 2006. 
Instead, DOD has left the workings of the EAC to OEA, which has been proactive in assisting impacted communities but cannot guide interagency operations at a high enough level to promote effective interagency cooperation. Consequently, in the absence of high-level DOD leadership, the committee has not developed a clearinghouse for information sharing that could more effectively match government resources with the needs of DOD-impacted communities.
Background

ONDCP was established by the Anti-Drug Abuse Act of 1988 to, among other things, enhance national drug control planning and coordination and represent the drug policies of the executive branch before Congress. In this role, the office is responsible for (1) developing a national drug control policy, (2) developing and applying specific goals and performance measurements to evaluate the effectiveness of national drug control policy and National Drug Control Program agencies' programs, (3) overseeing and coordinating the implementation of the national drug control policy, and (4) assessing and certifying the adequacy of the budget for National Drug Control Programs. The 2010 National Drug Control Strategy is the inaugural strategy guiding drug policy under President Obama's administration. According to ONDCP officials, it sought a comprehensive approach to drug policy, including an emphasis on drug abuse prevention and treatment efforts and the use of evidence-based practices—approaches to prevention or treatments that are based in theory and have undergone scientific evaluation. Drug abuse prevention includes activities focused on discouraging the first-time use of controlled substances and efforts to encourage those who have begun to use illicit drugs to cease their use. Treatment includes activities focused on assisting regular users of controlled substances to become drug free through such means as counseling services, inpatient and outpatient care, and the demonstration and provision of effective treatment methods. ONDCP established two overarching policy goals in the 2010 Strategy for (1) curtailing illicit drug consumption and (2) improving public health by reducing the consequences of drug abuse, and seven subgoals under them that delineate specific quantitative outcomes to be achieved by 2015, such as reducing drug-induced deaths by 15 percent.
To support the achievement of these two policy goals and seven subgoals (collectively referred to as goals), the Strategy included seven strategic objectives and multiple action items under each objective, with lead and participating agencies designated for each action item. Strategy objectives include, for example, Strengthen Efforts to Prevent Drug Use in Communities and Disrupt Domestic Drug Trafficking and Production. Subsequent annual Strategies provided updates on the implementation of action items, included new action items intended to help address emerging drug-related problems, and highlighted initiatives and efforts that support the Strategy's objectives. ONDCP is required annually to develop the National Drug Control Strategy, which sets forth a plan to reduce illicit drug use through prevention, treatment, and law enforcement programs, and to develop a Drug Control Budget for implementing the strategy. National Drug Control Program agencies follow a detailed process in developing their annual budget submissions for inclusion in the Drug Control Budget, which provides information on the funding that the executive branch requested for drug control to implement the strategy. Agencies submit to ONDCP the portion of their annual budget requests dedicated to drug control, which they prepare as part of their overall budget submission to the Office of Management and Budget for inclusion in the President's annual budget request. ONDCP reviews the budget requests of the drug control agencies to determine if the agencies have acceptable methodologies for estimating their drug control budgets, and includes those that do in the Drug Control Budget. In FY 2016, the Drug Control Budget included 38 federal agencies or programs.
There are five priorities for which resources are requested across agencies: substance abuse prevention and substance abuse treatment (both of which are considered demand-reduction areas), and drug interdiction, domestic law enforcement, and international partnerships (all three of which are considered supply-reduction areas), as shown in figure 1. ONDCP manages and oversees two primary program accounts: the High Intensity Drug Trafficking Areas (HIDTA) Program and the Other Federal Drug Control Programs. ONDCP previously managed the National Youth Anti-Drug Media Campaign, which last received appropriations in fiscal year 2011.

ONDCP and Other Federal Agencies Have Not Fully Achieved 2010 Strategy Goals; ONDCP Has Established a Mechanism to Monitor Progress

Although Limited Progress Has Been Made for Some Goals, None of the National Drug Control Strategy Goals Have Been Fully Achieved

In the 2010 National Drug Control Strategy, ONDCP established seven goals related to reducing illicit drug use and its consequences to be achieved by 2015. As of May 2016, our analysis indicates that ONDCP and federal agencies have made moderate progress toward achieving one goal, limited progress on three goals, and no demonstrated progress on the remaining three goals. ONDCP officials stated that they intend to report on updated progress toward meeting the strategic goals in summer 2016. As of May 2016, overall, none of the goals in the Strategy have been fully achieved. Table 1 shows the 2010 Strategy goals and progress toward meeting them. ONDCP and federal drug control agencies have made mixed progress but have not fully achieved any of the four Strategy goals associated with curtailing illicit drug consumption. For example, progress has been made on the goal to decrease the 30-day prevalence of drug use among 12- to 17-year-olds by 15 percent.
The data source for this measure—SAMHSA's National Survey on Drug Use and Health (NSDUH)—indicates that in 2014, 9.4 percent of 12- to 17-year-olds reported having used illicit drugs in the past month. This represents a 7 percent decrease from the 2009 baseline for this measure. However, progress has not been made on the goal to decrease the 30-day prevalence of drug use among young adults aged 18 to 25 by 10 percent. Specifically, the rate of drug use for young adults increased from 21.4 percent in 2009 to 22 percent in 2014, moving in the opposite direction of the goal. This increase was primarily driven by marijuana use. According to the 2014 NSDUH, 19.6 percent of young adults reported having used marijuana in the past month and 6.4 percent reported having used illicit drugs other than marijuana. The rates of reported marijuana use for this measure increased by 8 percent from 2009 to 2014 while the rates of reported use of illicit drugs other than marijuana decreased by 24 percent. Progress has also been mixed on the remaining three Strategy goals associated with reducing the consequences of drug use. For example, the goal to reduce drug-related morbidity by 15 percent has two measures, and progress has been made on one but not the other. Specifically, HIV infections attributable to drug use decreased by 34 percent from 2009 to 2014, exceeding the established target. However, the number of emergency room visits for substance use disorders increased by 19 percent from 2009 to 2011. The data source for this measure—SAMHSA's Drug Abuse Warning Network—indicates that pharmaceuticals alone were involved in 34 percent of these visits and illicit drugs alone were involved in 27 percent of them. According to the 2013 Drug Abuse Warning Network report, the increase in emergency room visits for drug misuse and abuse from 2009 to 2011 was largely driven by a 38 percent increase in visits involving illicit drugs only.
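The goal arithmetic used here is relative (percent) change against the 2009 baseline, not a change in percentage points. A minimal sketch, using only the young-adult figures cited in this statement (the helper name is illustrative, not from the source):

```python
# Relative (percent) change versus a baseline, as the Strategy goals are framed.
def percent_change(baseline, current):
    """Return the percent change from baseline to current."""
    return (current - baseline) / baseline * 100

# Young adults (18-25), 30-day prevalence: 21.4% (2009) -> 22.0% (2014).
change = percent_change(21.4, 22.0)
print(round(change, 1))  # 2.8, i.e. about a 3 percent increase, opposite the goal

# The goal was a 10 percent *reduction*, implying a target prevalence of about:
target = 21.4 * (1 - 0.10)
print(round(target, 1))  # 19.3 percent
```

A 0.6 percentage-point rise thus corresponds to roughly a 3 percent relative increase, which is why the small movement still counts against the 10 percent reduction goal.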
In addition, progress has not been made on the goal to reduce drug-induced deaths by 15 percent. According to the CDC's National Vital Statistics System, 49,714 deaths were from drug-induced causes in 2014, an increase of 27 percent compared to 2009. This represents a significant departure from the 2015 goal. The CDC's January 2016 Morbidity and Mortality Weekly Report stated that 47,055 of these deaths were from drug overdoses, the majority of which (61 percent) involved opioids.

ONDCP Established a System to Monitor Progress toward Strategy Goals

In March 2013, we reported that ONDCP established the Performance Reporting System (PRS) to monitor and assess progress toward meeting Strategy goals and objectives and issued a report describing the system with its 2012 Strategy. The PRS includes interagency performance measures and targets under each of the Strategy's seven objectives, which collectively support the overarching goals discussed above. For example, one of the six performance measures under the Strategy's first objective—Strengthen Efforts to Prevent Drug Use in Our Communities—is the average age of initiation for all illicit drug use, which has a 2009 baseline of 17.6 years of age and a 2015 target of 19.5 years of age. These PRS measures were established to help assess progress towards each objective. According to ONDCP, they are a tool to help indicate where the Strategy is on track, and when and where further attention, assessment, evaluation, and problem-solving are needed. As part of our review for our March 2013 report, we assessed the PRS measures for the Strategy's seven objectives and found them to be generally consistent with attributes of effective performance management identified in our prior work as important for ensuring performance measures demonstrate results and are useful for decision making.
For example, we found that the PRS measures for the objectives were clearly stated, with descriptions included in the 2012 PRS report, and all 26 of them had or were to have measurable numerical targets. In addition, the measures were developed with input from stakeholders through an interagency working group process, which included participation by the Departments of Education, Justice, and Health and Human Services, among others. The groups assessed the validity of the measures and evaluated data sources, among other things. At the time of our review, the PRS was in its early stages and ONDCP had not issued its first report on the results of the system’s performance measures. ONDCP released its most recent annual PRS report in November 2015. The 2015 report assesses progress on the Strategy’s goals and the 28 performance measures and submeasures related to each of the Strategy’s seven objectives, which support the achievement of the goals. For each objective, the report classifies results on performance measures into five categories and identifies areas of progress on and challenges with achieving objectives. For example: Objective 1—Strengthen Efforts to Prevent Drug Use in Our Communities. The report indicates that sufficient progress has been made on reducing the average age of initiation for all illicit drugs to enable meeting the 2015 target. However, it notes that accelerated effort is needed to prevent youth marijuana use and counter youth perceptions that marijuana (including synthetic marijuana) use is not harmful. The report shows that the percent of respondents aged 12 to 17 who perceive a great risk in smoking marijuana once or twice a week decreased from 2009 to 2013, moving in the opposite direction of the 2015 target. Objective 3—Integrate Treatment for Substance Use Disorders into Health Care and Expand Support for Recovery. 
The report shows that the percent of treatment facilities offering at least four specified recovery support services, such as child care, employment assistance, and housing assistance, increased from 2008 to 2013 and exceeded the 2015 target. However, the report states that challenges persist in the integration of substance abuse treatment services into mainstream health care. For instance, the percent of the Health Resources and Services Administration's Health Center Program grantees providing substance use counseling and treatment services decreased from 2009 to 2013. According to the report, implementation of the Affordable Care Act presents opportunities to provide greater access to treatment for substance use disorders by, for example, efficiently integrating such treatment into the health care system and providing nondiscrimination in coverage for preexisting conditions. Objective 5—Disrupt Domestic Drug Trafficking and Production. According to the report, progress is being achieved in domestic law enforcement and efforts to disrupt or dismantle domestic drug trafficking organizations. The 2015 targets for both measures related to these efforts have been exceeded. The report also indicates that progress has been made on reducing the number of methamphetamine lab seizure incidents (a proxy for lab activity) from 2009 to 2013 but accelerated progress is needed to meet the 2015 target. Objective 6—Strengthen International Partnerships and Reduce the Availability of Foreign Produced Drugs in the United States. According to the report, key source and transit countries continue to demonstrate increased commitment to reducing drug trafficking and use through demand and supply reduction efforts. The targets for the two measures related to such commitments have both been met.
However, the report states that accelerated progress is needed in working with partner countries to reduce the cultivation of drugs and their production potential in Afghanistan, Burma, Laos, Mexico, and Peru. See attachment I for performance measures under each Strategy objective, progress toward 2015 targets, and ONDCP's assessment categorizations. ONDCP officials stated that actions taken in response to PRS results include Department of Education grants for school-based prevention activities to help educate students on the risks of using marijuana and increased funding to expand access to treatment to help address the rise in drug-induced deaths from opioid use, as discussed below.

Total Federal Spending for Drug Control Programs Has Increased since FY 2007

Federal Drug Control Spending on Treatment and Prevention Increased, While Law Enforcement and Interdiction Spending Remain Relatively Constant

According to ONDCP, federal drug control spending increased from $21.7 billion in FY 2007 to approximately $30.6 billion allocated for drug control programs in FY 2016, as shown in figure 2. Although total federal drug control spending increased from FY 2007 through FY 2016, spending on supply reduction programs, such as domestic law enforcement, interdiction, and international programs, remained relatively constant, at $13.3 billion in FY 2007 and $15.8 billion in FY 2016. However, federal spending for demand programs—treatment and prevention—steadily increased from FY 2007 through FY 2016, from $8.4 billion in FY 2007 to $14.7 billion in FY 2016. As a result, the proportion of funds spent on demand programs increased from 39 percent of total spending in FY 2007 to 48 percent in FY 2016.
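The demand-side shares cited above follow directly from the dollar figures. A quick check, using only the amounts given in this statement (the helper name is illustrative):

```python
# Demand-reduction (treatment and prevention) spending as a share of total
# federal drug control spending, in billions of dollars, per the figures above.
def demand_share(demand, total):
    """Return demand spending as a rounded percent of total spending."""
    return round(demand / total * 100)

print(demand_share(8.4, 21.7))   # 39  (FY 2007)
print(demand_share(14.7, 30.6))  # 48  (FY 2016)
```

The FY 2007 figures are also internally consistent: $13.3 billion in supply-side spending plus $8.4 billion in demand-side spending equals the $21.7 billion total.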
According to ONDCP’s Fiscal Year 2016 Budget and Performance Summary, ONDCP has prioritized treatment and recovery support services, stating that they are essential elements of the Strategy’s efforts to support long-term recovery among people with substance use disorders. Allocated funding for treatment increased in FY 2016 to approximately $13 billion, a 5 percent increase over FY 2015. These funds are used for early intervention programs, treatment programs, and recovery services. For example, according to ONDCP, approximately $8.8 billion was the amount estimated for benefit outlays by the Department of Health and Human Services’ (HHS) Centers for Medicare and Medicaid Services for substance use disorder treatment in both inpatient and outpatient settings for FY 2016. ONDCP also stated that preventing drug use before it starts is a fundamental element of the Strategy. Funding for prevention increased in FY 2016 to about $1.5 billion, a 10 percent increase from FY 2015, as shown in figure 3. Funding for treatment also increased from $12.5 billion in FY 2015 to $13.2 billion in FY 2016 in allocated funding. Figure 3 shows the increase in treatment and prevention spending for fiscal years 2007 through 2016. Additionally, in FY 2017, HHS’ Substance Abuse and Mental Health Services Administration (SAMHSA) requested $460 million for a new program (State Targeted Response Cooperative Agreements) to help expand access to treatment for opioid use disorders, as well as $15 million for evaluating the effectiveness of medication-assisted treatment programs to improve service delivery and decrease the incidence of opioid-related overdose and death (Cohort Monitoring and Evaluation of Medication Assisted Treatment Outcomes). These programs could result in increasing SAMHSA’s budget request for treatment programs to approximately $3 billion in FY 2017 from $2.5 billion enacted in FY 2016.
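The approximate $3 billion FY 2017 figure is consistent with adding the two new requests to the FY 2016 enacted level. A rough consistency check in billions of dollars, assuming the base FY 2017 request is near the FY 2016 enacted amount (the variable names are illustrative):

```python
# FY 2016 enacted SAMHSA treatment funding plus the two new FY 2017 requests
# cited above: State Targeted Response ($460M) and MAT evaluation ($15M).
enacted_fy2016 = 2.5          # billions
new_requests = 0.460 + 0.015  # billions
fy2017_request = enacted_fy2016 + new_requests
print(round(fy2017_request, 3))  # 2.975, i.e. approximately $3 billion
```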
Addressing the drug supply is categorized by three main functions: Domestic Law Enforcement, Interdiction, and International. For Domestic Law Enforcement, ONDCP noted that federal, state, local, and tribal law enforcement agencies play a key role in the Administration's approach to reduce drug use and its associated consequences. ONDCP also stated that interagency drug task forces, such as the High Intensity Drug Trafficking Areas (HIDTA) program, are critical to leveraging limited resources among agencies. Allocated funding for domestic law enforcement in FY 2016 is approximately $9.7 billion, a 4 percent increase from FY 2015 funding. Regarding Interdiction, the United States continues to face a serious challenge from the large-scale smuggling of drugs from abroad, which are distributed to every region of the nation. These funds support collaborative activities between federal law enforcement agencies, the military, the intelligence community, and international allies to interdict or disrupt shipments of illegal drugs, their precursors, and their illicit proceeds. Allocated funding in support of Interdiction for FY 2016 is approximately $4.5 billion, an increase of 12 percent from FY 2015. International functions focus on collaborative efforts between the U.S. Government and its international partners around the globe. According to ONDCP, illicit drug production and trafficking generate huge profits and are responsible for the establishment of criminal networks that are powerful, corrosive forces that destroy the lives of individuals, tear at the social fabric, and weaken the rule of law in affected countries. In FY 2016, approximately $1.6 billion was enacted, a 0.4 percent decrease from FY 2015. Figure 4 shows federal drug spending for Domestic Law Enforcement, Interdiction, and International activities.
ONDCP Spending Account For One Percent of Total Federal Drug Control Spending In addition to advising the President on drug-control issues and coordinating drug-control activities and related funding across the Federal government, ONDCP also directly oversees two drug-related functions for which it receives federal drug control funding —HITDAs and other federal drug control programs, such as the Drug Free Community (DFC) coalition grant program. Based on ONDCP’s spending in FY 2012 through its allocated funding in FY 2016 for these two functions, ONDCP’s drug- related spending account for 1 percent of the total federal drug control spending in the federal government. ONDCP’s requested funding for FY 2017 is 1 percent of the total federal drug control request. See figure 5 for allocated percentages. Chairman Johnson, Ranking Member Carper, and Committee members, this concludes my prepared statement. I would be happy to respond to any questions you may have. GAO Contact and Staff Acknowledgements If you or your staff members have any questions about this testimony, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other contributors included Kevin Heinz, Assistant Director, Aditi Archer, Lyle Brittan, Eric Hauswirth, Justin Snover, and Johanna Wong. Attachment I: ONDCP 2015 Performance Reporting System Report—Performance Measures for Strategy Objectives, Progress toward 2015 Targets, and Assessment Categorizations Attachment I: ONDCP 2015 Performance Reporting System Report—Performance Measures for Strategy Objectives, Progress toward 2015 Targets, and Assessment Categorizations Measure Objective 1—Strengthen Efforts to Prevent Drug Use in Our Communities Measure 1.1: Percent of respondents, ages 12–17, who perceive a great risk in smoking marijuana once or twice a week. 
Baseline: 49.0 percent (2009); progress to date: 39.5 percent (2013)
Measure 1.2: Percent of respondents, ages 12–17, who perceive a great risk in consumption of one or more packs of cigarettes per day. Baseline: 65.5 percent (2009); progress to date: 64.3 percent (2013)
Measure 1.3: Percent of respondents, ages 12–17, who perceive a great risk in consuming four or five drinks once or twice a week. Baseline: 39.6 percent (2009); progress to date: 39.0 percent (2013)
Measure 1.4: Average age of initiation for all illicit drugs. Baseline: 17.6 years (2009); progress to date: 19.0 years (2013)
Measure 1.5: Average age of initiation for alcohol use. Baseline: 16.9 years (2009); progress to date: 17.3 years (2013)
Measure 1.6: Average age of initiation for tobacco use. Baseline: 17.5 years (2009); progress to date: 17.8 years (2013)
Assessment: Progressing, accelerated progress required to meet 2015 target
Baseline: 20.7 years (2009); progress to date: 21.6 years (2013). Assessment: Target met or exceeded, progress should be maintained through 2015
Baseline: 18.9 years (2009); progress to date: 18.4 years (2013)
Objective 2—Seek Early Intervention Opportunities in Health Care
Measure 2.1: Percent of Health Center Program grantees providing SBIRT services. Baseline: 10.3 percent (2009); progress to date: 16.9 percent (2013)
Measure 2.2: Percent of respondents in the past year using prescription-type drugs non-medically, ages 12–17. Baseline: 7.7 percent (2009); progress to date: 5.8 percent (2013)
Measure 2.3: Percent of respondents in the past year using prescription-type drugs non-medically, ages 18–25. Baseline: 15 percent (2009); progress to date: 12.2 percent (2013)
Measure 2.4: Percent of respondents in the past year using prescription-type drugs non-medically, ages 26+. Baseline: 4.7 percent (2009); progress to date: 4.8 percent (2013); 2015 target: 4.0 percent. Assessment: No progress to date, accelerated progress required to meet 2015 target
Objective 3—Integrate Treatment for Substance Use Disorders into Health Care and Expand Support for Recovery
Measure 3.1: Percent of treatment plans completed. Baseline: 45.1 percent (2007); progress to date: 43.7 percent (2011)
Baseline: 21.6 percent (2009); progress to date: 20.0 percent (2013)
Measure 3.3: Percent of treatment facilities offering at least 4 of the standard spectrum of recovery services (child care, transportation assistance, employment assistance, housing 
assistance, discharge planning, and after-care counseling)
Objective 4—Break the Cycle of Drug Use, Crime, Delinquency, and Incarceration
Measure 4.1: Percent of residential facilities in the juvenile justice system offering substance abuse treatment. Baseline: 35.5 percent (2008); progress to date: 41.0 percent (2013); 2015 target: 39.0 percent. Assessment: Target met or exceeded, progress should be maintained through 2015
Baseline: 38.8 percent (2008); progress to date: 45.3 percent (2012)
Measure 4.2: Percent of treatment plans completed by those referred by the criminal justice system. Baseline: 48.8 percent (2007); progress to date: 47.5 percent (2011)
Objective 5—Disrupt Domestic Drug Trafficking and Production
Measure 5.1: Number of domestic Consolidated Priority Organization Targets linked organizations disrupted or dismantled.* Baseline: 296 (2009); progress to date: 473 (2013)
Measure 5.2: Number of Regional Priority Organization Targets linked organizations disrupted or dismantled. Baseline: 119 (2009); progress to date: 153 (2014). Assessment: Target met or exceeded, progress should be maintained through 2015
Measure 5.3: Methamphetamine lab activity (as measured by number of methamphetamine lab seizure incidents). Baseline: 12,852 (2009); progress to date: 11,329 (2013). Assessment: Progressing, accelerated progress required to meet 2015 target
Objective 6—Strengthen International Partnerships and Reduce the Availability of Foreign Produced Drugs in the United States
Measure 6.1: Percent of selected countries that increased their commitment to supply reduction. Baseline: 2009 or earliest available [Baseline not provided in PRS report]; progress to date: 100 percent
Measure 6.2: Percent of selected countries that increased their commitment to demand reduction. Baseline: 2009 [Baseline not provided in PRS report]; progress to date: 100 percent
Measure 6.3: Percent of selected countries showing progress since 2009 in reducing either cultivation or drug production potential. Baseline: 2009 [Baseline not provided in PRS report]; progress to date: 29 percent
Baseline: 65 (2009); progress to date: 72 (2014)
Measure 7.1: Increase timeliness (year‐end to date‐of‐release) of select Federal data sets
Objective 7—Improve 
Information Systems for Analysis, Assessment, and Local Management
Measure 7.1 values: baseline 17.5 months; progress to date: 23.5 months (2011)
Measure 7.2: Increase the utilization (number of annual web hits, or number of documents referencing the source) of select Federal data sets by 10 percent from the baseline. Substance Abuse and Mental Health Data Archive (SAMHDA); National Survey on Drug Use and Health (NSDUH) (journal articles referencing NSDUH): 113 (2014)
Measure 7.3: Increase Federal data sets that establish feedback mechanisms to measure usefulness (surveys, focus groups, etc.)—SAMHSA Funded Data Sets. Progress to date: 1. Assessment: Target met or exceeded, progress should be maintained through 2015
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Policymakers, health care providers, and the public are concerned about the nation's current drug epidemic and its effects, as drug overdose deaths surpassed auto accidents as the leading cause of injury death in recent years. To help address national drug control policy efforts, ONDCP coordinates and oversees implementation of a National Drug Control Strategy to reduce illicit drug use, among other things. This statement addresses (1) what progress has been made toward achieving National Drug Control Strategy goals and how ONDCP monitors progress and (2) trends in federal drug control spending. This statement is based upon findings GAO reported in March 2013 and December 2015, analysis of ONDCP's Budget and Performance Summaries, and selected updates in 2016. For the updates, GAO analyzed publicly available data sources that ONDCP uses to assess progress on Strategy goals, reviewed ONDCP Performance Reporting System reports, and interviewed ONDCP officials. The Office of National Drug Control Policy (ONDCP) and federal agencies have made mixed progress toward achieving the goals articulated in the 2010 National Drug Control Strategy (Strategy), and ONDCP has established a mechanism to monitor and assess progress. In the Strategy, ONDCP established seven goals related to reducing illicit drug use and its consequences by 2015. As of May 2016, our analysis indicates that ONDCP and federal agencies have made moderate progress toward achieving one goal, limited progress on three goals, and no progress on the three other goals. Overall, none of the goals in the Strategy have been fully achieved. In March 2013, GAO reported that ONDCP established the Performance Reporting System to monitor and assess progress toward meeting Strategy goals and objectives. GAO reported that the system's 26 new performance measures were generally consistent with attributes of effective performance management. 
A 2015 ONDCP report on progress toward these measures similarly identified mixed results: some of the measures had met or exceeded targets, some had significant progress underway, and some had limited or no progress. Federal drug control spending increased from $21.7 billion in fiscal year (FY) 2007 to approximately $30.6 billion in allocated funding in FY 2016, as shown in figure 1. Although total federal drug control spending increased from FY 2007 through FY 2016, spending on supply reduction programs, such as domestic law enforcement, interdiction, and international programs, remained relatively constant, at $13.3 billion in FY 2007 and $15.8 billion allocated in FY 2016. In contrast, federal spending on treatment and prevention steadily increased over the same period, from $8.4 billion in FY 2007 to $14.7 billion allocated in FY 2016.
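The spending trends above can be verified directly from the rounded figures in the statement. A minimal illustrative sketch (dollar amounts in billions; the helper function and variable names are not from the source):

```python
# Reported federal drug control spending, in billions of dollars
# (FY 2016 values are allocated funding).
fy2007 = {"total": 21.7, "supply reduction": 13.3, "treatment and prevention": 8.4}
fy2016 = {"total": 30.6, "supply reduction": 15.8, "treatment and prevention": 14.7}

def pct_change(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

for category in fy2007:
    print(category, pct_change(fy2007[category], fy2016[category]))
# total: 41, supply reduction: 19, treatment and prevention: 75
```

Note that supply-reduction and treatment/prevention spending sum to the reported totals in both years, and that the roughly 19 percent supply-side growth is modest next to the 75 percent growth in treatment and prevention, consistent with the "relatively constant" characterization above.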
Background Categories of PCS Moves The PCS program enables the military services to move personnel to assignments at new locations, and it supports a wide range of national security requirements and institutional needs. PCS moves are distinct from deployments or temporary duty travel, and they are grouped into six major categories (see table 1). DOD presents these six categories with associated cost information in its military personnel budget justification materials. Expenses Associated with PCS Moves Upon receiving orders requiring a PCS move, a servicemember arranges the move, often with assistance from a local military transportation office. The servicemember is entitled to reimbursement for certain travel and transportation expenses associated with the move. These expenses are authorized under Chapter 8 of Title 37 of the United States Code and are specified in DOD’s Joint Travel Regulations. Allowable expenses include those related to, among other things, shipment of household goods (that is, items associated with the home and personal effects of servicemembers and dependents) and privately owned vehicles, commercial air travel, and temporary lodging. A list of allowable expenses is provided in table 6 in appendix III. Costs in excess of allowable expenses are borne by the servicemember. Time-on-Station Requirements Required time-on-station lengths—the minimum period of time between PCS moves, subject to certain exceptions—are specified in DOD and service guidance. For overseas assignments, DOD’s Joint Travel Regulations specifies time-on-station lengths that range from 12 to 36 months. For assignments within the continental United States, time-on-station lengths are specified in service guidance and are generally either 36 months (Navy and Marine Corps) or 48 months (Army and Air Force), subject to certain exceptions. Time-on-station length may be reduced if servicemembers qualify for an exception or obtain an approved waiver. 
As such, time-on-station requirements can be met in two ways: (1) a servicemember remains in a location for the specified minimum length of time, or (2) a servicemember qualifies for an exception or obtains an approved waiver to move prior to the specified minimum length of time. Time-on-station lengths may vary by specific rank or position. For example, the minimum time-on-station for Air Force lieutenants within the continental United States is 36 months, but it is 48 months for other officers. Also, Marine Corps officers assigned to certain acquisition-related positions have a minimum time-on-station of 36 months, and Army drill sergeants have a time-on-station of 24 months, with an option to extend to 36 months. OSD’s Report to Congress on Extending Time-on-Station In September 2014, OSD reported to Congress the results of a study on increasing time-on-station lengths as a potential approach to reducing PCS moves and costs. OSD’s report, which was prepared in response to Senate Report 112-196, was based in part on research conducted by the RAND Corporation. To inform its report to Congress, OSD tasked RAND to study issues related to time-on-station. Among other things, RAND worked with the Defense Manpower Data Center to survey servicemembers on their perceptions of extending time-on-station lengths. In July 2014, RAND provided OSD a draft of its study findings. According to RAND officials, they planned to finalize their study and issue a final report in 2015, but as of July 2015 they had not done so. OSD stated that it did not agree with increasing time-on-station lengths across the board and cited survey results showing that nearly 60 percent of servicemembers were unwilling to voluntarily extend their time-on-station. At the same time, according to OSD, a significant minority of servicemembers would favor extending their time-on-station under some circumstances. 
Because these preferences were highly individual, OSD concluded that it was practically impossible to identify the likely preferences of a servicemember without a direct inquiry. DOD Inspector General Report on PCS Efficiencies In May 2014, the DOD Office of Inspector General reported on opportunities for cost savings and efficiencies in the PCS program. The review was conducted in response to a House report accompanying a proposed defense appropriations bill for fiscal year 2014. The Inspector General’s report made seven recommendations for improved management practices in the non-temporary storage, household goods shipments, and air travel features of the PCS program. Service officials told us that the Department has begun implementing changes in response to the report, including altering policies for non-temporary storage to reduce the incidence of continued payments by DOD for storage units that had exceeded the entitlement period for retired and separated servicemembers. Roles and Responsibilities for the PCS Program Management and oversight of the PCS program is a responsibility shared among multiple offices, including the Under Secretary of Defense for Personnel and Readiness, the Under Secretary of Defense (Comptroller), and the military services. The Under Secretary of Defense for Personnel and Readiness is the Secretary of Defense’s senior policy advisor on recruitment, career development, and pay and benefits—including central management of the PCS program. Within the Office of the Under Secretary of Defense for Personnel and Readiness, the Defense Travel Management Office coordinates updates to the Joint Travel Regulations that establish travel and transportation guidance for the department. The Under Secretary of Defense (Comptroller) develops and oversees execution of the Department’s annual budget, including military personnel appropriations and obligations for PCS costs. 
The Comptroller also publishes and maintains the DOD Financial Management Regulation, which establishes financial management requirements, systems, and functions for all DOD components—including the preparation of budget materials. According to DOD officials, PCS moves and costs are tracked within the services by offices responsible for financial management and budget, and service time-on-station policies are managed by offices responsible for personnel assignments and human resources. PCS Per-Move Costs Have Increased Since 2001, and DOD Has Not Evaluated the Factors Contributing to This Increase DOD has experienced an overall increase in PCS per-move costs since 2001, but it does not have complete, consistent data on the program and has not evaluated the specific factors contributing to per-move cost growth. Our analysis of DOD budget data shows that average PCS per-move costs, after accounting for inflation, increased by 28 percent from fiscal year 2001 to fiscal year 2014. The overall increase in per-move costs varied across the six PCS move categories and the four military services. In addition, detailed PCS data for fiscal years 2010 through 2014 show that PCS per-move costs were consistently higher for officers than for enlisted personnel, due to differences in certain cost categories. However, we found that the services have not reported complete, consistent PCS data, thereby limiting the extent to which DOD can identify and evaluate changes within the PCS program. Program changes and other factors affect PCS costs, but the specific factors driving the overall growth in per-move costs are unclear, and DOD does not know whether the PCS program is efficiently supporting requirements to assign personnel in new locations. 
PCS Costs Increased While PCS Moves Declined from Fiscal Years 2001 to 2014 Our analysis of DOD budget documents shows that from 2001 to 2014, the department’s total PCS costs increased by 13 percent, from $3.8 billion to $4.3 billion, while the number of PCS moves declined by 12 percent, from 731,967 to 646,387. As a result, per-move costs increased overall by 28 percent, from $5,238 to $6,727, during this period. Per-move costs generally increased through fiscal year 2009, peaking at $7,308, and then generally decreased through fiscal year 2014 (see fig. 1). In fiscal year 2009 DOD began requiring the services to record obligations at the time PCS orders were issued, rather than at the time PCS moves occurred. As a result, the services’ actual fiscal year 2009 obligations included a one-time increase of $745.2 million. Per-Move Costs Varied across Move Categories From fiscal year 2001 through 2014, per-move costs varied across the six PCS move categories (see fig. 2). The highest per-move cost was for rotational travel ($13,336 on average), while the lowest per-move cost was for accession travel ($2,289 on average). More detailed results of our overall analysis of PCS move categories are presented in appendix II. Accession and separation moves accounted for 58 percent of moves and 23 percent of PCS costs from fiscal year 2001 through fiscal year 2014. Service officials stated that they have limited ability to control the numbers of these types of moves, which are primarily driven by changes in end strengths, and therefore they viewed these moves as a non-discretionary expense. Similarly, they stated that training and organized unit moves are a non-discretionary expense because these are driven by training requirements, in the case of training moves, and by strategic decisions on force positioning, in the case of organized unit moves. Operational and rotational moves accounted for 34 percent of moves and 64 percent of costs. 
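The per-move arithmetic above can be sketched directly from the reported figures. The quotients derived from the dollar totals differ slightly from the reported per-move values because the totals are rounded to tenths of a billion, so the sketch below uses the per-move figures as reported; variable names are illustrative.

```python
# Reported PCS figures for fiscal years 2001 and 2014.
moves_fy2001, moves_fy2014 = 731_967, 646_387        # number of PCS moves
per_move_fy2001, per_move_fy2014 = 5_238, 6_727      # average cost per move, dollars

def pct_change(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

print(pct_change(moves_fy2001, moves_fy2014))        # -12 (moves declined)
print(pct_change(per_move_fy2001, per_move_fy2014))  # 28 (per-move cost growth)
```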
Service officials stated that these moves are the primary categories over which the military services have discretionary control—that is, some operational and rotational moves can be delayed to a subsequent fiscal year if sufficient funds are not available. Both we and the RAND Corporation have previously reported that DOD’s overseas presence is the primary factor in rotational travel costs, and we have recently reported that DOD’s overseas military presence represents an area for potential cost savings. For example, permanent overseas stationing—and associated rotational moves—may come at significantly higher costs than alternative approaches, such as deploying domestically stationed forces when needed. We recommended in 2012 that DOD conduct a comprehensive reassessment of its presence in Europe, including the costs of various alternatives. As of March 2015, DOD had partially addressed this recommendation by refining its posture-planning process to require specified cost information, such as rough order-of-magnitude costs for new posture initiatives. Per-Move Costs Increased and Varied Across the Services Per-move costs across the four services were higher in fiscal year 2014 than in fiscal year 2001, and average costs varied among them (see fig. 3). Per-move costs increased the most for the Marine Corps (42 percent), and the least for the Air Force (23 percent). However, the Air Force had the highest average per-move cost ($8,548), and the Marine Corps had the lowest ($4,679). Air Force officials told us that they had not conducted any analyses to investigate why Air Force PCS moves cost more than those at the other services, but they suggested that this difference could be explained by a difference in the proportion of officer and enlisted ranks completing PCS moves. 
Our analysis shows that for fiscal years 2010 through 2014, the Air Force moved relatively more officers than did the other services—officers accounted for 21 percent of the Air Force’s total PCS moves, as compared with 10 percent for the Marine Corps, the service with the lowest proportion of officer moves. However, the officials stated that they have not conducted analyses to determine the factors contributing to the differences in per-move costs that we identified. PCS Costs Were Consistently Higher for Officers than for Enlisted Personnel Due to Differences in Certain Cost Categories Our analysis of detailed cost data by service for fiscal years 2010 through 2014 shows that PCS costs were consistently higher for officers than for enlisted personnel across all the services. During this time period, average per-move costs were 134 percent higher for officers ($12,983 for officers and $5,553 for enlisted personnel). The results of our analysis of detailed PCS cost and move data are included in appendix III. This cost difference was generally due to higher allowances for officers in certain cost categories—that is, household goods shipments, travel expenses, and dislocation allowances. Household goods shipments, including expenses associated with packing, transporting, storing, and unpacking home and personal items, were 85 percent higher for officers than for enlisted personnel. These expenses accounted for approximately 60 to 65 percent of the costs of a PCS move. The costs of these shipments are based on the weight of the goods shipped, and according to law and DOD regulation, higher ranking personnel are provided a higher weight allowance for household goods shipments. For example, the household goods weight allowance for an officer in grade O-6 without dependents is 18,000 pounds, and for an enlisted servicemember in grade E-6 without dependents it is 8,000 pounds. 
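The officer/enlisted gap cited above follows directly from the reported averages; a small illustrative check (figures as reported in the analysis):

```python
# Average per-move costs for fiscal years 2010-2014, as reported (dollars).
officer_avg = 12_983
enlisted_avg = 5_553

# How much higher officer per-move costs were, as a whole percent.
pct_higher = round(100 * (officer_avg - enlisted_avg) / enlisted_avg)
print(pct_higher)  # 134
```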
Dislocation allowances, payments made to partially reimburse servicemembers for miscellaneous expenses incurred in relocating, were 49 percent higher for officers than for enlisted personnel. They accounted for 13 percent of costs for officer moves and 9 percent for enlisted servicemember moves. Travel expenses, which include commercial airfare, reimbursements for travel on another type of transportation, and per diem expenses, were 29 percent higher for officers than for enlisted personnel. They accounted for 12 percent of costs for officer moves and 21 percent for enlisted servicemember moves. Per-move costs from fiscal year 2010 through 2014 varied widely among the services, including among the same ranks or move categories. For example, the average per-move cost to transport or store privately owned vehicles was $1,288 for an Army officer and $3,165 for a Marine Corps officer. Dependent move costs also varied across ranks and services. The Marine Corps had the highest average cost for dependent moves—$1,663 for dependents of officers and $1,410 for dependents of enlisted personnel. The average costs for the same categories in the Air Force were $535 and $614, respectively. Marine Corps officials stated that, due to a service approach intended to improve the stability of military families, the Marine Corps has seen an increase in personnel completing PCS moves without their dependents so that military spouses can have increased career stability and dependents can attend the same schools. According to the officials, while not completely accounting for the cost increases we observed, this policy change has increased PCS costs due to the need to fund two separate PCS moves—one for the servicemember and one for their dependents. Certain per-move costs decreased significantly from fiscal years 2010 through 2014. 
For example, costs for Army officer moves decreased by 21 percent, and the cost of shipping or storing privately owned vehicles for Marines decreased by 40 percent for officers and 27 percent for enlisted personnel. Services Do Not Report Complete and Consistent PCS Data While aggregate totals for PCS moves and costs were available and could be compared across the services, we found that data for specific PCS cost categories are not reported to Congress completely or consistently. More specifically, when we compared the data reported to Congress in each of the services’ budget justification materials from fiscal years 2010 to 2014, we found that the services did not report complete and consistent data for non-temporary storage costs, temporary lodging expenses, or tour extension incentive payments (see table 2). For example, we found that only the Marine Corps reported both the costs and the number of moves associated with non-temporary storage and temporary lodging. As a result, we could not determine the per-move costs of non-temporary storage and temporary lodging for the Army, the Navy, or the Air Force over the period of our review. We determined that the Marine Corps’ per-move cost for non-temporary storage increased from $603 to $1,486 (147 percent) for officers and from $602 to $1,337 (122 percent) for enlisted personnel from fiscal years 2010 to 2014. In the absence of data from the other services, it is difficult to determine whether the increase in the Marine Corps’ per-move costs represents an outlier or reflects a departmentwide trend. Furthermore, in 2014 the Office of the DOD Inspector General reported on waste resulting from DOD’s continuing to pay for retired and separated servicemembers’ non-temporary storage units that had exceeded the entitlement period. DOD and service officials stated that changes are currently being implemented to respond to the Inspector General report’s findings and resolve the issue. 
Until this issue is resolved, the inconsistency of reporting for this cost category may limit DOD’s ability to track whether these changes are having the intended effect of reducing costs for non-temporary storage. In addition, the services do not consistently include some costs associated with PCS moves in their PCS budgets. For example, we found that the Army and the Air Force included tour extension incentive payments as part of the special pays budget rather than the PCS budget when reporting these costs to Congress as part of their budget justification materials. The Navy and Marine Corps, on the other hand, included these payments in both the special pays and PCS budgets, with tour extension benefits related to transportation reported as part of the PCS budget. Tour extension incentive payments encourage servicemembers to delay a PCS move returning them from an overseas location, and as such these payments have a direct impact on the PCS budget. Similarly, family separation allowance payments (payments provided to servicemembers making a move for PCS, temporary duty, or deployment without their dependents) are listed separately from PCS costs in the services’ budget documents that we reviewed. Servicemembers meeting the eligibility requirements for a tour extension incentive may choose either (1) a monetary incentive in the form of a special pay or bonus, or (2) a period of rest and recuperative absence and round-trip transportation to the contiguous United States at government expense. DOD guidance establishing priorities for improving financial management information states that, because budgetary information is used widely and regularly for management, DOD will place the highest priority on improving its budgetary information and processes. Federal accounting standards similarly emphasize the need for managers to have relevant and reliable cost information to assist Congress and executives in making decisions about allocating federal resources, authorizing and modifying programs, and evaluating program performance. 
The standards also state that information on costs should be reported consistently, including standardizing terminology to improve communication among federal organizations and users of cost information. DOD’s Financial Management Regulation prescribes a budget and accounting classification that is to be used for preparing budget estimates, including the budget justification materials we reviewed. Our analysis of detailed cost data reported in the services’ budget materials for fiscal years 2010 through 2014 and DOD’s guidance on the presentation of budget justification materials found that the services do not report complete and consistent PCS cost and move data in their budget materials because (1) DOD guidance does not always require them to do so, (2) some services are not following existing DOD guidance, and (3) some of the services are providing more detailed data than is required by DOD guidance. For example, DOD guidance specifies that no unit of measure (such as the number of moves) is to be reported for non-temporary storage, and most of the services do not report these data. DOD guidance specifies that both moves and costs should be reported for temporary lodging expenses. However, the Army, the Navy, and the Air Force are not reporting the required data on the number of moves associated with temporary lodging expenses. DOD guidance specifies that tour extension payments be included in the special pays budget rather than the PCS budget, although these payments are also related to the PCS program. The Navy and the Marine Corps reported some costs for tour extensions in the special pays budget and other costs in the PCS budget. The Marine Corps was the only service that reported move data for non-temporary storage, and this level of detail is not required by DOD guidance. 
Because the services do not report complete and consistent data in their PCS budgets, the resulting information is often not comparable and prevents decisionmakers from having a comprehensive view of the costs associated with PCS. Furthermore, without complete and consistent data on PCS costs and moves, DOD’s ability to analyze the factors that influence PCS costs, make informed decisions about the PCS program, and measure the impact of any changes made may be limited. For example, according to service officials, the services recently made changes to the policy of providing storage space for servicemembers completing an overseas PCS move in an effort to save costs, but without complete and consistent PCS cost data DOD will be unable to determine whether this policy change is having the intended effect. Program Changes and Other Factors Affect PCS Costs, but the Specific Factors Driving Overall Per-Move Cost Growth Are Unclear Changes to the PCS program can affect PCS costs, and multiple changes have been made to the PCS program. For example, changes were made to the DOD Personal Property System, which the United States Transportation Command uses to coordinate the shipment of household goods during a PCS move. The changes, which began in fiscal year 2009, included providing full value replacement and repair (rather than partial value) for damaged or lost household goods at no additional cost to the servicemember; on-line claims filing and direct claims settlement between servicemembers and transportation service providers; and a change from lowest cost to best value transportation services. 
Also, in 2008, the temporary lodging expense allowance was increased from $180 per day to $290 per day, and the maximum amount of time for which servicemembers and dependents could be authorized reimbursement for temporary lodging during a PCS move involving a major disaster or a housing shortage caused by a sudden increase of servicemembers in a specific area was increased from 20 days to 60 days. OSD and service officials also cited factors outside the PCS program affecting PCS costs, including end strength fluctuations, wartime operational tempo, and higher fuel costs. Across the services, officials consistently stated that the major influence on PCS costs was fluctuations in end strengths, leading to an increased number of moves to train new personnel and staff positions vacated by servicemembers leaving the military. OSD officials stated that wartime operational tempo associated with operations in Iraq and Afghanistan was a major reason for the increase in PCS moves and the associated PCS cost growth, due in part to moving personnel to locations in preparation for deployments. For example, DOD officials told us that units preparing to deploy are staffed to higher levels during wartime than during peacetime, and this staffing requires moving more personnel to fill these units. OSD and service officials stated that PCS cost growth was also caused by increases in fuel costs that have occurred since fiscal year 2001, given the large influence that fuel costs have on specific PCS costs such as air travel and cargo shipments. The Marine Corps also expanded the use of incentives to encourage Marines to extend their overseas assignments. According to the Marine Corps, this initiative reduced the number of rotational moves by 10 percent from fiscal years 2013 to 2014. 
Although PCS program changes, factors outside the PCS program, and individual efficiency initiatives can affect PCS costs, it is unknown what the collective impact of these factors has been on PCS per-move costs since fiscal year 2001. That is because DOD has not conducted an evaluation of the PCS program to determine whether it was efficiently supporting requirements for assigning personnel to new locations while reimbursing them for the costs of PCS moves. DOD officials did not know when the last evaluation of the PCS program had been conducted. Officials also stated that they believe they have limited ability to control the factors that contribute to increases in PCS costs because PCS budgets are often driven by requirements outside the PCS program, such as the number of personnel stationed overseas. DOD guidance requires that the services analyze the anticipated increases or decreases in PCS costs resulting from any proposed changes to PCS assignment policies, and GAO’s work on strategic human capital management has found that high-performing organizations periodically reevaluate their human capital practices to ensure that resources are properly matched to the needs of the current environment. Standards for Internal Control in the Federal Government state that an agency’s internal controls should provide reasonable assurances that operations are effective and efficient, and that agencies should examine and use the information to make decisions and monitor programs. Without periodic evaluations of the efficiency of the PCS program, DOD will not have an analytical basis for identifying changes in PCS per-move costs over time and the specific factors associated with such changes. It may also be difficult for DOD to identify opportunities where efficiencies could be realized in the PCS program, without significantly impacting program performance or servicemember morale. 
This type of analysis could better position DOD to identify and take steps for managing and controlling PCS cost growth. DOD Does Not Have Information for Determining Whether Personnel Are Meeting Time-on-Station Requirements DOD does not have information for determining whether personnel are meeting time-on-station requirements. DOD guidance specifies time-on-station lengths for U.S. and overseas locations and also allows for personnel to move prior to reaching these lengths if they qualify for an exception or obtain a waiver. However, DOD does not have complete and consistent data on the reasons why PCS moves occur prior to reaching specified lengths, because the services (1) do not maintain required data on their usage of exceptions, and (2) do not have a requirement to maintain data on their usage of waivers. Moreover, service data on time-on-station lengths are limited. DOD Does Not Have Complete and Consistent Data on the Reasons Why PCS Moves Occur before Personnel Reach Minimum Time-on-Station Lengths DOD does not have complete and consistent data on the reasons PCS moves occur before personnel reach minimum time-on-station lengths. We found that the military services (1) do not maintain required data on their usage of exceptions and (2) do not have a requirement to maintain data on their usage of waivers. DOD and service guidance allow servicemembers to move prior to reaching the minimum time-on-station length for a variety of reasons, and these early moves require servicemembers to either qualify for an exception or obtain an approved waiver. DOD guidance requires the military departments to maintain data on exceptions to help DOD determine the effectiveness of assignment policies. DOD does not have a similar requirement for maintaining data on waivers. 
Waivers are granted on a case-by-case basis by senior officials and, according to service officials, the approval of a waiver generally depends on whether moving a servicemember before the minimum time-on-station length is in the best interests of the service from the standpoint of operational necessity. OSD (Personnel and Readiness) officials told us that although the services are not required to maintain data on waivers, they expect the services to be able to provide these data, along with exception data, if requested. In addition, Standards for Internal Control in the Federal Government state that program managers need data to determine whether they are meeting their goals for accountability for effective and efficient use of resources. Additionally, these data should be identified and captured in a form and time frame that permits people to perform their duties efficiently. We found, however, that the services are generally not maintaining complete and consistent data on exceptions or waivers. The Marine Corps could not provide us data on the number of moves that did not meet time-on-station length requirements, nor on the number of exceptions or waivers. While the Army, the Navy, and the Air Force could provide data on the number of moves that did not meet time-on-station length requirements, they could not provide data showing that these moves had an associated exception or waiver. DOD guidance specifies that the military departments are required to maintain not only data on the number of exceptions but also historical data that shall enable the military services and DOD to determine the effectiveness of assignment policies and the cost-effectiveness of statutory entitlements. OSD (Personnel and Readiness) officials stated that while they expect the services to maintain data on exceptions and other data such as waivers for availability in case they are needed to support analyses, they have not requested or analyzed data from the services on exceptions or waivers. 
Service officials stated that other than the DOD requirement to maintain data on exceptions, they do not have internal requirements for this information, and therefore do not systematically capture these data in their personnel databases. Without the services' maintaining complete and consistent data on exceptions and waivers, DOD is limited in its ability to determine the effectiveness of assignment policies as noted in DOD guidance, such as analyzing whether current assignment policies or other factors may be causing PCS moves to occur before personnel reach their minimum time-on-station lengths. In addition, DOD does not know how many moves occurring earlier than minimum time-on-station lengths are being approved using specific types of exceptions or waivers, and whether there are patterns in the use of exceptions or waivers for such moves. For example, information on the use of exceptions and waivers could help to identify personnel in particular locations or in specific military occupations who are experiencing shorter than average time-on-station, and could help determine whether management focus is needed to address issues underlying these trends. Such information could provide insight into how well personnel management policies, procedures, and internal controls are working. Exception and waiver data are also important for evaluating potential policy changes, such as whether increasing minimum time-on-station lengths would lead to an actual increase in time between PCS moves and a reduction in PCS costs. As we observed earlier, for example, the Air Force in 2009 increased minimum time-on-station lengths for locations within the continental United States from 36 months to 48 months but did not see a corresponding increase in actual time-on-station lengths. Without exception and waiver data, it is difficult to determine why this change did not have a significant impact on actual time-on-station lengths. 
Service Data on Time-on-Station Lengths Are Limited The military services do not have a requirement to track time-on-station length for their personnel, and their personnel information systems could not readily generate time-on-station data. Consequently, the availability of time-on-station data varied across the services. For example, each service had different years of available data. In addition, one service provided time-on-station data for officers and enlisted personnel separately, and these data covered different time periods. Figure 4 depicts the extent to which the services were able to provide data related to time-on-station. The Army was not able to provide data on enlisted servicemembers for 2006 through 2008 continental U.S. assignments, or data on the median time-on-station for enlisted servicemembers for 2006 through 2014 overseas assignments. The data limitations affected our ability to determine the extent to which military personnel met time-on-station lengths specified in guidance. However, on the basis of available data, we were able to make the following observations with regard to each service: Army time-on-station: The Army data show that from 2009 through 2014 at least half of the enlisted and officer moves within the continental United States occurred before the time-on-station minimum, and officer personnel moves overseas generally occurred after the time-on-station minimum. Navy time-on-station: The Navy data show that from 2001 through 2014 at least half of enlisted personnel moves within the continental United States occurred after the time-on-station minimum, while at least half of the officer moves occurred before the minimum. Air Force time-on-station: Of the four services, only the Air Force made a major policy change to minimum time-on-station length—a 12-month increase, from 36 to 48 months, for locations in the continental United States—implemented in 2009. 
Our analysis, however, indicated that this policy change had only a minor effect on actual times between moves. Between fiscal years 2009 and 2014, median time-on-station increased from 50 months to 51 months for enlisted personnel. During the same time period, median time-on-station for officer personnel decreased from 43 months to 37 months. Marine Corps time-on-station: The Marine Corps data show that from 2008 through 2014 at least half of the enlisted moves within the continental United States occurred before the time-on-station length minimum. For more detail on our analysis of the services' time-on-station data, see appendix V. OSD's Report on Increasing Time-on-Station Addressed the Elements in Congressional Direction and Used Approaches Consistent with Generally Accepted Research Standards, but Could Have Contained Additional Information In its September 2014 report on increasing time-on-station, OSD addressed the elements that were specifically identified in congressional direction. OSD also used approaches consistent with generally accepted research standards in preparing its report. Specifically, the report's design, execution, and presentation of results were consistent with the standards. Nevertheless, OSD could have included additional information—such as a description of the model and underlying assumptions used to identify cost savings associated with increasing time-on-station—that would have improved the utility of the report for decisionmakers. OSD's Report Addressed the Four Elements in Congressional Direction, but Additional Details Would Have Made the Report More Informative for Decision-Makers OSD's report addressed the four elements specified in Senate Report 112-196. 
These elements were as follows: (1) planning to increase tour length (time-on-station); (2) analyzing the impact of increasing tour length on families, quality of life, and job performance; (3) making recommendations to mitigate certain impacts from increasing tour length; and (4) identifying cost savings. Table 3 summarizes our assessment of the extent to which the OSD report addressed each element. Although OSD’s report addressed the elements specified in Senate Report 112-196, the discussion for some of the elements in the report was limited, and OSD did not include additional details that, in our view, would have more fully informed congressional decision makers on the department’s plan. For example, the report did not explain why OSD chose the options for increasing time-on-station that it presented in the report. Additionally, the report’s discussion did not provide details on potential cost savings that may be achieved. More specifically, the report did not fully describe the model and underlying assumptions it used to identify cost savings associated with increasing time-on-station, or the methodological decisions made that may have affected the results generated by the model. OSD’s report contained a footnote stating that the results of a model created by RAND estimate average annual savings in the range of $25 million to $30 million. The footnote stated that the assumptions leading to this estimate include increasing the length of overseas time-on-station by 1 year for 10 percent of the DOD population who would be relocating. However, the RAND study indicated that the model also predicted that these savings could rise to almost $45 million annually if the time-on-station for 10 percent of the population who would be relocating within the continental United States also were increased by 1 year. 
The OSD report did not provide additional information about the model, such as a discussion about how the model derived the average cost per move that it used to estimate savings. In our view, this additional information would have been helpful to fully inform congressional decision makers about OSD's plans to increase time-on-station. OSD's Report Used Approaches Consistent with Generally Accepted Research Standards, but Aspects of the Design and Execution Could Have Been Improved OSD's September 2014 report used approaches consistent with generally accepted research standards that we derived from our previous work, other research literature, and DOD guidance. These generally accepted research standards are categorized into three overarching areas—design, execution, and presentation—with specific components for each of the areas that allowed us to determine whether a standard was met. Our assessment of OSD's report included examining the report as well as supporting and supplemental documentation, which included a preliminary report submitted to Congress in March 2014, a RAND study that OSD commissioned to help inform its report (OSD had a draft of the RAND study when it was developing its report to Congress), and contract documents related to the RAND study. We also interviewed officials from OSD and RAND who had knowledge of the study. We determined that the report's design, execution, and presentation of results were generally consistent with the research standards we identified, and, as a result, we believe that the report's key findings were reasonable. We determined that the report and supporting documents used approaches consistent with the standard for a well-designed report in that the design and scope were clear and the assumptions were explicitly identified, as well as reasonable and consistent. For example, the assumptions that underpin the design were generally reasonable and consistent. 
Although the major constraints were generally identified and stated, some of the constraints could have been more explicitly discussed. For example, the report and supporting documents did not discuss the constraints imposed by the unavailability of data. RAND officials told us that unavailability of data from the military services, including information on exceptions and waivers, prevented them from completing in-depth analyses of the effect of increased tour lengths on factors such as career development. However, a discussion of these constraints and their impact on the scope of analysis was not included in the report or supporting documents. We also determined that the OSD report and supporting documents used approaches consistent with the standard for a well-executed report in that the models used to support the analyses, the assumptions in the models, and the data used were appropriate for the defined purposes of the study. In addition, the OSD report notes limitations related to using models to predict changes resulting from new policies—such as the difficulty of forecasting the number of moves saved as a result of increasing time-on-station. Although we determined that the OSD report and supporting documents used approaches consistent with the standard for a well-executed report, some aspects of the report could have been strengthened to better explain the full context of the study's results. For example, the report to Congress did not include information related to RAND's model and modeling methodology. In addition, the potential impact of data limitations related to the use of survey data could have been better explained. For example, the RAND study relied in part on results from the Defense Manpower Data Center's Status of Forces Active Duty Survey of DOD servicemembers, and the OSD report cites these results. 
Although we have typically observed the response rate to the Status of Forces Active Duty Survey to be low, neither RAND nor OSD included any discussion of the response rate, or of the potential effect it may have had on the survey results. Finally, we also determined that the report and supporting documents used approaches consistent with the standard for a well-presented report in that the results were presented in a clear, timely, accurate, and concise manner and were relevant to Congress. For example, the analysis was well documented and the conclusions were logically derived from the analysis. Although some of the analyses and results were not present in the report to Congress, we determined that the analysis and results contained in the associated RAND study were sufficient to meet the standard. Conclusions Effective management of the PCS program is important to support DOD's ability to reassign military personnel to new locations, while reimbursing servicemembers for the allowable expenses of moving their households. In addition, efficient use of PCS budgetary resources, like other components of military compensation, is important given the federal government's continuing fiscal challenges. As DOD officials continue to manage potential budget reductions, including implementation of sequestration, the PCS program is likely to be a continued focus for finding efficiencies and cost savings. However, DOD does not have consistent and complete data on PCS costs and moves and, therefore, cannot publish this information in the services' budget justification materials and provide it to decisionmakers in Congress and DOD. Furthermore, DOD does not conduct periodic evaluations of the PCS program and is not in a position to identify and evaluate changes that may be occurring over time in PCS per-move costs and the factors driving such changes, nor is it in a position to take steps to manage and control cost growth. 
In addition, DOD does not have complete and consistent data on waivers and exceptions—information that is needed to determine the military services' performance in meeting time-on-station length requirements. OSD stated in its September 2014 report to Congress that it is planning to take actions aimed at extending servicemembers' time-on-station—actions that OSD believes could reduce PCS costs. However, in the absence of more complete and consistent data on both PCS costs and the use of exceptions and waivers, DOD does not have the information it needs for evaluating whether the implementation of its planned actions are effective in extending time-on-station lengths and reducing PCS costs. Recommendations for Executive Action To improve the availability of information needed for effective and efficient management of the PCS program, including program costs and time-on-station requirements, we recommend that the Secretary of Defense take the following four actions: Direct the Under Secretary of Defense (Comptroller), in coordination with the military services, to improve the completeness and consistency of PCS data in service budget materials. This action should include revising existing guidance on the reporting of non-temporary storage costs, and clarifying existing guidance on the presentation of other PCS data. Direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the military services, to complete periodic evaluations of whether the PCS program is efficiently supporting DOD's requirements for assigning military personnel to new locations while reimbursing servicemembers for allowable expenses incurred during PCS moves. These periodic evaluations should identify changes in PCS per-move costs over time, factors driving such changes, and steps that could be taken to manage and control cost growth. 
Direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the military services, to improve the completeness and consistency of data on exceptions used for PCS moves that occur prior to established time-on-station lengths. This action should include clarifying existing guidance with regard to how the services collect, maintain, and report data on exceptions for use in evaluating performance in meeting time-on-station requirements and addressing challenges related to the services’ abilities to collect, maintain, and report exceptions data. Direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the military services, to improve the completeness and consistency of data on waivers used for PCS moves that occur prior to established time-on-station lengths. This action should include establishing guidance for the military services to collect, maintain, and report data on waivers for use in evaluating performance in meeting time-on-station requirements and addressing challenges related to the services’ abilities to collect, maintain, and report waiver data. Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In its written comments, DOD concurred with three of our recommendations and partially concurred with one. DOD’s comments are reprinted in appendix VII. DOD also provided technical comments that we considered and incorporated as appropriate. In regard to our first recommendation—to improve the completeness and consistency of PCS data in service budget materials—DOD concurred, adding that the Office of the Secretary of Defense (Comptroller) will convene a working group with the military services to review and revise, as necessary, the current budgetary reporting requirements for the PCS program. 
This action could meet the intent of our recommendation if it results in more complete and consistent PCS budget data; however, DOD did not provide information on the timeline for convening the working group. In regard to our second recommendation—to complete periodic evaluations of the efficiency of the PCS program—DOD partially concurred, noting that it agreed with the recommendation except for the use of the phrase “fully reimbursing servicemembers.” DOD suggested that we remove the word “fully” to reflect that some travel allowances, such as temporary lodging expense and temporary lodging allowance, are not intended to fully reimburse servicemembers, but are expressly intended to offset cost. Upon further consideration, we agree that both temporary lodging and dislocation allowances are defined in guidance as intended to partially offset costs incurred by servicemembers during a PCS move. As a result, we have removed the word “fully” from the recommendation. DOD’s comments did not provide information on the timeline or specific actions it plans to take to implement our recommendation. We continue to believe that without periodic evaluations of the efficiency of the PCS program, DOD will not have an analytical basis for identifying changes in PCS per-move costs over time and the specific factors associated with such changes. It may also be difficult for DOD to identify opportunities where efficiencies could be realized in the PCS program, without significantly impacting program performance or servicemember morale. Implementing our recommendation to complete periodic evaluations of the efficiency and effectiveness of the PCS program could better position DOD to identify and take steps for managing and controlling PCS cost growth. In regard to our third and fourth recommendations—to improve the completeness and consistency of data on exceptions and waivers used for PCS moves that occur prior to established time-on-station lengths— DOD concurred. 
DOD’s comments did not provide information on the timeline or specific actions it plans to take to implement our recommendations. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Under Secretary of Defense (Comptroller); the Secretaries of the Air Force, the Army, and the Navy; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Appendix I: Scope and Methodology To evaluate the extent to which Permanent Change of Station (PCS) costs have changed from fiscal years 2001 through 2014, and the factors that caused any changes, we obtained and analyzed relevant Department of Defense (DOD) and service budget materials containing information on PCS moves and costs. These budget materials report the services’ actual obligations for PCS costs, along with the associated number of servicemember moves. For our analysis, we divided the cost of PCS moves by the number of servicemembers who were moved in order to calculate the per-move costs. We identified criteria in DOD guidance that prescribes a budget and accounting classification that is to be used for preparing budget estimates, including the budget materials we reviewed, accounting standards that emphasize the need for managers to have relevant and reliable cost information to assist Congress and executives in making decisions about allocating federal resources. We assessed the availability, reliability, and consistency of PCS cost and move data against those criteria. 
We met with officials at the Office of the Under Secretary of Defense (Comptroller) and the military services' budget offices to learn about how these data were generated, presented, and used. We also discussed with these officials the reliability and comparability of these data. The services use an index prescribed in DOD guidance (DODFMR, vol. 2A, chpt. 1 (Sept. 2014)) to adjust for inflation when preparing PCS budget estimates to support the annual President's budget request to Congress. We determined that summary PCS data for fiscal years 2001 through 2014 were reliable for the purposes of analyzing and comparing per-move costs for the six major types of PCS moves and across the services. However, we determined that detailed cost data published in the services' budget materials prior to fiscal year 2010 were not sufficiently reliable for the purposes of reporting trends in specific cost categories by officers and enlisted personnel, because these data were generated using methods that the services subsequently revised, and because DOD's process for recording obligations changed in fiscal year 2009 to require that obligations be recorded at the time PCS orders were issued, rather than at the time PCS moves occurred. We determined that the services' detailed data for fiscal years 2010 through 2014 were sufficiently reliable for the purposes of our analysis because (1) service officials confirmed that this level of data would be sufficiently reliable for the purposes of our analysis; (2) these data are the actual obligations reported by each of the services for PCS costs, which is what the services anticipate spending on PCS during the period of availability for the appropriation; and (3) we validated the totals of the services' detailed data against the summary data in the DOD-wide budget materials. 
We analyzed the detailed cost data for these years to identify trends across traveler type (that is, officers and enlisted personnel, and their dependents) and among specific cost categories (that is, household goods shipments and temporary lodging expenses). We were not able to calculate per-move costs for all the PCS cost categories because of the lack of data on servicemember moves for certain cost categories for certain services (see 37 U.S.C. § 474, et seq.). We interviewed Office of the Secretary of Defense (OSD) and service officials, and we asked them to identify any program or policy changes that would have affected PCS costs, as well as any external factors—such as decisions to change the number of overseas locations—that may have influenced PCS costs during this timeframe. We also reviewed the results of our analysis of PCS moves and costs, identified specific areas of relatively large increases or decreases over time, and discussed these changes with OSD and service officials to obtain additional context for the changes we identified. To evaluate the extent to which military personnel are meeting time-on-station requirements by either reaching minimum time-on-station lengths or receiving exceptions and waivers to make a PCS move earlier than planned, we obtained and analyzed available data from the military services on time-on-station lengths and associated waivers and exceptions. We asked each of the services to provide the average and median time-on-station for officer and enlisted personnel for operational and rotational moves. We also asked for these data to be broken out by rank and job specialty, and for data on the number of moves that required waivers and exceptions. The services used existing databases—the Army's Total Army Personnel Database, the Navy's Enlisted Assignment Information System and Officer Assignment Information System, the Air Force's Military Personnel Data System, and the Marine Corps' Total Force Data Warehouse—to produce summary time-on-station data. 
To assess the reliability of the databases, we reviewed policies and procedures related to the respective databases, and we interviewed agency officials knowledgeable about these data. Data the services provided were not consistent, and we were generally not able to make direct comparisons among the services. However, we determined that the summary data were reliable for purposes of reporting on time-on-station for each of the services. We also interviewed pertinent officials within the Office of the Under Secretary of Defense for Personnel and Readiness and the military services to discuss time-on-station data and policies. We reviewed DOD and service guidance related to time-on-station lengths, exceptions, and waivers and evaluated current practices based on this guidance. We also reviewed historical changes to the guidance dating back to 2001 to identify adjustments that have been made to time-on-station lengths for assignments both overseas and in the United States. To determine the extent to which OSD's September 2014 report on time-on-station addressed the elements identified in Senate Report 112-196, we used a scorecard methodology. We created a checklist of elements, and two analysts independently compared the elements with the OSD report to determine the extent to which the study met the congressional direction. This scorecard methodology assigns a rating of "addresses" if the report engaged with all elements of the relevant congressional direction, even if specificity and details could be improved upon; a rating of "partially addresses" if the report did not include all of the elements of the congressional direction; and a rating of "does not address" when elements of congressional direction were not explicitly cited or discussed, or when any implicit references were either too vague or too general to be useful. We also interviewed pertinent officials within OSD to discuss the report. 
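The scorecard rating logic described above can be sketched as a small decision function. This is a simplified illustration of the methodology, not GAO's actual worksheet; the element names and checklist flags are hypothetical:

```python
def rate_element(cited: bool, covers_all_parts: bool, useful: bool) -> str:
    """Assign a scorecard rating to one element of congressional direction.

    cited           -- the report explicitly cites or discusses the element
    covers_all_parts -- the report includes all parts of the element
    useful          -- the discussion is not too vague or general to be useful
    """
    if not cited or not useful:
        return "does not address"
    if not covers_all_parts:
        return "partially addresses"
    return "addresses"

# Hypothetical checklist results for the four elements of Senate Report
# 112-196 (flags are illustrative; per the report, OSD addressed all four).
elements = {
    "plan to increase tour length": (True, True, True),
    "impact analysis":              (True, True, True),
    "mitigation recommendations":   (True, True, True),
    "cost savings identification":  (True, True, True),
}
ratings = {name: rate_element(*flags) for name, flags in elements.items()}
```

In the methodology, two analysts would apply such a checklist independently and reconcile any differences before a rating is assigned.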
To guide our assessment of the OSD report, we identified generally accepted research standards for the design, execution, and presentation of findings that define a sound and complete study, and we used a scorecard methodology to determine the extent to which the report used approaches consistent with these standards. To define the set of standards, we reviewed and adapted generally accepted research standards from prior GAO work that reviewed DOD mobility requirements studies. The set of standards we defined were categorized into three overarching areas—design, execution, and presentation—with specific components for each of the areas that determined whether or not a standard was met (see appendix VI for a list of the standards we used). Our analysis of the report also considered the report's supporting documentation, which included a preliminary report submitted to Congress in March 2014, a RAND study that OSD contracted to help write its report (OSD had the study in draft form when creating its report), and contract documents related to the RAND study. Four specialists within GAO's Applied Research and Methods team with collective backgrounds in the areas of economics, statistical modeling, survey methods, and research methods then evaluated the OSD report and supporting documentation against the defined standards. Based on the specialists' preliminary reviews, the GAO team followed up with requests for additional information and clarification from OSD and RAND. The specialists then discussed and reconciled any disagreements within their evaluations to determine the extent to which the report conformed to the three overarching areas and components of the areas. For reporting purposes, the specialists determined that qualitative assessment ratings, rather than numeric ratings for each individual standard, provided the best explanation of the nuances of the analysis and findings.
We conducted this performance audit from September 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Permanent Change of Station Moves and Costs for Move Categories

Our analysis of Permanent Change of Station (PCS) data for the six Department of Defense (DOD) move categories shows that from fiscal years 2001 through 2014, accession and separation moves constituted 58 percent of PCS moves and about 24 percent of PCS costs. At the same time, operational and rotational moves accounted for 34 percent of PCS moves and 66 percent of PCS costs. Training and organized unit travel together constituted 8 percent of moves and costs (see fig. 5). As stated above, annual PCS per-move costs for operational and rotational moves were higher, on average, than for accession and separation moves. For example, an accession move over this time period cost $2,296 on average, and a rotational move cost $13,238. Per-move costs for training and organized unit moves were lower than per-move costs for rotational and operational moves and higher than per-move costs for accession and separation moves (see table 4). Service officials stated that rotational travel is the most costly type of move because it involves transoceanic travel, with personnel and cargo moving longer distances than for other types of moves. In contrast, accession moves—when personnel are moved to their first duty station—generally involve travel within the continental United States.
Also, servicemembers making an accession move are lower ranked and typically younger, and therefore less likely to be accompanied by dependents, than are servicemembers making other types of moves. While servicemembers making separation moves are generally more senior and older than servicemembers making accession moves, DOD and service officials stated that the per-move costs for these moves are similar to those for accession moves because many servicemembers making a separation move choose to remain in the same geographic area as their final duty station, and thus these moves incur minimal costs. DOD officials stated that the post-9/11 build-up of the military forces and the subsequent drawdown in recent years led to a large number of accessions and separations, as servicemembers were recruited for and subsequently separated from the military.

Appendix III: Permanent Change of Station (PCS) Per-Move Costs in Specific Cost Categories

Our analysis of detailed Permanent Change of Station (PCS) data for fiscal years 2010 through 2014 shows differences in per-move costs for officers as compared with enlisted personnel in specific cost categories (see table 5). We calculated annual average costs by specific cost category (which are described in table 6) and determined the percentage change in the cost categories over this time period. We further separated these cost data for officers and enlisted personnel in each service. Detailed costs in certain categories were not available for some of the services, and we discuss these data limitations in this report. Also, as noted in this report, PCS per-move costs generally were decreasing during this time period.

Appendix IV: Time-on-Station Exceptions Specified in Department of Defense Instruction 1315.18

Department of Defense Time-on-Station Exceptions
1. Servicemembers are reassigned to an overseas or sea tour.
2. Servicemembers in sea-intensive skills are assigned from shore to sea duty, are required to complete a minimum of 2 years of
3. Servicemembers are accessed, reassigned to a different duty station for initial skill training, or are separated.
4. Servicemembers are reassigned to a different duty station for training or educational purposes.
5. Moves resulting from major weapon-system change or unit conversion (for example, a change from one type of aircraft to another, such as F-4 to F-15, or infantry to mechanized infantry). This exception shall not cover moves associated with replacing a servicemember selected for a new weapon system or unit.
6. Servicemembers are permitted the option to retrain into a new specialty and location in conjunction with reenlistment, in which
7. Servicemembers are permitted the option to select another location in conjunction with an established program, to keep military couples together, in which case a 1-year minimum shall apply.
8. Servicemembers are assigned to the Office of the Secretary of Defense, the Office of the Chairman of the Joint Chiefs of Staff, or a Defense Agency where the tenure is limited by statute or the provisions of this Instruction to a shorter tour.
9. Servicemembers serving under DOD Directive 1100.9, Military Civilian Staffing of Management Positions in the Support Activities, which prescribes different assignments for management positions in the support activities.
10. Servicemembers are reassigned under Exceptional Family Member Programs or for humanitarian reasons.
11. Servicemembers are reassigned to a different duty station in preparation for a unit deployment/move.
12. Servicemembers who are being considered for reassignment are in their first enlistment.
13. Servicemembers in professional skills, such as doctors and lawyers, serving in assignments designated by the Secretary concerned for the purpose of validating professional credentials or for developing expertise in selected specialized skills before being assigned to independent duty without supervision.
14. Servicemembers disqualified for duty as a result of loss of security clearance, professional certification, nuclear certification, or medical qualification to perform, and where it has been determined that no vacant position exists within the limits of the same geographic location in which the servicemember may serve pending re-qualification or re-certification.
15. Members reassigned as prisoners, including assignments to and from confinement, or reassigned for the purpose of standing trial.
16. Members reassigned from patient status.
17. Members curtailed for the purpose of traveling outside of the travel restriction for pregnancy of the member or spouse, or reassigned for the purpose of receiving adequate medical care, including curtailments of female members from unaccompanied tours because of the lack of adequate obstetric care.
18. Members involved in incidents that cause serious adverse publicity or embarrassment for the United States Government, that may jeopardize the mission, or that indicate the member is a potential defector.
19. Members or their dependents are threatened with bodily harm or death and circumstances are such that military and civilian authorities are unable to provide for their continued safety. Appropriate investigative agencies (such as the Air Force Office of Special Investigations or the Army Criminal Investigation Command) and judge advocate offices shall verify the threats and circumstances.
20. Members complete or are eliminated from a training or education program.
21. Members reassigned on low-cost moves, as defined in enclosure 2 of DOD Instruction 1315.18.
22. The Secretary of Defense waives completion of a full tour of duty in a joint duty assignment, and the action would otherwise require a waiver of a time-on-station requirement.
23. Members rendered as excess as a result of unit inactivation, base closure or consolidation, or organization or staffing changes.

The Exceptional Family Member Program is for active duty servicemembers who have family members with special medical needs. When servicemembers are considered for assignment within the United States, consideration is given as to whether needed services, such as specialized pediatric care, are available through the military health system at the proposed location.

Appendix V: Time-on-Station Data and Trends for United States and Overseas Locations

This appendix discusses the availability of time-on-station data from the military services and our analysis of time-on-station trends, by service, for continental U.S. and overseas locations. We requested time-on-station data from each of the services and assessed the reliability of the data received. The availability of time-on-station data varied across services and ranks. The trends for available data are summarized by service below. For ease of reporting, we refer to assignments outside of the continental United States as overseas assignments. For more information on our methodology, see appendix I. The services provided us with data on the median and average time-on-station lengths. Based on our analysis of these data, we determined that the averages were consistently higher than the medians. We discussed our analysis with service officials, and they told us this was likely due to certain servicemembers remaining in one location for extended periods of time. Based on our analysis and conversations with service officials, we decided to report the median of these data, rather than the average. For purposes of consistency, we rounded all data to the nearest whole month.
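The reasoning behind reporting medians rather than averages can be illustrated with a short Python sketch. The tenure values below are hypothetical, not the services' actual figures; the point is that a few very long stays pull the mean upward while barely moving the median:

```python
from statistics import mean, median

# Hypothetical time-on-station lengths in months for ten servicemembers.
# Most move near a 36-month minimum; two remain in place far longer.
tenures = [34, 35, 36, 36, 37, 38, 38, 40, 96, 120]

print(f"mean:   {mean(tenures):.1f} months")    # pulled up by the two long stays
print(f"median: {median(tenures):.1f} months")  # robust to the long tail
```

Here the mean is 51.0 months while the median is 37.5 months, consistent with the pattern the services described: a minority of servicemembers remaining in one location for extended periods inflates the average.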
Army Time-on-Station

The Army provided us data on the median time-on-station for assignments in the continental United States and overseas for officer personnel for fiscal years 2006 through 2014, and for enlisted personnel for fiscal years 2009 through 2014. For overseas assignments, the Army was not able to separate out 12-, 24-, and 36-month tours for enlisted personnel, so we were not able to report on these data. For consistency, we reported data from fiscal years 2009 through 2014 when comparing enlisted and officer personnel, and by available year when discussing the personnel separately. Data from the Army show that from fiscal years 2009 through 2014, at least half of the enlisted personnel moves within the continental United States occurred at or before 38 months. For officers, the Army data show that at least half of the moves within the continental United States occurred at or before 34 months (see fig. 6). For overseas assignments, the Army reported that from fiscal years 2006 through 2014, at least half of the officer personnel time-on-station lengths met or exceeded the minimum. Specifically, Army data show that most officers on 12-month assignments moved at or after 13 months. For officers on 24-month tours, data show that at least half moved at or after 25 months. At least half of officers on 36-month tours moved at or after 36 months.

Navy Time-on-Station

The Navy provided us data on the median time-on-station for officer and enlisted personnel for assignments in the continental United States from fiscal years 2001 through 2014. The Navy was not able to provide data for overseas assignments. Data obtained from the Navy show that from fiscal years 2001 through 2014, at least half of the enlisted personnel moves within the continental United States occurred at or after the 36-month minimum specified in guidance. For officers, at least half of the moves within the continental United States occurred at or before 33 months (see fig. 7).
Air Force Time-on-Station

The Air Force provided us data on the median time-on-station for officer and enlisted personnel for assignments both in the continental United States and overseas. They also provided us data on average time-on-station for officers based on their career fields. Air Force data show that from fiscal years 2003 through 2014, at least half of the enlisted personnel moves within the continental United States occurred at or after 44 months. For officers, the Air Force data show that at least half of the moves within the continental United States occurred at or after 35 months (see fig. 8). These data include moves that occurred prior to the Air Force's increasing its minimum time-on-station requirement from 36 months to 48 months in 2009. Despite this policy change, actual time-on-station did not significantly change. Between fiscal years 2009 and 2014, median time-on-station increased from 50 months to 51 months for enlisted personnel. During the same time period, median time-on-station for officer personnel decreased from 43 months to 37 months. For overseas assignments, Air Force data show that from fiscal years 2003 through 2014 at least half of officers and half of enlisted personnel on 12-month assignments moved at or after 12 months. For 24-month tours, at least half of officers moved at or after 24 months, and at least half of enlisted personnel moves occurred at or after 24 months. For 36-month tours, at least half of officers moved at or after 36 months, and at least half of enlisted moves occurred at or after 37 months. Air Force data also show that time-on-station varied among officer career fields. For example, from fiscal years 2009 through 2013, officers ranked O-1 through O-3 with a title of Logistics Commander moved the most frequently, while those with a specialty of Operations Management / Command and Center moved least frequently.
Officers ranked O-4 through O-6 with a title of Aerospace Medicine moved the most frequently, while those with the Support Commander title moved least frequently. Air Force officials told us that time-on-station varied among career fields because some career fields have fewer personnel, which may lead to more moves—and shorter time-on-station lengths—in order to keep positions filled.

Marine Corps Time-on-Station

The Marine Corps provided us data on the median time-on-station for officer and enlisted personnel for assignments in the continental United States for fiscal years 2008 through 2014. The Marine Corps could not provide data separated by time-on-station length for personnel serving in overseas assignments. Data from the Marine Corps show that from fiscal years 2008 through 2014, at least half of the enlisted personnel moves within the continental United States occurred at or before 32 months. For officers, the Marine Corps data show that at least half of the moves within the continental United States occurred at or before 35 months (see fig. 9).

Appendix VI: GAO Generally Accepted Research Standards Checklist Used to Assess the Office of the Secretary of Defense's September 2014 Study

A. Design: Is the study well designed?
Are the study's objectives clearly stated?
Is the study's scope clearly defined?
Are the assumptions explicitly identified?
Are the assumptions reasonable and consistent?
Are the assumptions varied to allow for sensitivity analyses?
Are major constraints identified and discussed?
Are the scenarios that were modeled reasonable ones to consider?
Do the scenarios represent a reasonably complete range of conditions?

B. Execution: Is the study well executed?
Is the study's methodology consistent with the study's objectives?
Are the study's objectives addressed?
Were the models used to support the analyses appropriate for their intended purpose?
Were the data used valid for the study's purposes?
Were the data used sufficiently reliable for the study's purposes?
Were any data limitations identified, and was the impact of the limitations adequately explained?
Were any modeling or simulation limitations identified, explained, and justified?
Have the models used in the study been described and documented adequately?

C. Presentation of results: Are the results timely, complete, accurate, concise, and relevant to the client and stakeholders?
Do the results of the modeling support the report findings?
Does the report present an assessment that is well documented?
Are the study results presented in the report in a clear manner?

Appendix VIII: GAO Contact and Staff Acknowledgments

In addition to the individual named above, key contributors to this report were Tom Gosling (Assistant Director); James Ashley; Timothy Carr; Farrah Graham; Foster Kerrison; Amie Lesser; Amanda Manning; Ruben Montes de Oca; Terry Richardson; Ben Sclafani; Michelle A. Vaughn; Cheryl Weissman; and Michael Willems.
PCS involves moving military personnel to new locations and is a key tool used by the military services to fill assignments both in the United States and overseas. In fiscal year 2014, DOD obligated $4.3 billion for approximately 650,000 servicemember PCS moves. Senate Report 113-176 included a provision for GAO to report on aspects of the PCS program. This report evaluates the extent to which (1) PCS per-move costs have changed since 2001, (2) military personnel are meeting time-on-station requirements, and (3) OSD's September 2014 study on increasing time-on-station addressed the elements in Senate Report 112-196 and used approaches consistent with generally accepted research standards. GAO analyzed PCS cost and move data for fiscal years 2001 through 2014 using fiscal year 2014 dollars; obtained and analyzed available time-on-station data; reviewed OSD's September 2014 report to Congress on increasing time-on-station; and interviewed OSD and service officials. The Department of Defense (DOD) has experienced an overall increase in Permanent Change of Station (PCS) per-move costs since 2001. GAO's analysis of DOD budget data shows that average PCS per-move costs, after accounting for inflation, increased by 28 percent from fiscal years 2001 to 2014. However, GAO's review of the services' annual budget materials found that the services have not reported complete and consistent PCS data, thereby limiting the extent to which DOD can identify and evaluate changes occurring within the PCS program. For example, the services did not completely or consistently report budget data on non-temporary storage costs, temporary lodging expenses, or tour extension payments. Program changes and factors outside the program can affect PCS costs. The specific factors driving the growth in per-move costs are unclear, however, because DOD does not periodically evaluate whether the PCS program is efficiently supporting requirements to relocate personnel. 
DOD therefore is not in a position to identify and evaluate changes that may be occurring over time in PCS per-move costs, or to take steps to manage and control cost growth. DOD does not have information for determining whether personnel are meeting time-on-station requirements. DOD guidance specifies time-on-station lengths for U.S. and overseas locations and also allows for personnel to move prior to reaching these lengths if they qualify for an exception or obtain a waiver. However, DOD does not have complete or consistent data on the reasons why PCS moves occur prior to reaching specified lengths, because the services (1) do not maintain required data on their usage of exceptions and (2) do not have a requirement to maintain data on their usage of waivers. Moreover, availability of service data on time-on-station lengths is limited and varies by service. For example, each service has different years of available data. In addition, one service provided time-on-station data for officers and enlisted personnel separately, and these data covered different time periods. In its September 2014 report to Congress on increasing time-on-station, the Office of the Secretary of Defense (OSD) addressed the elements that were specifically identified in congressional direction. OSD also used approaches consistent with generally accepted research standards in preparing its report. Nonetheless, OSD could have included additional information, such as more explicitly discussing constraints and information about the model used to develop cost savings estimates, and thereby improved the utility of the report for decision makers. The report stated that DOD plans to take actions aimed at extending servicemembers' time-on-station, which OSD believes could reduce PCS costs. 
However, without more complete and consistent data on both PCS costs and the use of exceptions and waivers, DOD does not have the information it needs for evaluating whether the implementation of its planned actions will be effective in extending time-on-station lengths and reducing PCS costs.
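The per-move cost calculation that underlies comparisons such as the 28 percent increase can be sketched as follows. The obligation amounts, move counts, and deflator values below are hypothetical placeholders, not GAO's data; the sketch only shows the shape of the computation (convert to constant dollars, divide by the number of moves, compare years):

```python
def real_per_move_cost(obligations: float, moves: int, deflator: float) -> float:
    """Per-move cost in constant base-year dollars.

    deflator converts nominal dollars to base-year (e.g., fiscal year
    2014) dollars: real = nominal / deflator, with deflator = 1.0 in
    the base year.
    """
    return (obligations / deflator) / moves

# Hypothetical figures for two fiscal years.
cost_early = real_per_move_cost(obligations=3.0e9, moves=700_000, deflator=0.75)
cost_late = real_per_move_cost(obligations=4.3e9, moves=650_000, deflator=1.00)

pct_change = 100 * (cost_late - cost_early) / cost_early
print(f"${cost_early:,.0f} -> ${cost_late:,.0f} per move ({pct_change:+.0f}%)")
```

The same structure applies to any pair of years once a consistent price index is chosen; the substance of GAO's finding lies in the actual budget data, not in this arithmetic.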
Background

The multifamily housing finance market has three principal participants: (1) primary lenders, which originate mortgage loans; (2) secondary market institutions, which purchase mortgage loans from primary lenders; and (3) investors in securities issued by secondary market institutions that are backed by mortgage loans. All three participants contribute to the flow of funds to the multifamily borrower. Lenders originate mortgages, which they may either retain as an income-earning asset (an approach called portfolio lending) or sell to a secondary market institution. The sale of these mortgages provides the lender with funds to make additional loans. A secondary market institution, in turn, purchases a mortgage and may retain it as a portfolio asset or use the individual loan or a pool of loans as collateral for a security. Investors then buy these securities from a lender or secondary market institution. Multifamily mortgages differ from single-family mortgages in several ways. A multifamily property is a cash-generating asset, with rental income used to pay the multifamily mortgage, while single-family properties are not generally cash-generating assets. Many single-family mortgages are 30-year, fully amortizing mortgages, while most multifamily loans have terms of 5, 7, or 10 years with a balloon payment due at maturity. Most multifamily loans also include protection for the investor against borrower prepayment (using a prepayment premium or other limitation on prepayment), while single-family loans generally do not. In addition, multifamily mortgages have different risk characteristics. For example, it is harder to predict credit risk for multifamily mortgages than for single-family mortgages. Finally, securitizing multifamily loans (that is, packaging them into mortgage pools to support MBS) is more challenging because they are not as standardized as single-family loans.
For example, the multifamily loan pools that back MBS have varied loan terms while single-family securities have historically been backed by 15-year and 30-year mortgages. Fannie Mae and Freddie Mac were established to provide liquidity, stability, and affordability in the secondary market for both single- and multifamily mortgages. Their charters do not allow them to operate in the primary mortgage market by originating loans or lending money directly to consumers. Rather, they purchase mortgages that meet their underwriting standards from primary mortgage lenders, such as banks or thrifts, and either hold the mortgages in their portfolios or package them into MBS. Multifamily loans make up a small part of the enterprises' total loan purchases. According to FHFA's 2011 Annual Report to Congress, the enterprises purchased single-family mortgages with an unpaid principal balance of $879.0 billion and multifamily mortgages totaling $44.6 billion in 2011. According to a 1998 article, Fannie Mae and Freddie Mac both entered the conventional multifamily loan market in 1983 and were experiencing significant losses by 1991. For example, the article stated that in 1991, Fannie Mae's multifamily loans were 5.7 percent of all its loans, but multifamily charge-offs were 30.2 percent of its total charge-offs. Freddie Mac's 1991 losses were even greater. According to the article, its multifamily loans were 2.6 percent of all loans, but multifamily charge-offs were 51.4 percent of its total charge-offs. Due to these losses, Freddie Mac exited the multifamily market for 3 years starting in 1991. The same article noted that boom-and-bust cycles are common in the multifamily housing market due to the relative ease of entry into the industry. During periods of strong performance, new apartment supply increases, which leads to overexpansion and high vacancy rates. According to the authors, such a cycle contributed to the enterprises' losses in the late 1980s.
Fannie Mae currently participates in the multifamily mortgage finance market primarily through its Delegated Underwriting and Servicing (DUS®) program. Under this program, which was initiated in 1988, Fannie Mae approves lenders and delegates to them the authority to underwrite, close, and sell loans to the enterprise without its prior review. In exchange for granting this authority, DUS lenders share the risk of loss with Fannie Mae. The most common loss-sharing structures are standard DUS loss sharing and pari passu. The standard model has a tiered loss system, generally with the maximum lender loss capped at the first 20 percent of the original loan amount. Under the pari passu model, lenders share all losses on a pro rata basis with Fannie Mae (the lender assumes one-third of the loss and Fannie Mae two-thirds). A small portion of Fannie Mae's multifamily business comprises non-DUS deliveries, which typically are small balance loans or pools of seasoned loans (loans that have typically been in a financial institution's portfolio for at least 1 year and have a satisfactory repayment record). In 1994, Fannie Mae began securitizing DUS loans by creating DUS MBS, each of which is backed by a Fannie Mae guarantee to the investor of principal and interest. Typically, each DUS MBS pool contains one DUS loan, but can incorporate multiple DUS loans. Freddie Mac participates in the multifamily market by underwriting all of the loans it purchases. It purchases loans from a network of approved lenders, but completes the underwriting and credit reviews in-house. Freddie Mac also conducts negotiated transactions or purchases of seasoned loans. For a majority of its business, Freddie Mac sells a significant amount of multifamily credit risk, as defined by expected losses, to investors by issuing securities backed by its mortgages. In general, these securities, known as K-deals, are backed by pools of newly originated mortgages underwritten by Freddie Mac.
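The two DUS loss-sharing structures described above reduce to simple arithmetic. The sketch below is a simplification: it assumes a flat 20 percent cap in place of the actual tiered standard DUS schedule, so treat it as an illustration rather than Fannie Mae's exact formula:

```python
def standard_dus_lender_loss(loss: float, original_loan_amount: float,
                             cap_fraction: float = 0.20) -> float:
    # Simplified standard DUS model: the lender's exposure is capped at
    # the first 20 percent of the original loan amount. (The real
    # structure is tiered within that cap.)
    return min(loss, cap_fraction * original_loan_amount)

def pari_passu_split(loss: float, lender_fraction: float = 1 / 3):
    # Pari passu model: every dollar of loss is shared pro rata,
    # one-third to the lender and two-thirds to Fannie Mae.
    lender = loss * lender_fraction
    return lender, loss - lender

# A $3 million loss on a $10 million loan under each structure.
print(standard_dus_lender_loss(3_000_000, 10_000_000))  # lender loss capped at $2 million
print(pari_passu_split(3_000_000))                      # roughly $1 million / $2 million
```

The comparison shows why the two structures allocate risk differently: under the standard model the lender bears all of a small loss, while under pari passu the lender bears only a third of any loss, large or small.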
Loss-sharing arrangements such as Fannie Mae's DUS program and Freddie Mac's K-deal program do not exist in either enterprise's single-family business. In addition to Fannie Mae and Freddie Mac, the following entities participate in the multifamily housing financing marketplace:

Life insurance companies originate and hold in portfolio multifamily mortgages.

CMBS lenders originate multifamily loans that are packaged into CMBS, which are MBS backed by commercial rather than residential properties. Commercial properties include multifamily housing as well as retail, office, and industrial space.

Commercial banks and thrifts originate commercial and industrial loans, including loans secured by multifamily properties. They may retain these loans in their portfolios or sell them to the enterprises or other secondary market investors.

FHA insures multifamily loans originated by FHA-approved lenders for the construction, substantial rehabilitation, and acquisition and refinancing of apartments.

RHS has a guaranteed loan program for rural multifamily housing.

State and local housing finance agencies (HFA) are state or locally chartered authorities established to help meet the affordable housing needs of the residents of their states or localities. HFAs sell tax-exempt housing bonds, commonly known as Multifamily Housing Bonds, to investors to finance multifamily housing production.

Loan consortiums—which are organized by a group of commercial banks and savings institutions in a local housing market or at the state level to provide multifamily affordable construction and mortgage loans—are primary lenders for multifamily housing with less than 50 units.

The Enterprises' Multifamily Loan Activities Generally Increased and Delinquencies Remained Low

From 1994 through 2011, the multifamily loan activities of Fannie Mae and Freddie Mac generally increased, while delinquency rates remained relatively low.
During this period, the number of loans they purchased spiked in some years to meet goals for financing affordable housing. Fannie Mae has held a lower percentage of its loans in portfolio than Freddie Mac, but both enterprises have increased securitization activities in recent years partly in response to a mandate from the Department of the Treasury (Treasury) to reduce retained portfolios. Serious delinquency rates for the enterprises' multifamily loans were generally less than 1 percent from 1994 through 2011, but the unpaid principal balance on seriously delinquent loans rose considerably starting in 2008. For all of the analyses of Fannie Mae and Freddie Mac's purchases, we adjusted the dollar amounts for inflation to 2012 dollars. As a result, the numbers we present are unlikely to correspond to similar numbers previously reported by the enterprises in their public disclosures.

Multifamily Loan Purchases Peaked in Recent Years, with Some Fluctuation Tied to Housing Goals

From 1994 through 2011, the enterprises' multifamily loan activities generally increased. As shown in figure 1, the enterprises' annual purchases of multifamily loans (in terms of unpaid principal balances) dramatically increased starting in 2000, peaked in 2007 and 2008, and generally declined in the years following. Fannie Mae's purchases ranged from $6.3 billion in 1994 to $49.8 billion in 2007. Freddie Mac's purchases ranged from $885.5 million in 1994 to $25.5 billion in 2008. The enterprises' annual loan purchases increased dramatically in 2007 and 2008 as other participants exited the market during the financial crisis. The enterprises' multifamily activities (by number of loans acquired) varied over the period we reviewed, in some cases because the enterprises purchased additional loans to meet affordable housing goals. For example, Fannie Mae acquired a large number of loans in 2003 and 2007, and Freddie Mac in 2003.
According to Fannie Mae officials, the majority of these acquisitions were pools of seasoned multifamily loans purchased through negotiated transactions to meet affordable housing goals for the purchase of mortgages that served targeted groups such as low- and moderate-income households. Freddie Mac officials offered a similar explanation for the increase in their 2003 purchases. From 2003 through 2007, affordable housing goals were set as the percentage of the enterprises' total (single-family and multifamily) mortgage purchases. Increased activity in the single-family financing market in 2003 and 2007 (that is, more people buying and refinancing homes) meant that the enterprises needed to acquire more mortgages to meet affordable housing goals. As discussed in more detail later in this report, multifamily mortgages had a disproportionate importance for the housing goals because most multifamily housing serves targeted groups.

Fannie Mae Retained a Lower Percentage of Multifamily Loans Than Freddie Mac

From 1994 through 2011, Fannie Mae retained a lower percentage of its annual multifamily loan purchases in portfolio at acquisition than Freddie Mac (see fig. 2). From 1994 through 2003, the majority of Fannie Mae's multifamily loan purchases were packaged into MBS. The percentage of unpaid principal balance associated with these MBS ranged from 53 percent ($3.3 billion) in 1994 to 86 percent ($14.3 billion) in 1998. From 2004 through 2008, this trend reversed, with the majority of the unpaid principal balance of Fannie Mae's loan acquisitions being held in portfolio as whole loans. The percentages held in portfolio ranged from 50 percent ($10.8 billion) in 2004 to 82 percent ($41 billion) in 2007. Following the conservatorship, the majority of Fannie Mae's loan purchases were again packaged into MBS, with the 2011 data showing that 98.6 percent ($24.1 billion) of the unpaid principal balance of multifamily loans Fannie Mae acquired that year was securitized.
As previously discussed, Treasury required the enterprises to reduce their retained portfolios each year (starting in 2010) as a condition of agreements providing financial support. Prior to 2008, the majority of Freddie Mac’s multifamily business (in terms of unpaid principal balance) remained in its retained portfolio. Retained loans represented the majority of Freddie Mac’s multifamily business in every year from 1994 through 2007 except 2003. The percentage of unpaid principal balance retained ranged from 65 percent ($8.3 billion) in 2002 to 93 percent ($13 billion) in 2006. In 2003, the percentage of unpaid principal balance securitized was 51 percent ($9.3 billion), while the percentage retained was 45 percent ($8.1 billion). Bond credit enhancements constituted the remainder (4 percent) of its multifamily business in 2003. Separate and apart from the portfolio reduction requirement in the preferred stock purchase agreement, Freddie Mac started a new program in 2008, which it called K-certificates or K-deals, to securitize its loans and sell a significant portion of the credit risk associated with the loans. With the start of the K-deal program, Freddie Mac began categorizing multifamily loans it held in portfolio at acquisition as loans held for investment and loans held for sale. According to Freddie Mac officials, loans held for investment were those it planned to hold in its portfolio until maturity. Loans held for sale were those that Freddie Mac initially held in its portfolio but planned to include in a K-deal (that is, securitize) at a future time. According to officials, loans held for sale were almost always securitized. Table 1 shows the unpaid principal balance of loans held for investment and loans held for sale from 2008 through 2011. Loans held for sale had become the predominant loan type by 2010. The enterprises may hold their own MBS in their retained portfolio. 
Fannie Mae officials indicated that MBS it purchased were typically either resold in their original state or resecuritized with the purpose of making them a more suitable investment for a broader range of participants. They noted that their goal was to hold MBS purchases temporarily and to operate in a manner that is consistent with the FHFA directive to reduce the size of the retained mortgage portfolio. Data on Fannie Mae’s multifamily MBS portfolio balance for 2010 through 2011 showed that its portfolio grew from $9.5 billion at the beginning of 2010 to $28.3 billion at the end of 2011. The majority of this growth was due to the securitization of multifamily whole loans previously held in its portfolio with an unpaid principal balance of $18.7 billion. Fannie Mae started this initiative in the fourth quarter of 2010 in response to Treasury’s requirement to reduce its retained portfolio. Its sales and purchases of multifamily MBS in the secondary market were about the same. For example, during 2011 it purchased and sold MBS totaling about $11 billion. Data on Freddie Mac’s purchases of its own MBS in 2010 and 2011 show that the enterprise purchased $382 million and $472 million, respectively. These amounts were about 4 percent of the MBS issued each year. Enterprises Generally Financed Large Multifamily Properties in Large Metropolitan Areas Size of Multifamily Properties Financed The majority of Fannie Mae and Freddie Mac’s purchases of multifamily loans, as measured by unpaid principal balance, were for properties with more than 50 units (see fig. 3). For example, from 1994 through 2011, Fannie Mae acquired $292.0 billion of multifamily loans for properties with more than 50 units, compared to $56.2 billion of loans for properties with 5 to 50 units. Similarly, Freddie Mac acquired $199.1 billion of multifamily loans for properties with more than 50 units, compared to $15.4 billion of loans for properties with 5 to 50 units. 
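The portfolio growth described above can be checked with a simple roll-forward. The beginning and ending balances and the securitized whole-loan amount (in billions of dollars) come from the report; the small residual for net purchases, sales, and runoff is inferred here rather than reported.

```python
# Roll-forward sketch of Fannie Mae's multifamily MBS portfolio balance,
# in billions of dollars. Beginning/ending balances and the securitized
# whole-loan amount are from the report; net_other is an inferred residual.
begin_2010 = 9.5                  # balance at the beginning of 2010
securitized_whole_loans = 18.7    # portfolio whole loans converted to MBS
net_other = 28.3 - (begin_2010 + securitized_whole_loans)

end_2011 = begin_2010 + securitized_whole_loans + net_other
```

The near-zero residual is consistent with the report's observation that secondary-market purchases and sales of multifamily MBS were roughly offsetting.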
While the majority of the unpaid principal balance was on loans for properties with more than 50 units, the enterprises acquired more loans for properties with 5 to 50 units over this period. For example, from 1994 through 2011 Fannie Mae purchased 62,353 multifamily loans for properties with 5 to 50 units and 33,178 loans for properties with more than 50 units. Similarly, Freddie Mac purchased 20,900 multifamily loans for properties with 5 to 50 units and 15,817 loans for properties with more than 50 units. Both enterprises purchased the highest number of loans for properties with 5 to 50 units in 2003. According to FHFA officials, the enterprises purchased a large number of loans for smaller properties that year because they received “bonus points” toward meeting their affordable housing goals when they purchased these mortgages. Fannie Mae also purchased a large number of loans for properties with 5 to 50 units in 2007. As noted previously, these purchases helped it meet its affordable housing goals that year, although the bonus points were no longer in effect. Freddie Mac purchased the vast majority of its loans for smaller properties in 2001 through 2003, when the bonus points were in effect. Since 2005, Freddie Mac has purchased very few multifamily loans for smaller properties. According to Freddie Mac officials, the enterprise is not currently active in the small multifamily loan market in part because of the credit characteristics of these loans. While they are considered by definition to be "multifamily" properties because they have five or more units, these transactions generally need to be underwritten more like single-family loans. Freddie Mac officials noted that because the cost of underwriting is essentially the same for loans on larger and smaller properties, purchasing loans on small income properties on an individual loan basis is less cost-effective than purchasing individual loans on larger properties. 
We discuss the enterprises’ role in this market segment in more detail later in this report. The majority of the multifamily loans that the enterprises purchased, as measured by unpaid principal balances, were for loans with balances at acquisition of $5 million to less than $50 million (see fig. 4). From 1994 through 2011, Fannie Mae purchased $203.5 billion of multifamily loans in this category, while Freddie Mac purchased $155.1 billion of multifamily loans of this size. While the majority of the unpaid principal balance was on multifamily loans with balances at acquisition of $5 million to less than $50 million, the enterprises acquired more multifamily loans with balances at acquisition of less than $5 million. For example, from 1994 through 2011 Fannie Mae purchased 81,156 multifamily loans with balances at acquisition of less than $5 million and 15,425 multifamily loans with larger loan balances. Similarly, Freddie Mac purchased 26,944 multifamily loans with balances at acquisition of less than $5 million and 10,202 multifamily loans with larger loan balances. The majority of the multifamily loans that the enterprises purchased were loans for properties in the largest metropolitan areas. We used 2010 U.S. Census Bureau data to identify the 25 largest metropolitan statistical areas (MSA) by population. Data from the 2010 American Community Survey show that 56.3 percent of the nation’s multifamily housing was located in these 25 MSAs. For Fannie Mae, 69 percent of the unpaid principal balance of multifamily loans it purchased from 1994 through 2011 was for properties located in these 25 MSAs (see fig. 5). Further, from 1994 through 2011, the loans that Freddie Mac purchased for properties in the 25 largest MSAs constituted 68 percent of its unpaid principal balance (see fig. 6). For information on both enterprises’ purchases by metropolitan area and state, see appendixes II and III, respectively. 
In terms of unpaid principal balance, over half of the multifamily loans that Fannie Mae purchased and almost half of the multifamily loans that Freddie Mac purchased from 1994 through 2011 were loans with terms of 60, 84, and 120 months (5, 7, and 10 years). This is in contrast to single-family mortgages purchased by the enterprises, many of which were 30-year mortgages. Figure 7 shows that with some exceptions the enterprises annually purchased more multifamily loans with terms of 120 months than any other category. The exceptions were 1994-1995 and 2003 for Fannie Mae and 1994 and 2001-2007 for Freddie Mac. During these years, loans with terms longer than 120 months generally constituted the largest category. According to Fannie Mae officials, the enterprise has periodically purchased pools of seasoned loans with loan terms greater than 120 months. The enterprises acquired multifamily loans for a variety of asset classes—traditional rental, student, senior, manufactured, and cooperative housing—but the majority of the multifamily properties that they financed from 1994 through 2011 were traditional rental properties. As shown in figure 8, 87.7 percent of Fannie Mae’s multifamily mortgage purchases during this period and 91.6 percent of Freddie Mac’s multifamily mortgage purchases were loans for traditional rental housing. Enterprise officials explained that this occurred because the majority of the multifamily mortgage market is concentrated in traditional rental housing. Enterprises Increased Their Multifamily Market Share and Generally Met Affordable Housing Goals Fannie Mae and Freddie Mac have played an increasingly large role in the multifamily marketplace since the beginning of the financial crisis in 2007, as evidenced by the increase in their market share. Although empirical research on the enterprises’ role in multifamily housing financing is limited, the literature we reviewed generally stated that the enterprises have provided liquidity and market stability. 
The enterprises met their affordable housing goals in most years, with multifamily activities greatly contributing to their fulfillment. Enterprises’ Multifamily Market Share Has Grown Since the Financial Crisis Although the enterprises historically have played a smaller role in financing multifamily housing than single-family housing, their role in the multifamily housing financing marketplace has grown since the financial crisis began in 2007, as evidenced by the increase in their market share. We relied on two sources of data on market share in the multifamily housing financing marketplace from 2005 through 2011: (1) data from the Federal Reserve on all multifamily mortgage debt outstanding and (2) MBA data on sources of financing for mortgages originated in a given year. Our analysis of Federal Reserve data shows that as of the end of 2011, the enterprises held or guaranteed almost 34 percent of the outstanding multifamily mortgage debt compared to about 24 percent in 2005 (see fig. 17). According to MBA data on the financing of loans by investor type, the enterprises financed less than 30 percent of annual multifamily loans originated before 2008 (see fig. 18). Their share of the multifamily market increased to 86 percent in 2009, but decreased to about 57 percent in 2011 as other participants reentered the market. These data are based on MBA’s annual survey of large institutional lenders, which it defines as firms with a dedicated commercial/multifamily origination platform. While the enterprises’ role in the multifamily housing finance marketplace was about equal to that of the combined total of originations for life insurance companies and CMBS lenders before the financial crisis (2005 through 2006), Fannie Mae and Freddie Mac dominated the marketplace during the height of the crisis (2008 through 2009) as life insurance companies and CMBS lenders significantly reduced their presence in the market. 
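The market-share figures above follow from a simple ratio over the Federal Reserve debt-outstanding data. In this sketch the dollar totals are hypothetical illustrations, chosen only so the ratio matches the reported share of roughly 34 percent at the end of 2011.

```python
# Sketch of the market-share calculation: enterprises' held-or-guaranteed
# multifamily mortgage debt divided by total multifamily mortgage debt
# outstanding. The dollar totals (in billions) are hypothetical.
def enterprise_share(held_or_guaranteed, total_outstanding):
    """Enterprises' share of multifamily mortgage debt outstanding."""
    return held_or_guaranteed / total_outstanding

share_end_2011 = enterprise_share(288.0, 847.0)  # hypothetical totals
```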
Data from the enterprises, ACLI, and Trepp show that by 2008, the enterprises’ combined purchases were almost $60 billion compared with almost $4 billion for life insurance companies and CMBS lenders combined (see table 2). The data from ACLI and Trepp also showed that life insurance companies and CMBS lenders started reentering the market in 2010. Based on our reviews of existing literature and interviews with stakeholders, the enterprises have provided access to multifamily financing although some view their role in the small-loan market segment as limited. Our review of the available literature on the role the enterprises have played in the secondary market for multifamily housing revealed few studies on this issue, partly due to the long-standing emphasis on Fannie Mae and Freddie Mac’s single-family portfolios. Additionally, the studies we found generally lacked both empirical research and a balanced analysis of the benefits and costs of the enterprises. This was driven, in part, by the lack of publicly available data on the enterprises’ multifamily activities and on the multifamily housing finance marketplace as a whole. However, the available literature we reviewed included statements that the enterprises have provided liquidity, stability, and affordability. For example, five of the seven studies we reviewed stated that the enterprises have helped ensure a robust financing system for multifamily housing by providing vital liquidity and counter-cyclical stability, but did not include empirical evidence supporting these statements. According to these studies, Fannie Mae and Freddie Mac have provided capital to the secondary mortgage market for multifamily financing during all economic climates, including times of credit market stress. 
One study cited certain instances in which the enterprises had provided liquidity (in the wake of the currency crisis in 1998, after the 2001 recession, and in 2007 through 2008 when purely private sources withdrew or charged untenable interest rates). Most mortgage finance and housing policy groups with whom we spoke generally agreed that the enterprises had provided liquidity and counter-cyclical stability. They agreed that the enterprises were a major source of funding for multifamily projects during the recent financial crisis. According to one group, the flight of traditional providers of private capital (such as banks and life insurance companies) would have been more devastating to renters had it not been for the enterprises’ presence. The five studies we cited previously also stated that the enterprises generally promoted access to affordable rental housing. As discussed in more detail later in this report, the enterprises must meet affordable housing goals for targeted groups such as low- and moderate-income households. One study stated, “the government’s involvement in ensuring that capital is available during times of credit contraction is a critical factor in mitigating fluctuations in the supply of market-rate and affordably priced rental housing.” All five studies discussed the role the enterprises have played in the LIHTC program. For example, one housing policy group wrote that the enterprises have acted both as equity investors and purchasers of mortgages for affordable housing developments financed by the LIHTC program. It noted that through their loan purchases, the enterprises facilitated 15-year, fixed-rate mortgages that were essential for these tax credits to be attractive to LIHTC investors. However, the studies also noted that with no income tax liability to shelter, Fannie Mae and Freddie Mac have withdrawn from the LIHTC investment market. According to FHFA, it instructed the enterprises to withdraw from the LIHTC market. 
Freddie Mac officials noted that, although the enterprise is no longer an active equity investor, it continues to purchase and guarantee mortgages that support LIHTC programs. While representatives of most of the groups we interviewed stated that Fannie Mae and Freddie Mac generally had played a role in providing access to affordable rental housing, they emphasized that the enterprises could do more. For example, one association told us that the affordable housing goal levels are generally set too low. In its comment letter on FHFA’s proposed rule on the 2010 and 2011 affordable housing goals, it wrote that Fannie Mae and Freddie Mac actually had been doing even less to finance what it called legitimate, affordable rental housing since conservatorship. The association provided as an example the experience of one of its members, a lending consortium whose funds came from a pool of 45 investors. One of the enterprises had been an investor since 1993, but recently had been the only major investor not to renew its commitment. According to this enterprise, FHFA has directed it to cease making certain types of investments and loans. In addition, our literature review and interviews indicated that the enterprises have played a limited role in financing small multifamily properties, which tend to have lower rents than larger properties. According to the 2010 American Community Survey, almost one-third of renters live in structures with 5 to 49 units (see fig. 19). Representatives from trade associations, housing policy groups, industry researchers, and a consumer advocacy group generally agreed on the limited role that Fannie Mae and Freddie Mac played in the small-loan market segment. For example, two trade associations stated that enterprise financing generally has not flowed outside major metropolitan areas, where there is a need for small-loan financing. 
In its comment letter on FHFA’s proposed rule on the 2010 and 2011 affordable housing goals, one of these trade associations stated that smaller-sized properties that are affordable to low- and moderate-income persons are the most underserved segment of the multifamily market, in large part because of low levels of enterprise activity in this market segment. As previously noted, small loans (those for properties with 5 to 50 units) make up a small percentage of the loans that the enterprises have purchased. According to Fannie Mae’s 2011 activity report, 22,382 of the 390,526 multifamily units that Fannie Mae financed (5.7 percent) were through small loans. Of these multifamily units, 76 percent were low-income or very low-income rental units. According to Freddie Mac’s 2011 activity report, the enterprise’s units financed through small loans comprised 0.7 percent of the multifamily units it financed (2,173 of 290,116 units). About 35 percent of these units were low-income or very low-income rental units. Although these levels are well below the enterprises’ participation in the larger-property market, officials at FHFA, Fannie Mae, and Freddie Mac contend that the enterprises’ small-loan activity levels are noteworthy because this segment of the market has been dominated by banks and thrifts—institutions that have greater familiarity with local needs. (For information on how the size of the loans that the enterprises purchased compares with the size of loans financed by other major participants in the multifamily housing financing marketplace, see app. IV.) While most studies we reviewed and individuals we interviewed stated that the enterprises played an important role in multifamily housing finance, two studies stated that private capital should play a larger role. 
The first study concluded that without the enterprises over the past 20 years, “a fully functioning, private debt financing market for multifamily housing would have existed, in the same way that fully private debt financing markets exist for office, retail, and industrial properties.” According to the study, the multifamily market is dependent on the enterprises because of a lack of competition among other lenders, derived from the enterprises’ unfair pricing advantages. Similarly, the second study stated that before the enterprises’ involvement, life insurance companies, pension funds, and banks supported a robust conventional multifamily lending market. According to the study, once the enterprises entered the multifamily market, the private sector had an increasingly difficult time competing with them because their charter provided the enterprises with certain advantages, such as pricing. The authors of the first study and other stakeholders we interviewed stated that the benefits conferred on Fannie Mae and Freddie Mac by their status as government-sponsored entities created a competitive advantage over other market participants, temporarily crowding them out of the market. In February 2012, FHFA released a strategic plan for the enterprises’ single- and multifamily operations during the next phase of conservatorship. In the plan, FHFA asked Fannie Mae and Freddie Mac to conduct a market analysis of the viability of their multifamily operations without government guarantees. FHFA also released a draft strategic plan for 2013 through 2017, which includes the strategic plan for conservatorship. This plan noted that the enterprises would be working to further standardize the process for securitizing mortgages. According to Fannie Mae and Freddie Mac officials, both enterprises have begun their market analyses and expect to meet FHFA’s deadline of December 31, 2012. According to FHFA officials, the focus of this initiative is single-family mortgages. 
Securitization for single-family mortgages has been more standardized than for multifamily mortgages, which have also employed a different process. For example, the typical multifamily MBS that Fannie Mae issues is backed by a single multifamily loan, while its single-family MBS are backed by numerous single-family loans. Currently, the enterprises hold the largest share of the multifamily market, just as they and other federal entities account for most of the single-family market. While markets are expected to eventually recover from the financial crisis, the future role of the enterprises is unknown due to a number of factors. However, our analysis of multifamily funding activity over time provides insight into the role the enterprises have played in the marketplace during periods in which the markets were relatively stable. For example, our analysis revealed that prior to the financial crisis, the enterprises generally financed about 30 percent of multifamily mortgages. Enterprises’ Multifamily Activities Are Important to Meeting Affordable Housing Goal Requirements Beginning in 2010, FHFA implemented the Housing and Economic Recovery Act of 2008 (HERA) by making significant changes to the housing goal framework, including establishing separate goals for the purchases of single- and multifamily mortgages. The Safety and Soundness Act required the enterprises to meet annual numeric goals for the purchase of mortgages serving targeted groups. Specifically, the act established three broad affordable housing goals for Fannie Mae and Freddie Mac: (1) a broad low- and moderate-income goal for families earning less than the area median income (AMI); (2) a geographically targeted goal for housing located in underserved areas, such as central cities and rural areas; and (3) a special affordable housing goal, which targets housing that is affordable to very low-income families and low-income families living in low-income areas. 
In HUD’s first rulemaking on the affordable housing goals, it defined underserved areas as census tracts with median income at or below 90 percent of AMI in metropolitan areas and 95 percent of AMI in nonmetropolitan areas, or high-minority areas (metropolitan census tracts in which at least 30 percent of households are minority and the tract median income does not exceed 120 percent of AMI). The special affordable goal targeted borrowers or renters earning no more than 60 percent of AMI or earning no more than 80 percent of AMI and residing in census tracts with median income at or below 80 percent of AMI. The Safety and Soundness Act required HUD to consider several factors in establishing these housing goals, including: (1) national housing needs; (2) economic, housing, and demographic conditions; (3) past performance on each goal; (4) the size of the corresponding primary mortgage market; (5) the ability of the enterprises to lead the industry, and (6) the need to maintain the sound financial condition of the enterprises. For the period 1993 through 2008, HUD considered past performance and the size of the corresponding primary market as the two primary factors when setting the goals. The enterprises could purchase both single- and multifamily mortgages to satisfy these goals. Starting in 1996, HUD established a dollar-based special affordable multifamily subgoal related to purchases of multifamily mortgages for properties affordable to very low-income tenants (no more than 60 percent of AMI) or in low-income neighborhoods and affordable to low-income tenants. See figure 20 for information on changes made to the affordable housing goals since they were first set in 1992. HUD oversaw the enterprises’ compliance with the housing goals through 2008. 
On July 30, 2008, HERA transferred the housing goal oversight function to FHFA. The Safety and Soundness Act, as amended by HERA, requires FHFA to consider the following when setting multifamily goals: national multifamily mortgage credit needs and the ability of the enterprises to provide additional liquidity and stability for the multifamily mortgage market; the performance and effort of the enterprises in making mortgage credit available for multifamily housing in previous years; the size of the multifamily mortgage market for housing affordable to low-income and very low-income families; the ability of the enterprises to lead the market in making multifamily mortgage credit available; the availability of public subsidies; and the need to maintain the sound financial condition of the enterprises. In establishing affordable housing goals under HERA, FHFA focused more on the role of the enterprises in the multifamily market given current market conditions and competitors’ roles and less on past performance. In August 2009, FHFA issued a final rule that kept many of the existing housing goals provisions, but revised the levels of the existing affordable housing goals downward in light of current market conditions. Then, in 2010, FHFA, implementing HERA, made significant changes to the goals framework, such as separating the goals for multifamily and single-family mortgage purchases. In its final rule on the affordable housing goals for 2010 and 2011, the agency also redefined the goal targets to reach lower-income groups and required the enterprises to report on their acquisition of mortgages involving low-income units in small (5 to 50 unit) multifamily properties. Further, FHFA prohibited the enterprises from crediting purchases of private-label securities, including CMBS, toward housing goals. On June 11, 2012, FHFA issued a proposed rule on the affordable housing goals for 2012 through 2014. 
FHFA is proposing to continue the existing structure, with revised single-family and multifamily housing goal benchmark levels for 2012, 2013, and 2014. The enterprises submit reports on their housing goal performance that include loan-level data, as well as an annual report at the end of each year. FHFA uses these data to determine official goal performance. If an enterprise fails to attain a goal, the FHFA director may require submission of a housing plan describing the specific actions that the enterprise will take to achieve the goal for the next year. The enterprises generally have met their affordable housing goals. According to HUD documents, the enterprises generally exceeded their affordable housing goals from 1993 through 2000, with their performance generally increasing during that time period. Official reports on goal performance for 2001 through 2009 show that Fannie Mae and Freddie Mac exceeded their goals from 2001 through 2007, but failed to meet some of the goals in 2008 and 2009 (see table 3). FHFA determined that the two goals both enterprises failed to meet in 2008 (the low- and moderate-income and special affordable goals) were infeasible due to structural changes in the market from 2006 through 2008, which were not anticipated in 2004 when HUD established the goals. Specifically, FHFA took into consideration such factors as tightened underwriting standards in the mortgage industry, the decreased availability of private mortgage insurance in the primary market, the increase in the share of single-family mortgages insured by FHA, and the fall in the issuance of goals-qualifying, private-label securities. Freddie Mac also did not meet the underserved areas goal in 2008. Although FHFA determined that the market conditions that made the other two goals infeasible made meeting the underserved areas goal more difficult for Freddie Mac, it did not declare this goal to be infeasible for Freddie Mac. 
But based on Freddie Mac's financial condition in 2008, FHFA did not require Freddie Mac to submit a housing plan. FHFA also declared that the underserved areas goal that both enterprises failed to meet in 2009 was infeasible. In making this determination, FHFA considered the same factors it took into account the previous year. Freddie Mac also did not meet the special affordable goal in 2009. FHFA determined that this goal was feasible for Freddie Mac, but in light of the near achievement of the goal, did not require a housing plan. As noted previously, both enterprises also had to meet a special affordable multifamily subgoal in 2001 through 2009. Fannie Mae and Freddie Mac exceeded this subgoal in each year except for 2009 (see table 4). FHFA declared each enterprise’s subgoal for that year to be infeasible after considering the collapse of the CMBS market and the financial condition of the enterprises. When the single-family and multifamily goals were combined, the enterprises’ multifamily activities were “goal rich,” meaning that purchasing multifamily mortgages had a disproportionate importance for the housing goals because most multifamily rental units are occupied by households with low and moderate incomes. For example, in 2008 the enterprises’ multifamily business, which represented 4.5 percent of the enterprises’ total unpaid principal balance financed, accounted for 32 percent of the units that met the low- and moderate-income goal, 27 percent of the units that met the underserved areas goal, and 39 percent of the units that met the special affordable goal (see table 5). For 2010, FHFA established two specific multifamily goals. The low- income multifamily goal was targeted at rental units in multifamily properties affordable to families with incomes no greater than 80 percent of AMI, and the very low-income multifamily subgoal was for units affordable to families with incomes no greater than 50 percent of AMI. 
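The “goal rich” point above can be made concrete by dividing multifamily’s share of goal-qualifying units by its share of total unpaid principal balance. The 2008 percentages come from the figures reported above (table 5); the concentration ratio itself is our illustration, not a statistic from the report.

```python
# "Goal richness" sketch using the 2008 shares reported above: multifamily
# loans were 4.5 percent of total UPB financed but a far larger share of
# goal-qualifying units. The concentration ratio is illustrative.
mf_share_of_upb = 0.045
mf_share_of_goal_units = {
    "low_and_moderate_income": 0.32,
    "underserved_areas": 0.27,
    "special_affordable": 0.39,
}

concentration = {goal: share / mf_share_of_upb
                 for goal, share in mf_share_of_goal_units.items()}
```

A ratio well above 1 for every goal shows why multifamily purchases contributed disproportionately when the single- and multifamily goals were combined.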
Both enterprises’ performance levels exceeded the two multifamily goal targets in 2010 and 2011 (see table 6). Enterprises’ Purchased Multifamily Loans Have Performed Relatively Well, but Regulators Identified Issues with Credit Risk Management The enterprises have purchased multifamily loans whose underwriting standards and loan performance compared favorably with those of other market participants. For example, from 2005 through 2010, the enterprises experienced lower default rates than the other major mortgage capital sources, with the exception of life insurance companies. To help offset some of their credit risks and increase the supply of affordable multifamily housing, the enterprises have risk-sharing programs with FHA and RHS, although these programs have involved relatively few loans. OFHEO and FHFA, through their examination and oversight of the enterprises, have identified a number of credit risk deficiencies since 2006. For example, they found deficiencies in Fannie Mae’s delegated underwriting and servicing program, its risk-management reorganization, and its information systems, and in Freddie Mac’s management of its lower-performing assets. Both enterprises are taking steps to address these deficiencies. Enterprises’ Multifamily Underwriting Standards and Serious Delinquency Rates Compared Favorably with Those of Other Sources of Multifamily Credit Based on underwriting standards and loan performance, the loans the enterprises purchased generally performed as well as, and often better than, those of other major sources of financing for multifamily housing. The major sources include life insurance companies, CMBS lenders, banks and thrifts, and FHA and RHS lenders. We compared the enterprises’ credit standards to those of CMBS lenders and life insurance companies, two major participants in the multifamily housing financing marketplace. 
From 2005 through 2011, the enterprises’ underwriting standards—as measured by median debt-service coverage and LTV ratios—were generally stricter than those of CMBS lenders. Higher debt-service coverage ratios and lower LTV ratios indicate lower risk. For example, from 2005 through 2011, the enterprises’ median debt-service coverage ratios were always higher than those of CMBS lenders, with the exception of Fannie Mae in 2007 (see table 7). Also, over this period, Fannie Mae’s LTV ratios were lower than those of CMBS lenders for every year except 2010, while Freddie Mac’s LTV ratios were lower than those of CMBS lenders in 3 years. When compared with life insurance company ratios, Fannie Mae’s median debt-service coverage ratios were lower for every year from 2005 through 2011, except for 2009. In contrast, Freddie Mac had higher median debt-service coverage ratios than life insurance companies for 3 of the 7 years. In addition, Fannie Mae had lower LTV ratios than life insurance companies for all years except 2008 and 2011, while Freddie Mac had higher LTV ratios than life insurance companies in all years from 2005 through 2011. We also obtained data on credit standards from six HFAs and three loan consortiums. The median debt-service coverage ratios for five of the HFAs were generally lower than those of the enterprises. For example, from 2005 through 2011, the median debt-service coverage ratios for the five HFAs ranged from 0.92 to 1.72. Likewise, the median debt-service coverage ratios for the three loan consortiums were lower than the ratios for both of the enterprises over this same period. When we compared the median LTV ratios of the six HFAs and the enterprises, we found that three HFAs generally had lower LTV ratios than both of the enterprises for all years from 2005 through 2011, two had LTV ratios that varied compared with those of the enterprises, and one HFA had LTV ratios that were higher than those of the enterprises. 
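The two underwriting measures used throughout this comparison can be expressed simply. The following sketch is illustrative only; the function names and sample figures are hypothetical and not drawn from the report's data:

```python
# Illustrative sketch of the two underwriting ratios discussed above.
# Function names and sample figures are hypothetical.

def debt_service_coverage_ratio(net_operating_income, annual_debt_service):
    """DSCR = property's net operating income / annual mortgage payments.
    Higher values indicate lower risk (more income cushion)."""
    return net_operating_income / annual_debt_service

def loan_to_value_ratio(loan_amount, property_value):
    """LTV = loan amount / appraised property value.
    Lower values indicate lower risk (more equity cushion)."""
    return loan_amount / property_value

# A hypothetical property earning $1.5 million against $1.2 million in debt service,
# financed with an $8 million loan on a $10 million appraised value
dscr = debt_service_coverage_ratio(1_500_000, 1_200_000)   # 1.25
ltv = loan_to_value_ratio(8_000_000, 10_000_000)           # 0.80
```

A DSCR of 1.25 means the property generates 25 percent more income than is needed to service the debt; an LTV of 0.80 means the borrower holds 20 percent equity.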
Data from two of the loan consortiums showed that one had median LTV ratios that were lower than those of both enterprises for all years from 2005 through 2011. The second had ratios that were higher than Fannie Mae’s, but lower than Freddie Mac’s in 3 of the 7 years. For more information on the credit standards of these HFAs and loan consortiums, see appendix V. When comparing the performance of multifamily loans financed by Fannie Mae and Freddie Mac with those financed by other major sources of multifamily credit, we were not always able to make direct comparisons because market participants track delinquencies in different ways. We generally found that from 2005 through 2010, the enterprises experienced lower default rates than the other major mortgage capital sources, with the exception of life insurance companies in some cases. For example, as of December 31, 2011, Fannie Mae’s serious delinquency rates (loans 60 days or more delinquent) for only those loans acquired or guaranteed in 2005 through 2007 ranged from 0.66 to 0.89 percent (see table 8). As of the same date, Freddie Mac’s serious delinquency rates for loans acquired or guaranteed during this period ranged from 0.20 to 0.74 percent. These rates are considerably lower than the serious delinquency rates for loans originated by CMBS lenders in 2005 through 2007, which peaked at about 24 percent for loans originated in 2007. Starting in 2008, the serious delinquency rates for loans originated each year by CMBS lenders dropped considerably. FHA’s serious delinquency rates for loans originated in 2005 through 2010 were much higher than either of the enterprises’ for three of the six years. FHA’s highest delinquency rate was for 2009, with loans originated that year having a serious delinquency rate of more than 5 percent, compared to a negligible delinquency rate for the enterprises’ loans acquired that year. 
The enterprises also performed better than commercial banks and thrifts insured by the Federal Deposit Insurance Corporation (FDIC). FDIC’s Quarterly Banking Profile provides information on the multifamily loan performance of insured commercial banks and thrifts based on their total outstanding portfolios at the end of each quarter. Specifically, the profile provides data on loans that are 90 days or more delinquent. As shown in table 9, with the exception of 2005 for Fannie Mae, for the period 2005 through 2011 the percentage of the enterprises’ multifamily loans that were delinquent for 60 days or more was always lower than the percentage of bank and thrift loans that were delinquent for 90 days or more. This was the case even though the 60-day delinquency rate is a stricter measure of delinquency than the 90-day rate. The life insurance companies generally performed better than Fannie Mae, while Freddie Mac’s performance was generally comparable to that of life insurance companies. As shown in table 10, for life insurance companies, the percentage of unpaid principal balance on multifamily loans that was 60 days or more delinquent ranged from 0 to 0.21 percent from 2005 through 2011. Fannie Mae had higher multifamily serious delinquency rates for all the years in the period, while Freddie Mac had rates closer to those of life insurance companies until 2010 and 2011. Smaller participants in the multifamily marketplace generally experienced fewer delinquencies. From 2005 through 2011, only 1 of 401 loans guaranteed or financed by RHS was delinquent for 60 days or more. Further, only one of the six HFAs and one of the three loan consortiums reported delinquencies (60 days or more delinquent). Both reported delinquencies in 2005 and 2006. 
Enterprises’ Participation in Multifamily Risk-Sharing Programs Has Been Limited 

Fannie Mae and Freddie Mac have entered into risk-sharing agreements with FHA and RHS to increase the supply of affordable multifamily housing and help offset some of their credit risk, but these programs have involved relatively few loans. The enterprises participate in two risk-sharing programs with FHA. The first of these programs, known as the “standard” FHA risk-sharing program, started in 1994. Under the program, the enterprises acquire loans for eligible affordable multifamily housing projects (either new construction or rehabilitation). The loans generally are not to exceed $50 million, and the term of the mortgage is for 15 years or more. In the event of a loss on a loan, the enterprises assume the primary risk of loss and FHA reimburses them for 50 percent of the loss. In exchange for reimbursing the enterprises, FHA charges them an annual risk-sharing premium of 25 basis points (.25 percent) of the average unpaid principal balance. The second of these programs, the Green Refinance Plus program, was established in 2011 to preserve and improve existing affordable housing by providing financing to renovate or retrofit properties. No less than 5 percent of the principal balance must be used for renovation or energy or water retrofitting, and the term of the loans must be for no less than 10 years. Under the program, HUD assumes the first loss in an amount equal to 4.35 percent of the unpaid principal balance on a defaulted loan plus 50 percent of the balance of the loss on an equal basis with the enterprises. FHA charges the enterprises an annual risk-sharing premium of 40 basis points (.40 percent) of the average unpaid principal balance for loans with terms of 15 years or more. 
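The loss-sharing and premium arithmetic described above can be sketched as follows. This is a hedged illustration of the percentages stated in the text; the function names and the $10 million example are hypothetical, and the agencies' actual settlement mechanics may differ:

```python
# Hedged sketch of the FHA loss-sharing mechanics described above.
# Function names and dollar figures are illustrative only.

def standard_fha_reimbursement(loss):
    """Standard program: the enterprise bears primary risk of loss,
    and FHA reimburses 50 percent of the loss."""
    return 0.50 * loss

def standard_fha_annual_premium(avg_unpaid_balance):
    """FHA charges 25 basis points (.25 percent) of the average
    unpaid principal balance each year."""
    return 0.0025 * avg_unpaid_balance

def green_refi_plus_hud_share(unpaid_balance, loss):
    """Green Refinance Plus: HUD takes the first loss up to 4.35 percent
    of unpaid principal balance, then splits the remaining loss 50-50
    with the enterprise."""
    first_loss = min(loss, 0.0435 * unpaid_balance)
    remaining = loss - first_loss
    return first_loss + 0.50 * remaining

# Hypothetical $10 million loan suffering a $1 million loss
fha_share = standard_fha_reimbursement(1_000_000)        # 500,000
premium = standard_fha_annual_premium(10_000_000)        # 25,000
hud_share = green_refi_plus_hud_share(10_000_000, 1_000_000)
# HUD first loss 435,000, plus half of the remaining 565,000 = 717,500
```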
According to Fannie Mae officials, HUD has been considering changes that Fannie Mae suggested to the execution of this program that would make it more attractive to the owners of existing affordable multifamily properties. Over this period, Freddie Mac purchased or guaranteed more than 9,000 loans with unpaid principal balances of $129 billion. The enterprises also have entered into risk-sharing agreements with RHS, but to a lesser extent than with FHA. Under the enterprises’ risk-sharing agreement with the RHS loan guarantee program (known as 538 loans), RHS will guarantee up to 90 percent of the loan. According to RHS, the only risk to the enterprises would be due to nonperformance by the lender. From 2004 through 2011, Fannie Mae purchased and securitized four loans under the RHS program, with unpaid principal balances of more than $7 million. From 2001 through 2011, Freddie Mac purchased and securitized three loans or bonds with RHS, with unpaid principal balances of $6 million as of the end of 2011. As noted previously, the enterprises also have supported state and local HFAs by providing credit enhancements to tax-exempt bonds used to finance affordable multifamily housing. We interviewed selected state and local HFAs, and in general they viewed the enterprises as important players in providing liquidity for affordable multifamily properties. For example, officials from a large local HFA told us that the enterprises have played a critical role in providing liquidity and long-term credit enhancement to affordable and market-rate developments. According to an official from a small state HFA, before 2008 Fannie Mae was an active and highly valued buyer of their small tax-exempt private activity bonds. The official added that the ability to sell bonds under $5 million on a direct placement basis to Fannie Mae was extremely helpful and has been missed. 
Allowing Fannie Mae and Freddie Mac to reenter the bond market with private placements would be a significant benefit, and would allow the HFA to provide reliable and dependable financing options to their multifamily affordable housing projects, according to this HFA official. The enterprises have also participated in two temporary Treasury HFA initiatives: (1) the Temporary Credit and Liquidity Facilities (TCLF) program and (2) the New Issue Bond Program (NIBP). TCLF, which provides replacement credit enhancement and liquidity support to outstanding HFA variable-rate demand bonds, is set to expire in 2015. The multifamily NIBP was established to facilitate the purchase of newly issued HFA bonds, the proceeds of which would be used to finance multifamily projects under each participating HFA’s program. In general, Treasury sets the pricing parameters and agrees to take the first loss of principal up to 35 percent. The enterprises participate in the program on a 50-50 loss-sharing basis with each other after the top loss coverage by Treasury. Regulators Have Identified a Number of Deficiencies in Multifamily Credit Risk Management From 2006 through 2011, FHFA and its predecessor, OFHEO, identified deficiencies in management of credit risk at the enterprises. FHFA oversees the enterprises’ credit risk management through on-site examinations and off-site monitoring. As part of its annual on-site examination of the safety and soundness of the enterprises, FHFA assesses their enterprise risk, which includes credit, market, and operational risk. The written report that FHFA submits to Congress by June 15 of each year describes the financial safety and soundness of each enterprise, including the results and conclusions from annual examinations. FHFA also can conduct targeted examinations, which are in-depth, focused evaluations of a specific risk or risk-management system. 
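The NIBP loss allocation described above (Treasury takes the first loss of principal up to 35 percent, and the enterprises share any remainder equally) can be illustrated with a short sketch. The function name and figures are hypothetical, not drawn from program documents:

```python
# Illustrative allocation of a principal loss under the NIBP structure
# described above. Hypothetical function name and figures.

def nibp_loss_allocation(principal, loss):
    """Treasury absorbs the first loss up to 35 percent of principal;
    the two enterprises split any remaining loss 50-50."""
    treasury = min(loss, 0.35 * principal)
    remainder = loss - treasury
    return {
        "treasury": treasury,
        "fannie_mae": 0.5 * remainder,
        "freddie_mac": 0.5 * remainder,
    }

# Hypothetical $100 million of bonds with a $40 million principal loss:
# Treasury absorbs $35 million; each enterprise absorbs $2.5 million.
shares = nibp_loss_allocation(100_000_000, 40_000_000)
```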
Throughout the year, FHFA conducts ongoing supervision of the enterprises that includes on-site and off-site monitoring and analyzing of each enterprise’s overall business profile, including trends or emerging risks. FHFA’s Division of Enterprise Regulation prepares quarterly risk assessments that inform an Interim Supervisory Assessment Letter, which provides FHFA’s view of the condition of the enterprise midway through the examination cycle. FHFA documents deficiencies identified in examinations or ongoing supervision in a conclusion letter that communicates findings, conclusions, and the assigned supervisory rating. FHFA is to follow up on deficiencies to ensure that the enterprise’s response is appropriate, timely, and effective. Since 2006, OFHEO and FHFA have identified a number of deficiencies with Fannie Mae’s management of multifamily credit risk, including several weaknesses in oversight of its DUS program. Specifically, in OFHEO’s 2006 Annual Report to Congress, the agency reported that Fannie Mae’s underwriting standards needed updating because of the volume of waivers granted to DUS lenders. According to OFHEO, the high waiver rate was indicative of a policy that was too restrictive, lending practices that were too liberal, or a policy that was not current relative to market conditions. While Fannie Mae authorized DUS lenders to review and approve waivers, the enterprise had not established a strong and comprehensive quality control process. In 2011, FHFA communicated supervisory concerns related to Fannie Mae’s DUS program. Fannie Mae reported that it was taking several steps to address these deficiencies, including training its credit underwriting staff, conducting due diligence and credit analysis for DUS transactions, and expanding its monitoring of multiple loans with the same entity. 
FHFA noted that the steps Fannie Mae planned to take appeared reasonable but indicated that the enterprise must show that it had implemented these changes and that they could be sustained. In addition to deficiencies in the DUS program, OFHEO and FHFA identified deficiencies with Fannie Mae’s multifamily quality control function, asset management (that is, how it manages loans it acquires), underwriting practices, and information systems supporting credit risk management: 

OFHEO reported in 2006 that Fannie Mae faced deficiencies with its quality control function because Fannie Mae’s oversight focused on reviewing documents rather than analyzing and assessing credit information. OFHEO noted that credit information was incomplete or not readily available. In 2007, OFHEO reported that the multifamily quality-control process was improved and expanded to provide better coverage of multifamily loans. 

In 2008, OFHEO reported that Fannie Mae had begun to address deficiencies in asset management. Further, in 2010 FHFA reported that Fannie Mae was identifying problem assets earlier, developing workout strategies for problem loans, and managing delinquencies and foreclosed properties to improve the amount recovered on sales of property in markets and minimize losses. 

In 2011, FHFA found that Fannie Mae needed to improve its risk-management practices for multifamily loans. To address this issue, Fannie Mae stated that it planned to review its existing risk-management processes and controls. 

In 2011, FHFA reviewed certain loans and advised Fannie Mae to strengthen its underwriting and quality control practices related to appraisals and verification of financial information. Fannie Mae stated that it would review its procedures and consult with FHFA as it proceeded. Additionally, in 2011 Fannie Mae agreed to respond to supervisory concerns relating to waiver and exception monitoring and reporting. 
Fannie Mae stated that it would analyze the loans that had been granted waivers or extensions to determine if there were correlations between waivers and subsequent loan performance. OFHEO reported in 2008 that Fannie Mae’s credit review function was understaffed and that information system deficiencies or inefficiencies compromised the enterprise’s ability to manage risks. FHFA also identified deficiencies with Fannie Mae’s information systems in 2009 and 2010. According to FHFA, Fannie Mae has begun a transformation initiative to centralize data sources and improve data integrity. We reviewed OFHEO’s annual reports to Congress from 1997 through 2008. During this 12-year period, OFHEO did not report any credit risk deficiencies in Freddie Mac’s multifamily housing activities. However, since 2009 FHFA has identified deficiencies in Freddie Mac’s multifamily asset-management function. For example, FHFA reported in its 2009 Annual Report to Congress that a targeted examination of Freddie Mac’s asset-management function had found that the function needed to be strengthened. FHFA noted that the multifamily business unit had begun to address some of the issues identified. In its 2010 Annual Report to Congress, FHFA continued to report on deficiencies with Freddie Mac’s asset-management function, including that it was poorly managed and lacked the necessary processes and controls to identify, evaluate, and control problem assets. Additionally, the 2010 report noted problems with the multifamily division’s management of the problem loan watch list. In this report, FHFA noted that while risk management for multifamily asset management was unsatisfactory, management had corrected or was addressing these issues. According to FHFA officials, these deficiencies have since been addressed and closed. Agency Comments and Our Evaluation We provided a draft of this report to FHFA, and it provided copies to Fannie Mae and Freddie Mac. 
FHFA, Fannie Mae, and Freddie Mac provided technical comments, which we incorporated into the report where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Acting Director of FHFA and interested committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology Our objectives were to determine (1) how Fannie Mae and Freddie Mac’s (the enterprises) multifamily loan activities, products, and loan performance have changed over time; (2) the enterprises’ role in the multifamily housing financing marketplace and the extent to which they have met their affordable housing goals; and (3) how the enterprises’ credit standards and delinquency rates compare with those of other mortgage capital sources and how they have managed credit risk associated with their multifamily housing activities. To describe how the enterprises’ multifamily loan activities, products, and performance have changed, we analyzed loan-level data from Fannie Mae and Freddie Mac for 1994 (the earliest year for which data were available) through 2011. Each enterprise provided data on the characteristics (at acquisition) of the multifamily loans they purchased and data on the performance of loans over time. We used the data to determine how many loans each enterprise purchased each year from 1994 through 2011 and the unpaid principal balance of those loans at the time of acquisition. 
Because certain Fannie Mae multifamily loan products roll over periodically (which creates a new loan number but does not represent a new acquisition), we used a new acquisition indicator provided by Fannie Mae to identify its acquisitions each year. When fluctuations in purchase volume were identified, we interviewed Fannie Mae and Freddie Mac officials to determine the reasons for these fluctuations. We also analyzed data on each enterprise’s annual purchases of multifamily loans to determine the unpaid principal balance of (1) loans that the enterprise expected to hold in portfolio; (2) loans that it expected to securitize (that is, packaging them into mortgage pools to support mortgage-backed securities (MBS)); and (3) bond credit enhancements. For Fannie Mae, securitized loans included MBS and discount MBS. Freddie Mac’s securitized portfolio included participation certificates, tax-exempt bond securitization, and K-deals. Because we were unable to use the loan-level data provided by the two enterprises to determine how much of the multifamily MBS they issued were held in portfolio or sold to investors, we asked both enterprises to provide additional information on MBS held in portfolio. Both enterprises were able to provide data on purchases of their own MBS in 2010 and 2011. In addition, we analyzed each enterprise’s annual multifamily loan purchases from 1994 through 2011 as follows: Size of properties financed—We determined the number and unpaid principal balance of loans purchased each year that financed properties with 5 to 50 units and properties with 51 or more units. Loan size—We determined the number and unpaid principal balance of loans purchased each year that fell into the following four categories: $0 to less than $5 million, $5 million to less than $50 million, $50 million to less than $100 million, and $100 million or greater. 
Geography—We determined the unpaid principal balance of loans acquired each year in the 25 largest metropolitan statistical areas. For Fannie Mae, we limited our analysis to single loans associated with a single property. To determine the percentage of the nation’s multifamily stock that was located in these 25 metropolitan areas, we analyzed data from the 2010 American Community Survey. In addition to focusing on the 25 largest metropolitan areas, we also determined the unpaid principal balance of loans acquired in each state (excluding loans associated with multiple properties as described above). Period of the loan—We determined the number and unpaid principal balance of loans purchased each year that fell into the following four categories: 60 months or fewer, greater than 60 months to less than 120 months, 120 months, and more than 120 months. Asset class—We determined the percentage of loans purchased during the 18-year period (based on unpaid principal balance) in the following five asset classes: traditional rental, student, senior, manufactured, and cooperative housing. Type of interest rate—We determined the percentage of loans purchased each year (based on unpaid principal balance) that were fixed- and adjustable-rate mortgages. Structured finance—We determined the percentage of unpaid principal balance acquired each year that was associated with transactions involving multiple loans or multiple properties. Fannie Mae’s Delegated Underwriting and Servicing (DUS®) Program—We determined the number and unpaid principal balance of loans that were purchased under the DUS program each year. For all of the analyses of the enterprises’ multifamily loan purchases, we adjusted the dollar amounts for inflation. In addition, for each analysis we did not include loans with missing values. 
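As one illustration of the categorizations above, the four loan-size buckets can be sketched as a simple classifier. The function name and sample balances are hypothetical; the thresholds are the four categories stated in the methodology:

```python
# Hypothetical sketch of the loan-size categorization described above.
# Thresholds follow the report's four buckets; everything else is illustrative.

def loan_size_category(upb):
    """Assign a loan's unpaid principal balance (in dollars) to one of
    the four size buckets used in the analysis."""
    if upb < 5_000_000:
        return "$0 to less than $5 million"
    elif upb < 50_000_000:
        return "$5 million to less than $50 million"
    elif upb < 100_000_000:
        return "$50 million to less than $100 million"
    else:
        return "$100 million or greater"

# Classify a few hypothetical balances
sample_categories = [
    loan_size_category(x)
    for x in (2_000_000, 30_000_000, 75_000_000, 150_000_000)
]
```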
We also analyzed data on the performance of multifamily loans each enterprise purchased from 1994 to 2011 as follows: Serious delinquency rates—We calculated annual serious delinquency rates by dividing the current unpaid principal balance of loans that were 60 or more days delinquent as of the end of the year by the total outstanding unpaid principal balance as of the end of the year. We also determined the amount of each enterprise’s outstanding unpaid principal balance that was 60 or more days delinquent at the end of each year. Loan maturity—We determined the number and unpaid principal balance of loans that were going to mature within the next 10 years. Serious delinquency rates for loans with varying debt-service coverage and loan-to-value (LTV) ratios—We determined the serious delinquency rates for loans with debt-service coverage ratios above and below 1.25 and for loans with LTV ratios above and below 80 percent. For the loan maturity analysis, we adjusted the dollar amounts for inflation. For each analysis, we did not include loans with missing values. We also analyzed aggregated multifamily data that both Fannie Mae and Freddie Mac provided on the following: real estate-owned (REO) properties from 2002 through 2011 for Fannie Mae and from 1995 through 2011 for Freddie Mac; net income from 2002 through 2011 for Fannie Mae and from 2005 through 2011 for Freddie Mac; net charge-offs (debts an entity is unlikely to collect) from 2002 through 2011; guarantee fees collected from 2002 through 2011; and administrative costs from 2002 through 2011 for Fannie Mae and from 2005 through 2011 for Freddie Mac. To assess the reliability of these data, we interviewed Fannie Mae and Freddie Mac representatives about how they collected data and helped ensure data integrity, and reviewed internal reports on data reliability. We also compared selected enterprise data with information in public filings. 
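The serious delinquency rate calculation described above (the unpaid principal balance of loans 60 or more days delinquent at year-end, divided by total outstanding unpaid principal balance at year-end) can be sketched as follows. The loan records are hypothetical:

```python
# Minimal sketch of the serious delinquency rate calculation described
# above. The loan records below are hypothetical.

def serious_delinquency_rate(loans):
    """Each loan is a dict with its year-end unpaid principal balance
    ('upb') and days delinquent ('days_delinquent'). Returns the share
    of total UPB that is 60 or more days delinquent."""
    total_upb = sum(loan["upb"] for loan in loans)
    delinquent_upb = sum(
        loan["upb"] for loan in loans if loan["days_delinquent"] >= 60
    )
    return delinquent_upb / total_upb if total_upb else 0.0

portfolio = [
    {"upb": 5_000_000, "days_delinquent": 0},
    {"upb": 3_000_000, "days_delinquent": 90},  # seriously delinquent
    {"upb": 2_000_000, "days_delinquent": 30},  # under the 60-day threshold
]
rate = serious_delinquency_rate(portfolio)  # 3M / 10M = 0.30
```

Note that the rate weights loans by unpaid principal balance rather than counting loans, so a single large delinquent loan moves the rate more than several small ones.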
In addition, we conducted reasonableness checks on the data to identify any missing, erroneous, or outlying figures. We determined that the data were sufficiently reliable for our purposes. To determine what information is available about the enterprises’ role in the multifamily housing financing marketplace, we analyzed data on Fannie Mae and Freddie Mac’s share of the multifamily housing market from 2005 through 2011. First, we analyzed Flow of Funds data (Table L.219, published on June 7, 2012) from the Board of Governors of the Federal Reserve System (Federal Reserve) on multifamily mortgage debt outstanding to determine the enterprises’ share of debt holdings. Second, we analyzed data on the financing of multifamily loans originated by large institutional lenders that the Mortgage Bankers Association (MBA) published. MBA gathers these multifamily origination data through a survey and publishes the data in its Annual Commercial/Multifamily Mortgage Bankers Origination Summation. In 2011, 99 firms participated in MBA’s survey, including life insurance companies, commercial mortgage-backed securities (CMBS) lenders, lenders that sell loans to Fannie Mae and Freddie Mac, FHA lenders, and other lender groups. The survey did not include small banks and savings and loan associations (thrifts) because they tend to operate as a separate market, according to MBA. Although not comprehensive, the survey data from MBA are the data most often cited when discussing the enterprises’ share of the multifamily housing financing marketplace. We assessed the reliability of both types of data—mortgage debt outstanding and multifamily origination data—by interviewing Federal Reserve and MBA representatives, respectively, about the methods they used to collect and help ensure the integrity of the information. We determined that the data were sufficiently reliable for our purposes. 
We also compared data from the enterprises and two major participants in the multifamily housing financing marketplace—life insurance companies and CMBS lenders—to illustrate how these participants’ multifamily activities have changed over time. Specifically, we compared data obtained from the American Council of Life Insurers (ACLI) and Trepp on loans originated from 2005 through 2011 with our analysis of data from Fannie Mae and Freddie Mac on loans purchased during those years. We assessed the reliability of the ACLI and Trepp data by sending them a set of standard data reliability questions and obtaining their written responses. We followed up with them when we had questions on the data or their responses to our data reliability questions. Where possible, we also compared the data they provided to us with published data. We determined that the data were sufficiently reliable for our purposes. To identify reports on the enterprises’ role in the multifamily housing financing marketplace, we conducted a search of literature published since 1995 but found few studies that focused on the enterprises’ multifamily activities. Ultimately, we identified seven studies that provided varying viewpoints on the role that Fannie Mae and Freddie Mac have played in multifamily housing finance. Although we found that these studies lacked empirical research, we determined that they were sufficiently reliable for our purposes (identifying literature and authors’ conclusions and the limitations of the studies). To obtain additional views on the enterprises’ role in multifamily housing financing, we met with the authors of two of these studies and with researchers who have knowledge about housing finance and the operations of the enterprises. 
We also discussed the enterprises’ role with representatives from the Federal Housing Finance Agency (FHFA); Fannie Mae; Freddie Mac; other participants in the multifamily housing financing marketplace such as ACLI, the Commercial Real Estate Finance Council (CREFC), the Federal Housing Administration (FHA), MBA, the National Association of Affordable Housing Lenders (NAAHL), the National Council of State Housing Agencies (NCSHA), and the Department of Agriculture’s Rural Housing Service (RHS); the National Multi Housing Council; and the Consumer Federation of America. To comment on the enterprises’ role in financing loans for small properties (5 to 50 units), we analyzed data from the 2010 American Community Survey on the percentage of renters who live in small multifamily structures and data from the enterprises’ 2011 Annual Housing Activity Reports on their purchases of loans for such properties. We reviewed information on the American Community Survey data we used and determined that the data were sufficiently reliable for the purposes of our report. To report on the extent to which the enterprises have met their affordable housing goals, we reviewed the laws and regulations establishing the goals from 1992 to the present. We also reviewed reports on the affordable housing goals, including GAO reports and reports from the Department of Housing and Urban Development (HUD), FHFA, the Office of Federal Housing Enterprise Oversight (OFHEO), and independent researchers. We relied on HUD documents to assess the enterprises’ annual goal performance from 1993 through 2000. For 2001 through 2009, we analyzed data in Annual Housing Activity Reports (activity report) provided by FHFA. To assess the contribution of the enterprises’ multifamily activities to achievement of the affordable housing goals from 2001 through 2009, we calculated multifamily purchases as a percentage of the total mortgage purchases used to meet each goal using data from the activity reports. 
We assessed the reliability of the data used to document goal performance by interviewing FHFA officials and representatives at Fannie Mae and Freddie Mac about the methods they use to collect and help ensure the integrity of the information. We also reviewed internal reports that the enterprises completed related to data reliability. We determined that the data were sufficiently reliable for our purposes. To compare the enterprises’ credit standards and delinquency rates with those of major mortgage capital sources, we analyzed loan-level data on the enterprises’ median debt-service coverage and LTV ratios and delinquency rates. We compared these ratios and delinquency rates with those of selected market players: For life insurance companies, ACLI provided us with data from 2005 through 2011 on median debt-service coverage ratios, median LTV ratios, the percentage of the outstanding unpaid principal balance that was 60 days or more delinquent as of the end of each year, median loan size, and median number of units per property. For CMBS lenders, we obtained data from Trepp for 2005 through 2011 on median debt-service coverage ratios, median LTV ratios, median loan size, median number of units per property, and the percentage of loans 60 days or more delinquent—based on unpaid principal balance—for only those loans originated from 2005 through 2010. We obtained data from FHA on the percentage of loans 60 days or more delinquent—based on unpaid principal balance—for only those loans originated from 2005 through 2010. We obtained data from RHS for 2005 through 2011 on the number of loans 60 days or more delinquent, the average loan size, and the average units per property. 
For state and local housing finance agencies (HFA), NCSHA helped us obtain data from four state and two local HFAs. Specifically, we obtained data on median debt-service coverage ratios, median LTV ratios, percentage of outstanding unpaid principal balances that were 60 days or more delinquent, median loan size, and median number of units per property. For loan consortiums, NAAHL helped us obtain data from three consortiums. Specifically, we obtained data for 2005 through 2011 on median debt-service coverage ratios, median LTV ratios, and the percentage of loans 60 days or more delinquent. Additionally, two of the three loan consortiums provided us with data on median loan size and the median number of units per property. We assessed the reliability of the data provided by these data sources by sending them a set of standard data reliability questions and obtaining their written responses. We followed up with the specific sources of data when we had questions about the data or their responses to our data reliability questions. Where possible, we also compared the data they provided to us with published data. We determined that the data were sufficiently reliable for our purposes. We also interviewed officials from ACLI, CREFC, FHA, RHS, NCSHA, and NAAHL. To determine the extent to which the enterprises shared risk with FHA and RHS, we obtained loan-level and aggregated data from the enterprises on the number of loans in risk-sharing programs with FHA and RHS. As discussed earlier, we took a number of steps to assess the reliability of the loan-level and aggregated data and determined that the data were sufficiently reliable for our purposes. To help us understand how these risk-sharing programs operate, we reviewed documents describing the programs, including memorandums of understanding between the enterprises and FHA and RHS. 
We reviewed documentation on the enterprises’ efforts to support state and local HFAs, including the Department of the Treasury’s temporary HFA initiative. We also interviewed officials from Fannie Mae, Freddie Mac, FHA, RHS, NCSHA, and selected state and local HFAs to obtain information on their experiences with these programs. To describe how the enterprises managed credit risk associated with their multifamily activities, we reviewed FHFA’s examination reports and OFHEO and FHFA annual reports to Congress, which summarize credit risk issues identified during annual examinations of the enterprises. We also interviewed FHFA officials to obtain information on current credit risk issues. To describe how the enterprises have addressed or will address these issues, we reviewed the enterprises’ formal responses to FHFA’s examination reports and any subsequent FHFA responses. Because FHFA’s examination reports and the enterprises’ responses are confidential, we limited our discussions of them to a summary. We also made revisions based on concerns FHFA raised with our original language summarizing supervisory concerns expressed in examination reports. We conducted this performance audit from November 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Fannie Mae and Freddie Mac’s Multifamily Loan Purchases in the 25 Largest Metropolitan Areas Table 13 shows Freddie Mac’s multifamily loan purchases in the 25 largest MSAs from 1994 through 2011. 
Appendix III: Additional Data on Fannie Mae and Freddie Mac’s Multifamily Activities In this appendix, we present additional analyses of loan-level data and aggregated data provided by Fannie Mae and Freddie Mac. Specifically, we analyzed multifamily loan-level data for 1994 through 2011 from both enterprises to determine (1) the unpaid principal balance of loans purchased in each state and (2) the delinquency rates of loans with various debt-service coverage and loan-to-value (LTV) ratios. In addition, we asked both enterprises to provide aggregated multifamily data on their administrative costs. Multifamily Loan Purchases by State Table 14 contains data on the unpaid principal balance of multifamily loans Fannie Mae and Freddie Mac acquired in the 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands from 1994 through 2011. Serious delinquency rates for multifamily loans varied by underwriting characteristic. Table 15 contains loan counts and serious delinquency rates (based on unpaid principal balance) for multifamily loans with original debt-service coverage ratios less than 1.25 purchased from 1994 through 2011. Table 16 contains loan counts and delinquency rates (based on unpaid principal balance) for multifamily loans with original debt-service coverage ratios greater than or equal to 1.25 purchased from 1994 through 2011. Table 17 contains loan counts and delinquency rates (based on unpaid principal balance) for multifamily loans with original loan-to-value (LTV) ratios less than or equal to 80 percent purchased from 1994 through 2011. Table 18 contains loan counts and delinquency rates (based on unpaid principal balance) for multifamily loans with original LTV ratios greater than 80 percent purchased from 1994 through 2011. The enterprises’ administrative costs associated with their multifamily business from 2002 through 2011 are shown in table 19. 
Appendix IV: Enterprises’ Multifamily Loan and Property Sizes Compared with Other Market Participants From 2005 through 2011, the size of the multifamily loans that Freddie Mac purchased and the properties it financed were more comparable with those financed by life insurance companies than were Fannie Mae’s. As shown in table 20, during this period Freddie Mac purchased loans and financed properties that were larger than those financed by Fannie Mae, generally comparable to those financed by life insurance companies, and generally larger than those financed by commercial mortgage-backed securities (CMBS) lenders. During the same period, Fannie Mae purchased smaller loans and financed smaller properties than those financed by Freddie Mac, life insurance companies, and CMBS lenders (except for 2010). Other sources of multifamily housing financing—state and local housing finance agencies (HFA), loan consortiums, and the Rural Housing Service (RHS)—focused on smaller loans and properties for the most part. For example, as shown in table 21, data for 2005 through 2011 from three state HFAs showed that they financed small loans (with median loan sizes ranging from more than $165,000 to about $4 million) and small properties (median units per property ranging from 12 to 95). Three other state and local HFAs reported median loan sizes ranging from $3 million to $26 million and median units per property ranging from 76 to 230. RHS’s average loan size ranged from about $1 million to $1.4 million. During this period, the median number of multifamily units supported by RHS ranged from 44 to 50. The two loan consortiums that provided us with data reported much smaller loans than the enterprises, with median loan sizes ranging from $284,000 to $1.6 million and median number of units per property ranging from 12 to 43 (see table 22). 
Appendix V: Data from Selected Housing Finance Agencies and Loan Consortiums This appendix contains data provided by six state and local housing finance agencies (HFA) and three loan consortiums on the first-lien multifamily mortgages that they originated from 2005 through 2011. Specifically, they provided data on their median debt-service coverage ratios, median loan-to-value (LTV) ratios, and delinquency rates. These data are presented as examples of specific HFAs and loan consortiums for purposes of comparison against information provided in this report for each of the enterprises. Because there can be variation among individual HFAs and individual loan consortiums, these data should not be seen as representative of all HFAs or loan consortiums. Table 23 provides information on the median debt-service coverage and LTV ratios for loans originated by six HFAs from 2005 through 2011. The debt-service coverage ratio estimates a multifamily borrower’s ability to service its mortgage obligation using the secured property’s cash flow, after deducting nonmortgage expenses from income. The higher the debt-service coverage ratio, the more likely a multifamily borrower will be able to continue servicing its mortgage obligation. The LTV ratio is the ratio of the unpaid principal balance of a mortgage loan to the value of the property that serves as collateral for the loan, expressed as a percentage. Loans with high LTV ratios generally tend to have a higher risk of default and, if a default occurs, a greater risk that the gross loss will be high compared with loans with lower LTV ratios. Table 24 provides information on the median debt-service coverage and LTV ratios for loans originated by three loan consortiums from 2005 through 2011. Table 25 includes information on the percentage of loans seriously delinquent for the six HFAs. 
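The two underwriting ratios defined above are simple arithmetic measures. The following sketch illustrates how they are computed; the function names and dollar figures are purely hypothetical and are not drawn from the report's data:

```python
def debt_service_coverage_ratio(annual_income, nonmortgage_expenses, annual_debt_service):
    """Cash flow remaining after nonmortgage expenses, divided by the mortgage obligation."""
    return (annual_income - nonmortgage_expenses) / annual_debt_service

def loan_to_value(unpaid_principal_balance, property_value):
    """Unpaid principal balance as a percentage of the collateral property's value."""
    return 100.0 * unpaid_principal_balance / property_value

# Hypothetical property: $500,000 income, $200,000 nonmortgage expenses,
# $240,000 annual debt service; $2 million loan on a $2.5 million property.
print(round(debt_service_coverage_ratio(500_000, 200_000, 240_000), 2))  # 1.25
print(loan_to_value(2_000_000, 2_500_000))                               # 80.0
```

On these hypothetical figures, a ratio of 1.25 means the property's net cash flow exceeds the mortgage obligation by 25 percent, and an 80 percent LTV leaves a 20 percent equity cushion against loss in a default.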
The percentage of loans seriously delinquent (60 or more days delinquent) each year is based on unpaid principal balance and the status (as of December 2011) of only those loans originated in that year. Table 26 includes information on the percentage of loans seriously delinquent for three loan consortiums. Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Paige Smith (Assistant Director), Farah Angersola, Steve Brown, William Chatlos, John Karikari, John McGrail, Jon Menaster, John Mingus, Marc Molino, José R. Peña, Barbara Roesmann, Jim Vitarello, and Heneng Yu made key contributions to this report.
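The serious delinquency measure described above, a share of unpaid principal balance rather than a simple count of loans, can be sketched as follows; the portfolio figures are hypothetical:

```python
def serious_delinquency_rate(loans):
    """Percentage of total unpaid principal balance (UPB) held by loans
    that are 60 or more days delinquent."""
    total_upb = sum(upb for upb, _ in loans)
    delinquent_upb = sum(upb for upb, days in loans if days >= 60)
    return 100.0 * delinquent_upb / total_upb

# Hypothetical portfolio of (unpaid principal balance, days delinquent) pairs.
portfolio = [(1_000_000, 0), (2_000_000, 90), (1_000_000, 30)]
print(serious_delinquency_rate(portfolio))  # 50.0
```

Note that weighting by balance means one large delinquent loan can outweigh several small current ones: here a single delinquent loan, one of three, accounts for half the measured rate.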
Congress established the enterprises to provide stability in the secondary market for residential mortgages and serve the mortgage credit needs of targeted groups. But in September 2008, FHFA placed the enterprises in conservatorship out of concern that their deteriorating financial condition would destabilize the financial system. As Congress and the Executive Branch have explored options for restructuring the enterprises, most of the discussion has focused on the single-family market. But the enterprises also play a large role in providing financing for multifamily properties (those with five or more units). GAO was asked to describe (1) how the enterprises’ multifamily loan activities have changed, (2) the role the enterprises have played in the multifamily financing marketplace and how they have met their affordable housing goals, and (3) how the enterprises’ multifamily delinquency rates compare with those of other mortgage capital sources and how they have managed their credit risk. To address these objectives, GAO analyzed (1) loan-level data from 1994 (the earliest period for which loan-level data were available) through 2011 from the enterprises and (2) data from the Mortgage Bankers Association; interviewed key multifamily housing stakeholders; and reviewed FHFA examination reports. FHFA, Fannie Mae, and Freddie Mac provided technical comments, which GAO incorporated where appropriate. From 1994 through 2011, the multifamily loan activities of Fannie Mae and Freddie Mac (the enterprises) generally increased. In this period, Fannie Mae held a lower percentage of multifamily loans in its portfolio than Freddie Mac. While the enterprises’ multifamily business operations generally were profitable, both enterprises reported losses in 2008 and 2009. 
In recent years, Fannie Mae and Freddie Mac played a larger role in the multifamily marketplace, and their multifamily activities contributed considerably to meeting their affordable housing goals (set by their regulator for the purchase of mortgages that serve targeted groups or areas). Before 2008, the enterprises financed about 30 percent of multifamily loans. Their share increased to 86 percent in 2009, but decreased to 57 percent in 2011 as other participants reentered the market. GAO's analysis showed that multifamily activities greatly contributed to the enterprises' ability to meet affordable housing goals. For example, the enterprises' multifamily activities constituted 4.5 percent of their total business in 2008, but about a third of the units used to meet the goal of serving low- and moderate-income persons were multifamily units. The enterprises have purchased multifamily loans that generally performed as well as or better than those of other market participants, but the Federal Housing Finance Agency (FHFA) has identified deficiencies in their credit risk management. In 2005-2008, the enterprises' serious delinquency rates (less than 1 percent) were somewhat lower than the rates on multifamily loans made by commercial banks and much lower than rates for multifamily loans funded by commercial mortgage-backed securities. FHFA, through its examination and oversight of the enterprises, identified a number of credit risk deficiencies over the past few years. For example, FHFA found deficiencies in Fannie Mae's delegated underwriting and servicing program, risk-management practices, and information systems; and Freddie Mac's management of its lower-performing assets. Both enterprises have been taking steps to address these deficiencies.
Background The DOD Medicare subvention demonstration created a link between the DOD health care delivery system and Medicare, a health insurance program for the elderly and disabled, which is operated by the Health Care Financing Administration (HCFA) within the Department of Health and Human Services (HHS). DOD and HCFA implemented this demonstration during a period of change in both Medicare and military health care. DOD’s TRICARE System Since its beginning in 1995, DOD’s health system, called TRICARE, has offered care to active duty members of the uniformed services, retired members under age 65, and their respective families and survivors—a population of about 6.6 million. An additional 1.5 million retirees (including dependents) aged 65 and older could receive limited health care. DOD delivers care through about 600 military treatment facilities (MTF) worldwide. TRICARE covers a broad range of outpatient and inpatient services, including home health, hospice, and skilled nursing facility care. Services not available at an MTF are purchased through a network of civilian specialists and hospitals. TRICARE includes a managed care option, TRICARE Prime, which offers care at the MTF augmented by the civilian network. TRICARE Prime enrollees, including all active duty members of the armed services, have priority for care at the MTFs. There is also a fee-for-service option called TRICARE Standard that offers a broader choice of civilian providers, and a preferred provider option called TRICARE Extra. These options offer generally similar benefits but differ considerably in the nature and amount of costs to beneficiaries. Pharmacy services are available at most MTFs for all TRICARE eligibles as well as for retirees on Medicare. MTF pharmacy services are free of charge but limited to the medications carried at each MTF. TRICARE is managed at multiple levels. 
The Office of the Assistant Secretary of Defense for Health Affairs sets TRICARE policy – which governs both MTF and civilian care – and establishes regulations in coordination with the Army, Navy, and Air Force. Responsibility for policy execution is delegated to the TRICARE Management Activity (TMA) but is shared with the military Surgeons General, who are responsible for implementing TRICARE policies within their respective services. TMA performs programwide support functions, such as managing TRICARE’s information technology and data systems, preparing the budget and managing the accounts. In addition, TMA selects, directs and pays managed care support contractors, who maintain the private provider network and perform many services assisting beneficiaries and supporting management. In each TRICARE region within the United States, MTF and contractor activities are coordinated by a lead agent, usually the commander of the region’s largest MTF. At the MTF level, MTF commanders report to the Surgeon General of their respective service, who allocates part of the service’s appropriated funds to each MTF. MTF officials have input into network size and composition but lack direct authority over these providers or the network, which the managed care support contractor manages. Medicare Medicare is a federally financed health insurance program for people aged 65 and over, some people with disabilities, and people with end-stage kidney disease. Eligible beneficiaries automatically are covered under Part A, which covers inpatient hospital, skilled nursing facility, and hospice care as well as home health care that follows a stay in a hospital or skilled nursing facility. They also can pay a monthly premium to join Part B, which covers physician and outpatient services as well as those home health services not covered under Part A. 
Traditional Medicare allows beneficiaries to choose any provider that accepts Medicare payment and requires beneficiaries to pay for part of their care as well as for any services not covered by Medicare. To help meet these costs, some beneficiaries purchase supplemental “Medigap” policies from private insurers. Beneficiaries can choose from up to 10 standard policies. The less expensive policies cover Medicare deductibles and coinsurance, while the more expensive policies offer broader coverage, including prescription drugs. The alternative to traditional Medicare, Medicare+Choice, offers beneficiaries the option of enrolling in managed care or other private health plans. All Medicare+Choice plans cover basic Medicare benefits, and many also cover additional benefits such as prescription drugs. Typically, these plans have limited cost sharing but restrict members’ choice of providers and may require an additional monthly premium. The Subvention Demonstration Under the Medicare subvention demonstration, DOD established and operated Medicare+Choice managed care plans, called TRICARE Senior Prime, at six sites. Senior Prime added benefits and network providers to those already in place for TRICARE Prime, where needed to meet Medicare managed care requirements. Enrollment in Senior Prime was open to military retirees enrolled in Medicare Part A and Part B who resided within the plan’s service area. Open enrollment for those already in Medicare was capped at a number that DOD selected—roughly 28,000 for the demonstration as a whole. In addition, retirees enrolled in TRICARE Prime could “age in” to Senior Prime upon reaching age 65, even if the cap had been reached. Beneficiaries enrolled in the program paid the Medicare Part B premium but no additional premium to DOD. Senior Prime enrollees received the same priority for care at the MTFs as younger retirees enrolled in TRICARE Prime. 
Care at the MTFs was free, but beneficiaries had to pay any applicable cost-sharing amounts for care in the civilian network (for example, $12 for an office visit). The demonstration authorized Medicare to pay DOD for Medicare-covered health care services provided to retirees at an MTF or through private providers under contract to DOD. HCFA calculated capitation rates for the demonstration areas, discounted from what Medicare would pay private managed care plans in the same area. However, to receive payment, DOD had to spend at least as much of its own funds in serving this dual-eligible population as it had in the recent past. The six demonstration sites are each in a different TRICARE region and include 10 MTFs that vary in size and types of services offered. (See table 1.) The five medical centers offer a wide range of inpatient services and specialty care as well as primary care. These centers also have graduate medical education (GME) training programs. The community hospitals are smaller, have more limited capabilities, and can accommodate fewer Senior Prime enrollees. At these smaller facilities, much of the specialty care is provided by the civilian network. At the Dover site, the MTF is a clinic that offers only outpatient services, thus requiring all inpatient and specialty care to be obtained at another MTF or purchased from the civilian network. For all the sites, Senior Prime’s share of total enrollment (TRICARE Prime plus Senior Prime) was relatively small—an average of about 9 percent of all enrollees toward the end of 2000. Before the demonstration, seniors at all demonstration sites received MTF care when space was available, but at some sites seniors had more regular or formalized access. At the medical centers, seniors had been a substantial part of the workload to support GME in specialty care. In particular, centers with GME programs in internal medicine had formed panels of retirees who regularly received primary care at the MTF. 
However, at most of the smaller sites, MTF care for seniors was more limited. Changes in Medicare and TRICARE Senior Prime began delivering services just as a period of major change started in both Medicare and DOD managed care. The Balanced Budget Act of 1997 (BBA) replaced Medicare’s previous managed care program with Medicare+Choice, which brought many administrative changes, including a new process for demonstrating compliance with Medicare managed care requirements. Medicare+Choice also established a more structured quality improvement program than had been in effect previously. Medicare+Choice officially began January 1, 1999, but the process of issuing regulations and guidance continued into 2000. During this same period, DOD initiated its Military Health System Optimization Plan, a wide-ranging effort to re-engineer many facets of military health care. Among the issues addressed in the plan are adjustments in primary care staffing, adoption of productivity benchmarks for primary care, and use of clinical best practices and other initiatives to improve health service delivery. More sweeping changes in retiree benefits and military health care are occurring in 2001 as a result of the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001. This legislation gave Medicare-eligible military retirees two major benefits: Pharmacy benefit—Beginning April 1, 2001, Medicare-eligible retirees from the uniformed services were given access to prescription drugs through TRICARE’s national mail order pharmacy and at retail pharmacies as well as through pharmacies at MTFs. TRICARE eligibility—Beginning October 1, 2001, retirees enrolled in Medicare Part B will also become eligible for TRICARE coverage—commonly termed TRICARE For Life. Under TRICARE For Life, military retirees who use traditional Medicare will be able to stay with their current private sector providers, while being relieved of most of their Medicare cost sharing. 
TRICARE will pay nearly all out-of-pocket costs for Medicare-covered services that these retirees previously had to pay. The law also authorizes continuation of Senior Prime—with Medicare paying DOD for seniors’ care, including care received in MTFs—for 1 additional year (through 2001), with the possibility of further extension and expansion. Any such continuation will require agreement between DOD and HHS as well as congressional approval. DOD is reviewing its options for providing military managed care to seniors under this legislation and is holding discussions with HCFA. DOD Successfully Operated Medicare Managed Care Plans, Enrolled Many Retirees The Senior Prime sites were successful in operating Medicare managed care plans. Sites expended substantial effort to meet Medicare+Choice requirements, and HCFA reviewers said that they generally did as well as other new health plans in meeting these requirements. The demonstration showed that there is a demand among retirees for DOD managed care with low out-of-pocket costs. Strong enrollment, and particularly the large number who joined the program when they turned 65, generated concerns about MTFs’ capacity for further growth. Enrollees were generally satisfied and relatively few left the program. Sites Were Successful in Operating Medicare Managed Care Plans Meeting Medicare+Choice requirements was a challenge for site officials, who had no prior experience doing so. However, HCFA reviewers found no major problems in the sites’ compliance and said that such deficiencies as they did note were generally typical of new plans. Senior Prime sites put considerable effort into complying with Medicare regulations. 
The sites became familiar with Medicare+Choice policies and procedures; added benefits and network providers as needed to meet Medicare requirements; obtained Medicare+Choice certification, which required developing policies and procedures consistent with Medicare+Choice requirements in such areas as enrollment and quality assurance; and implemented grievances and appeals, claims processing, and performance measurement procedures that differed from TRICARE Prime’s. Sites had to perform new tasks and functions to meet these requirements with no additional funds from DOD for Senior Prime administration. This was a particular challenge for smaller MTFs that had limited administrative resources. However, TMA performed some administrative tasks centrally. For example, TMA prepared informational materials for retirees. TMA also selected and paid for contractors to do the special studies on quality that Medicare+Choice requires as well as to report data on health status and HEDIS (the Health Plan Employer Data and Information Set) performance measures. Nonetheless, several sites observed that, even with this help, they faced additional work because they had to make medical records available to the contractors, which was time-consuming and, in some cases, disruptive to normal operations. By December 2000, HCFA had performed an initial review of each site and full monitoring reviews at three sites. The reviews examined each site’s compliance with Medicare+Choice regulations, including documentation and data submitted by the sites. HCFA staff and our review of HCFA reports indicated that no major compliance problems were identified. HCFA reviewers did identify deficiencies in administrative procedures that are common among new Medicare+Choice plans. For example, HCFA found instances of incomplete documentation and correspondence and failure to meet timelines for action on enrollment, grievances and appeals, and claims. 
However, HCFA said the deficiencies rarely had a direct impact on the services that beneficiaries received. Sites Attracted and Kept Medicare Enrollees Senior Prime attracted enrollees throughout the demonstration period. By December 2000, enrollment in Senior Prime was only about 1,600 short of the demonstrationwide cap of roughly 28,000 for open enrollment. Six of the 10 MTFs had waiting lists and two others were at 90 percent of their enrollment caps. The two MTFs that fell significantly short of the cap— Dover and Sheppard—were among the smallest and were located in nonmetropolitan areas. In addition, more than 6,500 younger retirees enrolled in Senior Prime when they turned age 65. Under demonstration rules, TRICARE Prime enrollees who had a primary care manager at a Senior Prime MTF could “age-in” to Senior Prime, and MTFs could not limit the number of such age-ins. In fact, the majority of those eligible to age-in did so. All but one MTF enrolled more age-ins than expected, and by December 2000 age-ins accounted for about one-fifth of overall Senior Prime enrollment. (See table 2.) This increased concern among MTF officials about MTFs’ capacity to accommodate future growth, especially at sites that had reached their enrollment caps early in the demonstration. TMA asked sites to examine their capacity in light of continuing growth because of age-ins, but decided against any major change in enrollment policy. However, it did, with HCFA’s concurrence, start requiring that age-ins live within the Senior Prime service area. While Senior Prime’s growing enrollment indicates that retirees found its benefits attractive, enrollees also appeared relatively satisfied with Senior Prime. Relatively few enrollees left the program. In addition, both DOD data and our survey of enrollees, as well as site officials’ observations, suggest that enrollees were generally satisfied with Senior Prime. 
Site officials said that features such as limited out-of-pocket costs and a substantial drug benefit made the program attractive compared to other Medicare+Choice plans. The demonstration also appealed to many retirees because it would give them better access to MTF care. However, the demonstration does not allow us to tell definitively which feature of Senior Prime—its low out-of-pocket costs or access to MTF care—was more important to enrollees. DOD Officials Indicated That, On Balance, Demonstration Had Positive Effects DOD officials said that providing coordinated care for limited numbers of retirees yielded benefits for both retirees and medical staff. In addition, given its small scale, Senior Prime had little adverse effect on younger TRICARE Prime enrollees. While noting that it took considerable effort to meet Medicare requirements, officials also said that working with HCFA had spurred improvements in DOD administrative and clinical practices. Site Officials Said Demonstration Benefited Seniors, Enhanced Readiness Skills, Had Little Adverse Effect on Other Beneficiaries Site officials reported that Senior Prime enrollees received coordinated care and a broader range of services in contrast to the episodic space- available care or the mix of military and private care that many had received prior to Senior Prime. In Senior Prime, enrollees were assigned to primary care managers who were responsible for their patients’ care in both the MTFs and civilian networks. Also, those with complex problems were given case managers who coordinated and helped arrange services. Senior Prime also augmented its network to provide services such as skilled nursing facility care that DOD did not provide to seniors under space-available care. Site officials said that providing a broad range of primary and specialty care to seniors also benefited MTF clinical staff. 
Providing a broader set of services to seniors exposed staff to a wider range of conditions than seen under space-available care for seniors or among younger patients. At smaller MTFs, Senior Prime offered clinicians more experience providing inpatient care. MTF providers also reported that they were more satisfied because they could be assured that follow-up and other services would be available when needed. Site officials also identified ways in which seniors’ care contributed specific skills that are important for medical readiness. For example, surgeons need practice in joint and vascular surgery, and intensive care teams need to learn how to work together under pressure. Seniors’ joint and circulatory problems and the conditions that put them in intensive care are not the same as would be experienced on the battlefield but, site officials explained, treating such conditions keeps staff familiar with relevant medical procedures. Experience with the elderly can also be directly relevant to peacekeeping and humanitarian missions, where staff may deal with chronically ill or older individuals. Officials at one medical center, however, noted that despite these benefits, they were seeing fewer seniors overall because they were providing more comprehensive services to Senior Prime enrollees and offering less space-available care to nonenrolled seniors. These officials noted that, in some specialties, this smaller pool of seniors did not provide as many of the complex cases that are important for readiness training. Site officials found little evidence that, at its current small scale, Senior Prime had affected TRICARE Prime enrollees’ satisfaction or access to care. Even where enrollment met the cap, Senior Prime remained a small portion of each MTF’s enrolled population. By late 2000, the demonstration accounted for 9 percent of the enrolled population, although it reached 16 percent at two MTFs. 
Through their routine monitoring, officials identified some decreases in satisfaction and access among younger TRICARE Prime enrollees, but attributed them largely to factors other than Senior Prime. These included a sudden increase in TRICARE Prime enrollment, changes in appointment systems, and decreases in available MTF services. Site officials had varying views about the extent to which Senior Prime affected nonenrolled retirees’ access to space-available care. Some said that space-available care had declined largely due to Senior Prime enrollment and health care use. (Many of those who enrolled in Senior Prime were previous users of space-available care.) However, other officials indicated that the decline would have occurred even in the absence of Senior Prime. Many officials emphasized that the growth in TRICARE Prime resulted in less capacity for space-available care. Other factors that predated Senior Prime, including staffing reductions, also limited space-available care. Involvement With Medicare Spurred Certain Improvements Despite the effort required to implement a Medicare+Choice managed care plan, DOD officials at every site readily acknowledged that working with HCFA was educational and spurred improvements. Requiring DOD to take a close look at its administrative and clinical procedures for a small population led to insights that could be applied more generally. For example, HCFA requirements and oversight highlighted the importance of accurately recording all care a patient receives and led to improvements in coding and patient records. Implementing requirements, such as Medicare+Choice appeals and grievance rules, suggested improvements for similar TRICARE processes, and several sites planned to implement parts of the Medicare procedures in TRICARE Prime. 
Similarly, the quality improvement studies undertaken for Medicare+Choice revealed opportunities for improving patient care that site officials said could be applied to the TRICARE Prime population. Working with HCFA also brought MTF officials into contact with private Medicare+Choice plans, practices, and data. Staff at two sites met regularly with private Medicare+Choice plan representatives and said that they found it useful to discuss Medicare+Choice issues with them and HCFA staff. Participation in Senior Prime also led sites to compare their performance with that of private plans. The Madigan and San Diego sites purchased data on private plans in their market area from a private firm, including benchmarks for utilization of services. The private plan data provided DOD with a basis for comparing performance as well as for understanding how patient care and data recording practices differ between the two sectors.

Demonstration Challenges Reflected, In Part, Larger DOD Managed Care Issues

Although some difficulties that DOD encountered in implementing Senior Prime reflected Medicare+Choice requirements or factors specific to the subvention demonstration, others highlighted underlying features of DOD managed care. These included maintaining sufficient staff, given military medical staff turnover and deployments; managing care that is delivered in two separate systems, the military system and the contractor-managed network; and working within the confines of a slow and cumbersome contracting process.

Sites Faced Challenges in Securing and Maintaining Adequate Medical Staff

Ensuring the availability of MTF and network providers and maintaining continuity of care are issues in TRICARE generally, but sites' experiences showed that these issues are more pressing when seniors are involved. This is because seniors typically have more health care needs than younger beneficiaries and use certain specialists and services more intensively. 
While Senior Prime enrollees generally had good access to care and sites managed to provide the full range of services, sites had difficulty in arranging some resources that were particularly critical for seniors. Maintaining adequate staff at MTFs is an ongoing challenge because of routine turnover, military deployments, and readiness training. In the demonstration, replacement staff needed due to routine turnover did not always arrive when they were needed, sometimes reporting months after the previous staff had left. Staff deployments and readiness training also led to gaps in provider availability, although this varied among MTFs. Some MTFs experienced mostly short-term deployments during the demonstration, while others contributed staff for assignments lasting several months. The resources deployed ranged from individual staff members, including specialists important for senior care, to an entire operating room team. Some of the larger MTFs were also responsible for filling positions at other MTFs that were short-staffed, increasing the pressure on staff resources at those sites. Sites took several steps to mitigate the effect of military staff absences on patient care. Some absences could be unpredictable, but sites often had advance notice and could plan to minimize interruptions. For short-term training absences, at least one MTF was able to adjust schedules so that not all members of a particular team were away at once. For temporary assignments, MTFs could sometimes send specialists rather than primary care providers, thereby minimizing the impact on primary care management. To cover for absent Senior Prime primary care managers, some MTFs used other primary care team members or specialists to fill in, some arranged for civilian providers to fill in temporarily, and one was able to arrange for a temporary replacement from another MTF. 
For specialty care, larger MTFs generally had more staff to cover short-term gaps, but smaller MTFs with few providers in a specialty had to rely more on network providers. Obtaining staff for the longer term was more problematic. First, DOD procedures for assigning staff to MTFs are not generally geared to making needed adjustments quickly. MTFs sometimes could not meet their authorized staffing levels because no one was available. Second, when MTFs tried to hire civilian personnel, their ability to do so was generally dependent on the local market, and several reported that recruiting civilians to fill certain positions was difficult. Although another option, TRICARE's "resource sharing" program, allows MTFs to use civilian staff provided by the managed care support contractors to deliver care within the MTFs, only a few MTFs were using resource sharing providers to treat Senior Prime enrollees. At the time of our visits, site officials did not share a common understanding of when resource sharing could be used within Senior Prime. Despite managed care support contractors' recruiting on an ongoing basis to ensure network adequacy, several sites had problems securing local providers for their network and had to send patients outside the network for care. Pulmonology, dermatology, and rheumatology were areas in which more than one site encountered problems. Also, site officials reported that some providers were reluctant to contract with Senior Prime. For example, some did not want to accept the contracted payment rate, which was lower than the out-of-network rate they could otherwise receive. Officials noted that network development was generally more difficult for TRICARE in rural areas, where the supply of specialty providers is limited. Rural sites were able to build networks that met most of their referral needs, although their networks sometimes had only one or two providers in certain specialties. 
Seniors who were enrolled at more rural MTFs at times had to travel significant distances to reach certain specialist providers. However, in some areas longer travel times were common. For example, an official from the Texoma area commented that beneficiaries are accustomed to traveling some distance for care, and that Senior Prime still met Medicare's standards regarding access to care for their communities.

Dual Delivery Systems Added to Sites' Difficulties

TRICARE in general has difficulty integrating MTF and network care, but sites' experience showed that this is a larger issue for seniors, who have more extensive needs than the TRICARE Prime population. From the start of the demonstration, sites' ability to integrate care at the MTF with care purchased through the network was limited. In particular, sites had to find ways to coordinate those services that the military health system has not traditionally provided to seniors as well as to resolve issues common to TRICARE of integrating MTF and network data. Most MTFs encountered problems in coordinating care to Senior Prime enrollees, especially when they were in a skilled nursing facility or nursing home. A central issue for the sites was the provision of case management services by nurses or social workers. Senior Prime required a shift in focus for case management, from managing primarily catastrophic cases in a younger population to coordinating chronic medical care for an older population. This included support in assisting families and patients in transitioning from the MTF to an institutional setting or to home. Particularly for older patients, case managers are often pivotal in coordinating care. Officials at five MTFs reported having added case managers for Senior Prime or changed the case manager's role. 
In addition to providing case management, some MTFs had another problem: coordinating information when an enrollee had two case managers, one at the MTF and one with the managed care support contractor responsible for the enrollee's network care. Sites also had to determine who would oversee the medical care of the Senior Prime patient while he or she was in a skilled nursing facility or a rehabilitation hospital: the MTF primary care physician who was responsible for the enrollee's care or a physician associated with the civilian institution. This issue was complicated by the fact that MTF physicians are typically not licensed to practice medicine in the state where the MTF is located. As a result, they cannot be given medical privileges at the local health care facilities. Most sites used the institutional staff or network physicians to see the admitted Senior Prime patients and relied on the managed care support contractor's case managers to communicate with the patient's MTF physician. For patients who were in skilled nursing facilities or rehabilitation facilities or receiving most of their care from network physicians, sites had to decide who would provide needed lab tests and routine appointments: the MTF or the network physicians. Some of the larger sites elected to return Senior Prime patients admitted at local institutional facilities to the MTF for lab tests ordered by the network physician or for routine clinic appointments. This practice could help ensure that medical information, such as the results of a lab test ordered by a network physician, was shared between the MTF physician and the network physician. However, transporting patients back and forth is not always feasible, cost-effective, or convenient for the patient, and one MTF reported it was considering other options. In managing patient care, MTF primary care physicians faced two additional difficulties in bridging the gap between network and military care. 
First, they needed to ensure that patients followed through on referrals, making and keeping their appointments with network providers. TRICARE appointment and referral procedures did not necessarily record this information, which required good communication with providers outside the MTF. Second, MTFs needed to ensure that clinical results of referrals were shared with the patient's primary care physician. One site, observing that the referral process needed improvement, established centers to coordinate referrals. The centers' staff created a database to track the status of referrals, so that they could inform primary care physicians when patients had not made or kept their referral appointments. The staff also monitored whether primary care physicians had received the clinical results.

Integration of data on MTF care with data on network care was a problem for overall management of Senior Prime as well as for physicians' management of their individual patients' care. Different data systems were involved: one for network care, maintained by the managed care support contractor, and one for MTF care, maintained by DOD. To obtain a comprehensive picture of care that individual patients or groups of patients (for example, all patients with diabetes) received, sites had to manipulate the data in the two systems extensively. This hindered the sites' obtaining such information routinely. Both the sites and DOD are undertaking initiatives to improve the integration of data from different sources. For example, DOD now maintains three separate pharmacy data systems: one for prescriptions filled at an MTF, one for prescriptions filled through DOD's national mail order pharmacy, and one for those filled at network pharmacies. DOD has begun implementing a pharmacy data transaction system, which will create an integrated record of all prescriptions received by TRICARE beneficiaries. 
In general, however, DOD has encountered persistent problems in its efforts to integrate other types of health care information (including data on network care, MTF inpatient care, and MTF outpatient care).

DOD Contracting Process Hindered Response to Necessary Changes

Modifying managed care support contracts in a timely way was a significant problem in the demonstration. Negotiating contract changes has been a longstanding problem for DOD in managing TRICARE. The problem was more acute for the demonstration because a significant number of additional contract modifications had to be negotiated specific to subvention. Shortly after Senior Prime's startup, HCFA began implementation of Medicare+Choice, which resulted in far-reaching changes in Medicare regulations. These changes, which were released over more than a year, required Medicare+Choice plans to implement new practices and procedures, generally within 90 days of receiving the changes. Many of these changes affected contractor-performed activities including enrollment, reporting, and network contracting. Senior Prime involved six "change orders" (modifications to the TRICARE managed care support contracts) to set up the demonstration and make it conform to the evolving Medicare+Choice rules. Handling these changes was cumbersome in several ways, detailed in figure 1, and highlighted how ill-suited the contract change order process was to making changes expeditiously. The problems encountered in Senior Prime were typical of TRICARE change orders generally, except for the delay in requesting proposals, which reflected the special circumstances of the demonstration. This system had several disadvantages. First, delays in the process meant that Medicare+Choice requirements went into effect before TMA could authorize contractors to implement them. 
In order to achieve the demonstration's timely compliance with Medicare+Choice requirements, lead agent staff wanted contractors to move forward without a formal change order, which they sometimes did, although TMA contracting officials cautioned against this practice. Second, lack of timely payment was a major concern at the managed care support corporate level. Third, the fact that contractors had already incurred actual costs may have put DOD at a disadvantage when negotiating change orders. Finally, TMA was authorizing changes without knowing what they would actually cost. Unsettled change orders could represent a significant future liability for the Defense Health Program if they are settled at higher amounts than DOD estimated. The backlog in processing contract changes and the practice of implementing changes before their costs were negotiated have long been problems for TRICARE. Efforts initiated in 1997 to remedy the problem were not successful. In July 2000, TMA began an effort to negotiate and pay for all of its outstanding change orders. This effort eliminated most of the backlog, but the $900 million cost of the settlements contributed to a shortfall in funding for the Defense Health Program for fiscal year 2001. TMA has instituted a new process that requires costs to be negotiated and settled before changes are implemented, but evidence of the effectiveness of this process is not yet available.

Concluding Observations

Senior Prime, while demonstrating that DOD can operate a Medicare+Choice plan, also illustrated the complexities of offering managed care within the military health system. Some lessons from the demonstration apply to military managed care generally. These include the difficulties of linking MTFs with network care and the importance of re-engineering the previous managed care support contract change order process. Other lessons apply specifically to military managed care for seniors. 
These include the importance of accurately estimating MTFs' capacity for enrolling seniors, especially given the potential for age-ins; the need to provide seniors with more complex care, including case management and post-acute care; and the value of contacts with HCFA and private Medicare+Choice plans. Much of Senior Prime's experience in providing care to seniors may be applicable to the new TRICARE For Life program. DOD officials may be able to draw on lessons learned from Senior Prime as they define the new program's options for seniors.

Agency Comments

DOD and HCFA provided written comments on a draft of this report, which are reprinted in appendixes I and II. Both agencies said the report contained an accurate description of implementation issues encountered in the demonstration. DOD noted that expanding Medicare subvention or making it permanent should be approached cautiously, with an understanding of cost and funding issues. We will address cost issues in future reports. The two agencies also provided suggestions for clarity and technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of Defense and the Administrator of HCFA. We will make copies available to others upon request. If you or your staffs have questions about this report, please contact me at (202) 512-7114. Key contributors to this assignment included Gail MacColl, Robin Burke, and Lisa Rogers.

Appendix I: Comments From the Department of Defense

Appendix II: Comments From the Health Care Financing Administration

Related GAO Products

Defense Health Care: Continued Management Focus Key to Settling TRICARE Change Orders Quickly (GAO-01-513, April 30, 2001).

Medicare Subvention Demonstration: Enrollment in DOD Pilot Reflects Retiree Experiences and Local Markets (GAO/HEHS-00-35, January 31, 2000). 
Medicare Subvention Demonstration: DOD Start-up Overcame Obstacles, Yields Lessons, and Raises Issues (GAO/GGD/HEHS-99-161, September 28, 1999).

Medicare Subvention Demonstration: DOD Data Limitations May Require Adjustments and Raise Broader Concerns (GAO/HEHS-99-39, May 28, 1999).

Defense Health Care: Actions Under Way to Address Many TRICARE Contract Change Order Problems (GAO/HEHS-97-141, July 14, 1997).
This interim report reviews the implementation of the Department of Defense (DOD) Medicare Subvention Demonstration. GAO found that the demonstration sites were successful in operating Medicare managed care plans. Officials put substantial effort into meeting Medicare managed care requirements and, according to Health Care Financing Administration reviewers, were generally as successful as other new Medicare managed care plans in this regard. Most sites reached the enrollment limits they had established for retirees already covered by Medicare. DOD officials indicated that the demonstration's effect was positive. Enrollees received a broader range of services from DOD than in the past, when they got care only when space was available in DOD facilities. Officials also noted that providing more comprehensive care to seniors helped sharpen the skills of military clinical staff, which contributed to their readiness for supporting combat or other military missions. Some challenges encountered in the demonstration reflect larger DOD managed care issues and may have implications for DOD managed care generally. Although access to care was generally good, the demonstration experienced some problems in maintaining adequate clinical staff.